Publications

  • Stewart, A. J., Haigh, M., & Kidd, E. (2009). An investigation into the online processing of counterfactual and indicative conditionals. Quarterly Journal of Experimental Psychology, 62(11), 2113-2125. doi:10.1080/17470210902973106.

    Abstract

    The ability to represent conditional information is central to human cognition. In two self-paced reading experiments we investigated how readers process counterfactual conditionals (e.g., If Darren had been athletic, he could probably have played on the rugby team) and indicative conditionals (e.g., If Darren is athletic, he probably plays on the rugby team). In Experiment 1 we focused on how readers process counterfactual conditional sentences. We found that processing of the antecedent of counterfactual conditionals was rapidly constrained by prior context (i.e., knowing whether Darren was or was not athletic). A reading-time penalty was observed for the critical region of text comprising the last word of the antecedent and the first word of the consequent when the information in the antecedent did not fit with prior context. In Experiment 2 we contrasted counterfactual conditionals with indicative conditionals. For counterfactual conditionals we found the same effect on the critical region as we found in Experiment 1. In contrast, however, we found no evidence that processing of the antecedent of indicative conditionals was constrained by prior context. For indicative conditionals (but not for counterfactual conditionals), the results we report are consistent with the suppositional account of conditionals. We propose that current theories of conditionals need to be able to account for online processing differences between indicative and counterfactual conditionals.
  • Stivers, T., Enfield, N. J., Brown, P., Englert, C., Hayashi, M., Heinemann, T., Hoymann, G., Rossano, F., De Ruiter, J. P., Yoon, K.-E., & Levinson, S. C. (2009). Universals and cultural variation in turn-taking in conversation. Proceedings of the National Academy of Sciences of the United States of America, 106(26), 10587-10592. doi:10.1073/pnas.0903616106.

    Abstract

    Informal verbal interaction is the core matrix for human social life. A mechanism for coordinating this basic mode of interaction is a system of turn-taking that regulates who is to speak and when. Yet relatively little is known about how this system varies across cultures. The anthropological literature reports significant cultural differences in the timing of turn-taking in ordinary conversation. We test these claims and show that in fact there are striking universals in the underlying pattern of response latency in conversation. Using a worldwide sample of 10 languages drawn from traditional indigenous communities to major world languages, we show that all of the languages tested provide clear evidence for a general avoidance of overlapping talk and a minimization of silence between conversational turns. In addition, all of the languages show the same factors explaining within-language variation in speed of response. We do, however, find differences across the languages in the average gap between turns, within a range of 250 ms from the cross-language mean. We believe that a natural sensitivity to these tempo differences leads to a subjective perception of dramatic or even fundamental differences as offered in ethnographic reports of conversational style. Our empirical evidence suggests robust human universals in this domain, where local variations are quantitative only, pointing to a single shared infrastructure for language use with likely ethological foundations.

    Additional information

    Stivers_2009_universals_suppl.pdf
  • Stivers, T., Chalfoun, A., & Rossi, G. (2024). To err is human but to persist is diabolical: Toward a theory of interactional policing. Frontiers in Sociology: Sociological Theory, 9: 1369776. doi:10.3389/fsoc.2024.1369776.

    Abstract

    Social interaction is organized around norms and preferences that guide our construction of actions and our interpretation of those of others, creating a reflexive moral order. Sociological theory suggests two possibilities for the type of moral order that underlies the policing of interactional norm and preference violations: a morality that focuses on the nature of violations themselves and a morality that focuses on the positioning of actors as they keep their conduct comprehensible, even when they depart from norms and preferences. We find that actors are more likely to reproach interactional violations for which an account is not provided by the transgressor, and that actors weakly reproach or let pass first offenses while more strongly policing violators who persist in bad behavior. Based on these findings, we outline a theory of interactional policing that rests not on the nature of the violation but rather on actors' moral positioning.
  • Sutcliffe, D. J., Dinasarapu, A. R., Visser, J. E., Den Hoed, J., Seifar, F., Joshi, P., Ceballos-Picot, I., Sardar, T., Hess, E. J., Sun, Y. V., Wen, Z., Zwick, M. E., & Jinnah, H. A. (2021). Induced pluripotent stem cells from subjects with Lesch-Nyhan disease. Scientific Reports, 11: 8523. doi:10.1038/s41598-021-87955-9.

    Abstract

    Lesch-Nyhan disease (LND) is an inherited disorder caused by pathogenic variants in the HPRT1 gene, which encodes the purine recycling enzyme hypoxanthine–guanine phosphoribosyltransferase (HGprt). We generated 6 induced pluripotent stem cell (iPSC) lines from 3 individuals with LND, along with 6 control lines from 3 normal individuals. All 12 lines had the characteristics of pluripotent stem cells, as assessed by immunostaining for pluripotency markers, expression of pluripotency genes, and differentiation into the 3 primary germ cell layers. Gene expression profiling with RNAseq demonstrated significant heterogeneity among the lines. Despite this heterogeneity, several anticipated abnormalities were readily detectable across all LND lines, including reduced HPRT1 mRNA. Several unexpected abnormalities were also consistently detectable across the LND lines, including decreases in FAR2P1 and increases in RNF39. Shotgun proteomics also demonstrated several expected abnormalities in the LND lines, such as absence of HGprt protein. The proteomics study also revealed several unexpected abnormalities across the LND lines, including increases in GNAO1 and decreases in NSE4A. There was a good but partial correlation between abnormalities revealed by the RNAseq and proteomics methods. Finally, functional studies demonstrated that LND lines had no HGprt enzyme activity and were resistant to the toxic pro-drug 6-thioguanine. Intracellular purines in the LND lines were normal, but they did not recycle hypoxanthine. These cells provide a novel resource to reveal insights into the relevance of heterogeneity among iPSC lines and applications for modeling LND.

    Additional information

    supplementary material
  • Tagliapietra, L., Fanari, R., De Candia, C., & Tabossi, P. (2009). Phonotactic regularities in the segmentation of spoken Italian. Quarterly Journal of Experimental Psychology, 62(2), 392-415. doi:10.1080/17470210801907379.

    Abstract

    Five word-spotting experiments explored the role of consonantal and vocalic phonotactic cues in the segmentation of spoken Italian. The first set of experiments tested listeners' sensitivity to phonotactic constraints cueing syllable boundaries. Participants were slower in spotting words in nonsense strings when target onsets were misaligned (e.g., lago in ri.blago) than when they were aligned (e.g., lago in rin.lago) with phonotactically determined syllabic boundaries. This effect held also for sequences that occur only word-medially (e.g., /tl/ in ri.tlago), and competition effects could not account for the disadvantage in the misaligned condition. Similarly, target detections were slower when their offsets were misaligned (e.g., città in cittàu.ba) than when they were aligned (e.g., città in città.oba) with a phonotactic syllabic boundary. The second set of experiments tested listeners' sensitivity to phonotactic cues, which specifically signal lexical (and not just syllable) boundaries. Results corroborate the role of syllabic information in speech segmentation and suggest that Italian listeners make little use of additional phonotactic information that specifically cues word boundaries.

  • Tagliapietra, L., Fanari, R., Collina, S., & Tabossi, P. (2009). Syllabic effects in Italian lexical access. Journal of Psycholinguistic Research, 38(6), 511-526. doi:10.1007/s10936-009-9116-4.

    Abstract

    Two cross-modal priming experiments tested whether lexical access is constrained by syllabic structure in Italian. Results extend the available Italian data on the processing of stressed syllables, showing that syllabic information restricts the set of candidates to those structurally consistent with the intended word (Experiment 1). Lexical access, however, takes place as soon as possible and is not delayed until the incoming input corresponds to the first syllable of the word. Moreover, the initially activated set includes candidates whose syllabic structure does not match the intended word (Experiment 2). The present data challenge the early hypothesis that in Romance languages syllables are the units for lexical access during spoken word recognition. The implications of the results for our understanding of the role of syllabic information in language processing are discussed.
  • Takashima, A., Carota, F., Schoots, V., Redmann, A., Jehee, J., & Indefrey, P. (2024). Tomatoes are red: The perception of achromatic objects elicits retrieval of associated color knowledge. Journal of Cognitive Neuroscience, 36(1), 24-45. doi:10.1162/jocn_a_02068.

    Abstract

    When preparing to name an object, semantic knowledge about the object and its attributes is activated, including perceptual properties. It is unclear, however, whether semantic attribute activation contributes to lexical access or is a consequence of activating a concept irrespective of whether that concept is to be named or not. In this study, we measured neural responses using fMRI while participants named objects that are typically green or red, presented in black line drawings. Furthermore, participants underwent two other tasks with the same objects, color naming and semantic judgment, to see if the activation pattern we observe during picture naming is (a) similar to that of a task that requires accessing the color attribute and (b) distinct from that of a task that requires accessing the concept but not its name or color. We used representational similarity analysis to detect brain areas that show similar patterns within the same color category, but show different patterns across the two color categories. In all three tasks, activation in the bilateral fusiform gyri (“Human V4”) correlated with a representational model encoding the red–green distinction weighted by the importance of the color feature for the different objects. This result suggests that when seeing objects whose color attribute is highly diagnostic, color knowledge about the objects is retrieved irrespective of whether the color or the object itself has to be named.
  • Tamaoka, K., Yu, S., Zhang, J., Otsuka, Y., Lim, H., Koizumi, M., & Verdonschot, R. G. (2024). Syntactic structures in motion: Investigating word order variations in verb-final (Korean) and verb-initial (Tongan) languages. Frontiers in Psychology, 15: 1360191. doi:10.3389/fpsyg.2024.1360191.

    Abstract

    This study explored sentence processing in two typologically distinct languages: Korean, a verb-final language, and Tongan, a verb-initial language. The first experiment revealed that in Korean, sentences arranged in the scrambled OSV (Object, Subject, Verb) order were processed more slowly than those in the canonical SOV order, highlighting a scrambling effect. It also found that sentences with subject topicalization in the SOV order were processed as swiftly as those in the canonical form, whereas sentences with object topicalization in the OSV order were processed with speeds and accuracy comparable to scrambled sentences. However, since topicalization and scrambling in Korean use the same OSV order, independently distinguishing the effects of topicalization is challenging. In contrast, Tongan allows for a clear separation of word orders for topicalization and scrambling, facilitating an independent evaluation of topicalization effects. The second experiment, employing a maze task, confirmed that Tongan’s canonical VSO order was processed more efficiently than the VOS scrambled order, thereby verifying a scrambling effect. The third experiment investigated the effects of both scrambling and topicalization in Tongan, finding that the canonical VSO order was processed most efficiently in terms of speed and accuracy, unlike the VOS scrambled and SVO topicalized orders. Notably, the OVS object-topicalized order was processed as efficiently as the VSO canonical order, while the SVO subject-topicalized order was slower than VSO but faster than VOS. By independently assessing the effects of topicalization apart from scrambling, this study demonstrates that both subject and object topicalization in Tongan facilitate sentence processing, contradicting the predictions based on movement-based anticipation.

    Additional information

    appendix 1-3
  • Tartaro, G., Takashima, A., & McQueen, J. M. (2021). Consolidation as a mechanism for word learning in sequential bilinguals. Bilingualism: Language and Cognition, 24(5), 864-878. doi:10.1017/S1366728921000286.

    Abstract

    First-language research suggests that new words, after initial episodic-memory encoding, are consolidated and hence become lexically integrated. We asked here if lexical consolidation, about word forms and meanings, occurs in a second language. Italian–English sequential bilinguals learned novel English-like words (e.g., apricon, taught to mean “stapler”). fMRI analyses failed to reveal a predicted shift, after consolidation time, from hippocampal to temporal neocortical activity. In a pause-detection task, responses to existing phonological competitors of learned words (e.g., apricot for apricon) were slowed down if the words had been learned two days earlier (i.e., after consolidation time) but not if they had been learned the same day. In a lexical-decision task, new words primed responses to semantically-related existing words (e.g., apricon-paper) whether the words were learned that day or two days earlier. Consolidation appears to support integration of words into the bilingual lexicon, possibly more rapidly for meanings than for forms.

    Additional information

    materials, procedure, results
  • Ten Oever, S., & Martin, A. E. (2021). An oscillating computational model can track pseudo-rhythmic speech by using linguistic predictions. eLife, 10: e68066. doi:10.7554/eLife.68066.

    Abstract

    Neuronal oscillations putatively track speech in order to optimize sensory processing. However, it is unclear how isochronous brain oscillations can track pseudo-rhythmic speech input. Here we propose that oscillations can track pseudo-rhythmic speech when considering that speech time is dependent on content-based predictions flowing from internal language models. We show that temporal dynamics of speech are dependent on the predictability of words in a sentence. A computational model including oscillations, feedback, and inhibition is able to track pseudo-rhythmic speech input. As the model processes, it generates temporal phase codes, which are a candidate mechanism for carrying information forward in time. The model is optimally sensitive to the natural temporal speech dynamics and can explain empirical data on temporal speech illusions. Our results suggest that speech tracking does not have to rely only on the acoustics but could also exploit ongoing interactions between oscillations and constraints flowing from internal language models.
  • Ten Oever, S., Sack, A. T., Oehrn, C. R., & Axmacher, N. (2021). An engram of intentionally forgotten information. Nature Communications, 12: 6443. doi:10.1038/s41467-021-26713-x.

    Abstract

    Successful forgetting of unwanted memories is crucial for goal-directed behavior and mental wellbeing. While memory retention strengthens memory traces, it is unclear what happens to memory traces of events that are actively forgotten. Using intracranial EEG recordings from lateral temporal cortex, we find that memory traces for actively forgotten information are partially preserved and exhibit unique neural signatures. Memory traces of successfully remembered items show stronger encoding-retrieval similarity in gamma frequency patterns. By contrast, encoding-retrieval similarity of item-specific memory traces of actively forgotten items depend on activity at alpha/beta frequencies commonly associated with functional inhibition. Additional analyses revealed selective modification of item-specific patterns of connectivity and top-down information flow from dorsolateral prefrontal cortex to lateral temporal cortex in memory traces of intentionally forgotten items. These results suggest that intentional forgetting relies more on inhibitory top-down connections than intentional remembering, resulting in inhibitory memory traces with unique neural signatures and representational formats.

    Additional information

    supplementary figures
  • Ten Oever, S., & Martin, A. E. (2024). Interdependence of “what” and “when” in the brain. Journal of Cognitive Neuroscience, 36(1), 167-186. doi:10.1162/jocn_a_02067.

    Abstract

    From a brain's-eye-view, when a stimulus occurs and what it is are interrelated aspects of interpreting the perceptual world. Yet in practice, the putative perceptual inferences about sensory content and timing are often dichotomized and not investigated as an integrated process. We here argue that neural temporal dynamics can influence what is perceived, and in turn, stimulus content can influence the time at which perception is achieved. This computational principle results from the highly interdependent relationship of what and when in the environment. Both brain processes and perceptual events display strong temporal variability that is not always modeled; we argue that understanding—and, minimally, modeling—this temporal variability is key for theories of how the brain generates unified and consistent neural representations and that we ignore temporal variability in our analysis practice at the peril of both data interpretation and theory-building. Here, we review what and when interactions in the brain, demonstrate via simulations how temporal variability can result in misguided interpretations and conclusions, and outline how to integrate and synthesize what and when in theories and models of brain computation.
  • Ten Oever, S., Titone, L., te Rietmolen, N., & Martin, A. E. (2024). Phase-dependent word perception emerges from region-specific sensitivity to the statistics of language. Proceedings of the National Academy of Sciences of the United States of America, 121(3): e2320489121. doi:10.1073/pnas.2320489121.

    Abstract

    Neural oscillations reflect fluctuations in excitability, which biases the percept of ambiguous sensory input. Why this bias occurs is still not fully understood. We hypothesized that neural populations representing likely events are more sensitive, and thereby become active on earlier oscillatory phases, when the ensemble itself is less excitable. Perception of ambiguous input presented during less-excitable phases should therefore be biased toward frequent or predictable stimuli that have lower activation thresholds. Here, we show such a frequency bias in spoken word recognition using psychophysics, magnetoencephalography (MEG), and computational modelling. With MEG, we found a double dissociation, where the phase of oscillations in the superior temporal gyrus and medial temporal gyrus biased word-identification behavior based on phoneme and lexical frequencies, respectively. This finding was reproduced in a computational model. These results demonstrate that oscillations provide a temporal ordering of neural activity based on the sensitivity of separable neural populations.
  • Ter Avest, I. J., & Mulder, K. (2009). The acquisition of gender agreement in the determiner phrase by bilingual children. Toegepaste Taalwetenschap in Artikelen, 81(1), 133-142.
  • Ter Bekke, M., Drijvers, L., & Holler, J. (2024). Hand gestures have predictive potential during conversation: An investigation of the timing of gestures in relation to speech. Cognitive Science, 48(1): e13407. doi:10.1111/cogs.13407.

    Abstract

    During face-to-face conversation, transitions between speaker turns are incredibly fast. These fast turn exchanges seem to involve next speakers predicting upcoming semantic information, such that next turn planning can begin before a current turn is complete. Given that face-to-face conversation also involves the use of communicative bodily signals, an important question is how bodily signals such as co-speech hand gestures play into these processes of prediction and fast responding. In this corpus study, we found that hand gestures that depict or refer to semantic information started before the corresponding information in speech, which held both for the onset of the gesture as a whole, as well as the onset of the stroke (the most meaningful part of the gesture). This early timing potentially allows listeners to use the gestural information to predict the corresponding semantic information to be conveyed in speech. Moreover, we provided further evidence that questions with gestures got faster responses than questions without gestures. However, we found no evidence for the idea that how much a gesture precedes its lexical affiliate (i.e., its predictive potential) relates to how fast responses were given. The findings presented here highlight the importance of the temporal relation between speech and gesture and help to illuminate the potential mechanisms underpinning multimodal language processing during face-to-face conversation.
  • Ter Bekke, M., Drijvers, L., & Holler, J. (2024). Gestures speed up responses to questions. Language, Cognition and Neuroscience, 39(4), 423-430. doi:10.1080/23273798.2024.2314021.

    Abstract

    Most language use occurs in face-to-face conversation, which involves rapid turn-taking. Seeing communicative bodily signals in addition to hearing speech may facilitate such fast responding. We tested whether this holds for co-speech hand gestures by investigating whether these gestures speed up button press responses to questions. Sixty native speakers of Dutch viewed videos in which an actress asked yes/no-questions, either with or without a corresponding iconic hand gesture. Participants answered the questions as quickly and accurately as possible via button press. Gestures did not impact response accuracy, but crucially, gestures sped up responses, suggesting that response planning may be finished earlier when gestures are seen. How much gestures sped up responses was not related to their timing in the question or their timing with respect to the corresponding information in speech. Overall, these results are in line with the idea that multimodality may facilitate fast responding during face-to-face conversation.
  • Ter Bekke, M., Levinson, S. C., Van Otterdijk, L., Kühn, M., & Holler, J. (2024). Visual bodily signals and conversational context benefit the anticipation of turn ends. Cognition, 248: 105806. doi:10.1016/j.cognition.2024.105806.

    Abstract

    The typical pattern of alternating turns in conversation seems trivial at first sight. But a closer look quickly reveals the cognitive challenges involved, with much of it resulting from the fast-paced nature of conversation. One core ingredient to turn coordination is the anticipation of upcoming turn ends so as to be able to ready oneself for providing the next contribution. Across two experiments, we investigated two variables inherent to face-to-face conversation, the presence of visual bodily signals and preceding discourse context, in terms of their contribution to turn end anticipation. In a reaction time paradigm, participants anticipated conversational turn ends better when seeing the speaker and their visual bodily signals than when they did not, especially so for longer turns. Likewise, participants were better able to anticipate turn ends when they had access to the preceding discourse context than when they did not, and especially so for longer turns. Critically, the two variables did not interact, showing that visual bodily signals retain their influence even in the context of preceding discourse. In a pre-registered follow-up experiment, we manipulated the visibility of the speaker's head, eyes and upper body (i.e. torso + arms). Participants were better able to anticipate turn ends when the speaker's upper body was visible, suggesting a role for manual gestures in turn end anticipation. Together, these findings show that seeing the speaker during conversation may critically facilitate turn coordination in interaction.
  • Terporten, R., Huizeling, E., Heidlmayr, K., Hagoort, P., & Kösem, A. (2024). The interaction of context constraints and predictive validity during sentence reading. Journal of Cognitive Neuroscience, 36(2), 225-238. doi:10.1162/jocn_a_02082.

    Abstract

    Words are not processed in isolation; instead, they are commonly embedded in phrases and sentences. The sentential context influences the perception and processing of a word. However, how this is achieved by brain processes and whether predictive mechanisms underlie this process remain a debated topic. Here, we employed an experimental paradigm in which we orthogonalized sentence context constraints and predictive validity, which was defined as the ratio of congruent to incongruent sentence endings within the experiment. While recording electroencephalography, participants read sentences with three levels of sentential context constraints (high, medium, and low). Participants were also separated into two groups that differed in their ratio of valid congruent to incongruent target words that could be predicted from the sentential context. For both groups, we investigated modulations of alpha power before, and N400 amplitude modulations after target word onset. The results reveal that the N400 amplitude gradually decreased with higher context constraints and cloze probability. In contrast, alpha power was not significantly affected by context constraint. Neither the N400 nor alpha power were significantly affected by changes in predictive validity.
  • Terrill, A. (2009). [Review of Felix K. Ameka, Alan Dench, and Nicholas Evans (eds). 2006. Catching language: The standing challenge of grammar writing]. Language Documentation & Conservation, 3(1), 132-137. Retrieved from http://hdl.handle.net/10125/4432.
  • Tesink, C. M. J. Y., Buitelaar, J. K., Petersson, K. M., Van der Gaag, R. J., Kan, C. C., Tendolkar, I., & Hagoort, P. (2009). Neural correlates of pragmatic language comprehension in autism spectrum disorders. Brain, 132, 1941-1952. doi:10.1093/brain/awp103.

    Abstract

    Difficulties with pragmatic aspects of communication are universal across individuals with autism spectrum disorders (ASDs). Here we focused on an aspect of pragmatic language comprehension that is relevant to social interaction in daily life: the integration of speaker characteristics inferred from the voice with the content of a message. Using functional magnetic resonance imaging (fMRI), we examined the neural correlates of the integration of voice-based inferences about the speaker’s age, gender or social background, and sentence content in adults with ASD and matched control participants. Relative to the control group, the ASD group showed increased activation in right inferior frontal gyrus (RIFG; Brodmann area 47) for speaker-incongruent sentences compared to speaker-congruent sentences. Given that both groups performed behaviourally at a similar level on a debriefing interview outside the scanner, the increased activation in RIFG for the ASD group was interpreted as being compensatory in nature. It presumably reflects spill-over processing from the language-dominant left hemisphere due to higher task demands faced by the participants with ASD when integrating speaker characteristics and the content of a spoken sentence. Furthermore, only the control group showed decreased activation for speaker-incongruent relative to speaker-congruent sentences in right ventral medial prefrontal cortex (vMPFC; Brodmann area 10), including right anterior cingulate cortex (ACC; Brodmann area 24/32). Since vMPFC is involved in self-referential processing related to judgments and inferences about self and others, the absence of such a modulation in vMPFC activation in the ASD group possibly points to atypical default self-referential mental activity in ASD. Our results show that in ASD compensatory mechanisms are necessary in implicit, low-level inferential processes in spoken language understanding. This indicates that pragmatic language problems in ASD are not restricted to high-level inferential processes, but encompass the most basic aspects of pragmatic language processing.
  • Tesink, C. M. J. Y., Petersson, K. M., Van Berkum, J. J. A., Van den Brink, D., Buitelaar, J. K., & Hagoort, P. (2009). Unification of speaker and meaning in language comprehension: An fMRI study. Journal of Cognitive Neuroscience, 21, 2085-2099. doi:10.1162/jocn.2008.21161.

    Abstract

    When interpreting a message, a listener takes into account several sources of linguistic and extralinguistic information. Here we focused on one particular form of extralinguistic information, certain speaker characteristics as conveyed by the voice. Using functional magnetic resonance imaging, we examined the neural structures involved in the unification of sentence meaning and voice-based inferences about the speaker's age, sex, or social background. We found enhanced activation in the inferior frontal gyrus bilaterally (BA 45/47) during listening to sentences whose meaning was incongruent with inferred speaker characteristics. Furthermore, our results showed an overlap in brain regions involved in unification of speaker-related information and those used for the unification of semantic and world knowledge information [inferior frontal gyrus bilaterally (BA 45/47) and left middle temporal gyrus (BA 21)]. These findings provide evidence for a shared neural unification system for linguistic and extralinguistic sources of information and extend the existing knowledge about the role of inferior frontal cortex as a crucial component for unification during language comprehension.
  • Theakston, A., & Rowland, C. F. (2009). Introduction to Special Issue: Cognitive approaches to language acquisition. Cognitive Linguistics, 20(3), 477-480. doi:10.1515/COGL.2009.021.
  • Theakston, A. L., & Rowland, C. F. (2009). The acquisition of auxiliary syntax: A longitudinal elicitation study. Part 1: Auxiliary BE. Journal of Speech, Language, and Hearing Research, 52, 1449-1470. doi:10.1044/1092-4388(2009/08-0037).

    Abstract

    Purpose: The question of how and when English-speaking children acquire auxiliaries is the subject of extensive debate. Some researchers posit the existence of innately given Universal Grammar principles to guide acquisition, although some aspects of the auxiliary system must be learned from the input. Others suggest that auxiliaries can be learned without Universal Grammar, citing evidence of piecemeal learning in their support. This study represents a unique attempt to trace the development of auxiliary syntax by using a longitudinal elicitation methodology. Method: Twelve English-speaking children participated in 3 tasks designed to elicit auxiliary BE in declaratives and yes/no and wh-questions. They completed each task 6 times in total between the ages of 2;10 (years;months) and 3;6. Results: The children’s levels of correct use of 2 forms of BE (is, are) differed according to auxiliary form and sentence structure, and these relations changed over development. An analysis of the children’s errors also revealed complex interactions between these factors. Conclusion: These data are problematic for existing accounts of auxiliary acquisition and highlight the need for researchers working within both generativist and constructivist frameworks to develop more detailed theories of acquisition that directly predict the pattern of acquisition observed.
  • Thothathiri, M., Basnakova, J., Lewis, A. G., & Briand, J. M. (2024). Fractionating difficulty during sentence comprehension using functional neuroimaging. Cerebral Cortex, 34(2): bhae032. doi:10.1093/cercor/bhae032.

    Abstract

    Sentence comprehension is highly practiced and largely automatic, but this belies the complexity of the underlying processes. We used functional neuroimaging to investigate garden-path sentences that cause difficulty during comprehension, in order to unpack the different processes used to support sentence interpretation. By investigating garden-path and other types of sentences within the same individuals, we functionally profiled different regions within the temporal and frontal cortices in the left hemisphere. The results revealed that different aspects of comprehension difficulty are handled by left posterior temporal, left anterior temporal, ventral left frontal, and dorsal left frontal cortices. The functional profiles of these regions likely lie along a spectrum of specificity to generality, including language-specific processing of linguistic representations, more general conflict resolution processes operating over linguistic representations, and processes for handling difficulty in general. These findings suggest that difficulty is not unitary and that there is a role for a variety of linguistic and non-linguistic processes in supporting comprehension.

    Additional information

    supplementary information
  • Tilmatine, M., Hubers, F., & Hintz, F. (2021). Exploring individual differences in recognizing idiomatic expressions in context. Journal of Cognition, 4(1): 37. doi:10.5334/joc.183.

    Abstract

    Written language comprehension requires readers to integrate incoming information with stored mental knowledge to construct meaning. Literally plausible idiomatic expressions can activate both figurative and literal interpretations, which convey different meanings. Previous research has shown that contexts biasing the figurative or literal interpretation of an idiom can facilitate its processing. Moreover, there is evidence that processing of idiomatic expressions is subject to individual differences in linguistic knowledge and cognitive-linguistic skills. It is therefore conceivable that individuals vary in the extent to which they experience context-induced facilitation in processing idiomatic expressions. To explore the interplay between reader-related variables and contextual facilitation, we conducted a self-paced reading experiment. We recruited participants who had recently completed a battery of 33 behavioural tests measuring individual differences in linguistic knowledge, general cognitive skills and linguistic processing skills. In the present experiment, a subset of these participants read idiomatic expressions that were either presented in isolation or preceded by a figuratively or literally biasing context. We conducted analyses on the reading times of idiom-final nouns and the word thereafter (spill-over region) across the three conditions, including participants’ scores from the individual differences battery. Our results showed no main effect of the preceding context, but substantial variation in contextual facilitation between readers. We observed main effects of participants’ word reading ability and non-verbal intelligence on reading times as well as an interaction between condition and linguistic knowledge. We encourage interested researchers to exploit the present dataset for follow-up studies on individual differences in idiom processing.
  • Tilot, A. K., Khramtsova, E. A., Liang, D., Grasby, K. L., Jahanshad, N., Painter, J., Colodro-Conde, L., Bralten, J., Hibar, D. P., Lind, P. A., Liu, S., Brotman, S. M., Thompson, P. M., Medland, S. E., Macciardi, F., Stranger, B. E., Davis, L. K., Fisher, S. E., & Stein, J. L. (2021). The evolutionary history of common genetic variants influencing human cortical surface area. Cerebral Cortex, 31(4), 1873-1887. doi:10.1093/cercor/bhaa327.

    Abstract

    Structural brain changes along the lineage leading to modern Homo sapiens contributed to our distinctive cognitive and social abilities. However, the evolutionarily relevant molecular variants impacting key aspects of neuroanatomy are largely unknown. Here, we integrate evolutionary annotations of the genome at diverse timescales with common variant associations from large-scale neuroimaging genetic screens. We find that alleles with evidence of recent positive polygenic selection over the past 2000–3000 years are associated with increased surface area (SA) of the entire cortex, as well as specific regions, including those involved in spoken language and visual processing. Therefore, polygenic selective pressures impact the structure of specific cortical areas even over relatively recent timescales. Moreover, common sequence variation within human gained enhancers active in the prenatal cortex is associated with postnatal global SA. We show that such variation modulates the function of a regulatory element of the developmentally relevant transcription factor HEY2 in human neural progenitor cells and is associated with structural changes in the inferior frontal cortex. These results indicate that non-coding genomic regions active during prenatal cortical development are involved in the evolution of human brain structure and identify novel regulatory elements and genes impacting modern human brain structure.
  • Timpson, N. J., Tobias, J. H., Richards, J. B., Soranzo, N., Duncan, E. L., Sims, A.-M., Whittaker, P., Kumanduri, V., Zhai, G., Glaser, B., Eisman, J., Jones, G., Nicholson, G., Prince, R., Seeman, E., Spector, T. D., Brown, M. A., Peltonen, L., Smith, G. D., Deloukas, P., & Evans, D. M. (2009). Common variants in the region around Osterix are associated with bone mineral density and growth in childhood. Human Molecular Genetics, 18(8), 1510-1517. doi:10.1093/hmg/ddp052.

    Abstract

    Peak bone mass achieved in adolescence is a determinant of bone mass in later life. In order to identify genetic variants affecting bone mineral density (BMD), we performed a genome-wide association study of BMD and related traits in 1518 children from the Avon Longitudinal Study of Parents and Children (ALSPAC). We compared results with a scan of 134 adults with high or low hip BMD. We identified associations with BMD in an area of chromosome 12 containing the Osterix (SP7) locus, a transcription factor responsible for regulating osteoblast differentiation (ALSPAC: P = 5.8 × 10⁻⁴; Australia: P = 3.7 × 10⁻⁴). This region has previously shown evidence of association with adult hip and lumbar spine BMD in an Icelandic population, as well as nominal association in a UK population. A meta-analysis of these existing studies revealed strong association between SNPs in the Osterix region and adult lumbar spine BMD (P = 9.9 × 10⁻¹¹). In light of these findings, we genotyped a further 3692 individuals from ALSPAC who had whole body BMD and confirmed the association in children as well (P = 5.4 × 10⁻⁵). Moreover, all SNPs were related to height in ALSPAC children, but not weight or body mass index, and when height was included as a covariate in the regression equation, the association with total body BMD was attenuated. We conclude that genetic variants in the region of Osterix are associated with BMD in children and adults probably through primary effects on growth.
  • Titus, A., Dijkstra, T., Willems, R. M., & Peeters, D. (2024). Beyond the tried and true: How virtual reality, dialog setups, and a focus on multimodality can take bilingual language production research forward. Neuropsychologia, 193: 108764. doi:10.1016/j.neuropsychologia.2023.108764.

    Abstract

    Bilinguals possess the ability of expressing themselves in more than one language, and typically do so in contextually rich and dynamic settings. Theories and models have indeed long considered context factors to affect bilingual language production in many ways. However, most experimental studies in this domain have failed to fully incorporate linguistic, social, or physical context aspects, let alone combine them in the same study. Indeed, most experimental psycholinguistic research has taken place in isolated and constrained lab settings with carefully selected words or sentences, rather than under rich and naturalistic conditions. We argue that the most influential experimental paradigms in the psycholinguistic study of bilingual language production fall short of capturing the effects of context on language processing and control presupposed by prominent models. This paper therefore aims to enrich the methodological basis for investigating context aspects in current experimental paradigms and thereby move the field of bilingual language production research forward theoretically. After considering extensions of existing paradigms proposed to address context effects, we present three far-ranging innovative proposals, focusing on virtual reality, dialog situations, and multimodality in the context of bilingual language production.
  • Todorova, L. (2021). Language bias in visually driven decisions: Computational neurophysiological mechanisms. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Torreira, F., & Ernestus, M. (2009). Probabilistic effects on French [t] duration. In Proceedings of the 10th Annual Conference of the International Speech Communication Association (Interspeech 2009) (pp. 448-451). Causal Productions Pty Ltd.

    Abstract

    The present study shows that [t] consonants are affected by probabilistic factors in a syllable-timed language such as French, in spontaneous as well as in journalistic speech. Study 1 showed a word bigram frequency effect in spontaneous French, but its exact nature depended on the corpus on which the probabilistic measures were based. Study 2 investigated journalistic speech and showed an effect of the joint frequency of the test word and its following word. We discuss the possibility that these probabilistic effects are due to the speaker’s planning of upcoming words, and to the speaker’s adaptation to the listener’s needs.
  • Torres Borda, L., Jadoul, Y., Rasilo, H., Salazar-Casals, A., & Ravignani, A. (2021). Vocal plasticity in harbour seal pups. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 376(1840): 20200456. doi:10.1098/rstb.2020.0456.

    Abstract

    Vocal plasticity can occur in response to environmental and biological factors, including conspecifics' vocalizations and noise. Pinnipeds are one of the few mammalian groups capable of vocal learning, and are therefore relevant to understanding the evolution of vocal plasticity in humans and other animals. Here, we investigate the vocal plasticity of harbour seals (Phoca vitulina), a species with vocal learning abilities observed in adulthood but not puppyhood. To evaluate early mammalian vocal development, we tested 1–3-week-old seal pups. We tailored noise playbacks to this species and age to induce seal pups to shift their fundamental frequency (f0), rather than adapt call amplitude or temporal characteristics. We exposed individual pups to low- and high-intensity bandpass-filtered noise, which spanned—and masked—their typical range of f0; simultaneously, we recorded pups' spontaneous calls. Unlike most mammals, pups modified their vocalizations by lowering their f0 in response to increased noise. This modulation was precise and adapted to the particular experimental manipulation of the noise condition. In addition, higher levels of noise induced less dispersion around the mean f0, suggesting that pups may have actively focused their phonatory efforts to target lower frequencies. Noise did not seem to affect call amplitude. However, one seal showed two characteristics of the Lombard effect known for human speech in noise: a significant increase in call amplitude and a flattening of spectral tilt. Our relatively low noise levels may have favoured f0 modulation while inhibiting amplitude adjustments. This lowering of f0 is unusual, as most animals commonly display no such f0 shift. Our data represent a relatively rare case in mammalian neonates, and have implications for the evolution of vocal plasticity and vocal learning across species, including humans.

    Additional information

    supplement
  • Tourtouri, E. N., Delogu, F., & Crocker, M. W. (2021). Rational Redundancy in Referring Expressions: Evidence from Event-related Potentials. Cognitive Science, 45(12): e13071. doi:10.1111/cogs.13071.

    Abstract

    In referential communication, Grice's Maxim of Quantity is thought to imply that utterances conveying unnecessary information should incur comprehension difficulties. There is, however, considerable evidence that speakers frequently encode redundant information in their referring expressions, raising the question as to whether such overspecifications hinder listeners' processing. Evidence from previous work is inconclusive, and mostly comes from offline studies. In this article, we present two event-related potential (ERP) experiments, investigating the real-time comprehension of referring expressions that contain redundant adjectives in complex visual contexts. Our findings provide support for both Gricean and bounded-rational accounts. We argue that these seemingly incompatible results can be reconciled if common ground is taken into account. We propose a bounded-rational account of overspecification, according to which even redundant words can be beneficial to comprehension to the extent that they facilitate the reduction of listeners' uncertainty regarding the target referent.
  • Trilsbeek, P., & Van Uytvanck, D. (2009). Regional archives and community portals. IASA Journal, 32, 69-73.
  • Trompenaars, T., Kaluge, T. A., Sarabi, R., & De Swart, P. (2021). Cognitive animacy and its relation to linguistic animacy: Evidence from Japanese and Persian. Language Sciences, 86: 101399. doi:10.1016/j.langsci.2021.101399.

    Abstract

    Animacy, commonly defined as the distinction between living and non-living entities, is a useful notion in cognitive science and linguistics employed to describe and predict variation in psychological and linguistic behaviour. In the (psycho)linguistics literature we find linguistic animacy dichotomies which are (implicitly) assumed to correspond to biological dichotomies. We argue this is problematic, as it leaves us without a cognitively grounded, universal description for non-prototypical cases. We show that ‘animacy’ in language can be better understood as universally emerging from a gradual, cognitive property by collecting animacy ratings for a great range of nouns from Japanese and Persian. We used these cognitive ratings in turn to predict linguistic variation in these languages traditionally explained through dichotomous distinctions. We show that whilst (speakers of) languages may subtly differ in their conceptualisation of animacy, universality may be found in the process of mapping conceptual animacy to linguistic variation.
  • Trompenaars, T. (2021). Bringing stories to life: Animacy in narrative and processing. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Trujillo, J. P., & Holler, J. (2021). The kinematics of social action: Visual signals provide cues for what interlocutors do in conversation. Brain Sciences, 11: 996. doi:10.3390/brainsci11080996.

    Abstract

    During natural conversation, people must quickly understand the meaning of what the other speaker is saying. This concerns not just the semantic content of an utterance, but also the social action (i.e., what the utterance is doing—requesting information, offering, evaluating, checking mutual understanding, etc.) that the utterance is performing. The multimodal nature of human language raises the question of whether visual signals may contribute to the rapid processing of such social actions. However, while previous research has shown that how we move reveals the intentions underlying instrumental actions, we do not know whether the intentions underlying fine-grained social actions in conversation are also revealed in our bodily movements. Using a corpus of dyadic conversations combined with manual annotation and motion tracking, we analyzed the kinematics of the torso, head, and hands during the asking of questions. Manual annotation categorized these questions into six more fine-grained social action types (i.e., request for information, other-initiated repair, understanding check, stance or sentiment, self-directed, active participation). We demonstrate, for the first time, that the kinematics of the torso, head and hands differ between some of these different social action categories based on a 900 ms time window that captures movements starting slightly prior to or within 600 ms after utterance onset. These results provide novel insights into the extent to which our intentions shape the way that we move, and provide new avenues for understanding how this phenomenon may facilitate the fast communication of meaning in conversational interaction, social action, and conversation.

    Additional information

    analyses scripts
  • Trujillo, J. P., Ozyurek, A., Holler, J., & Drijvers, L. (2021). Speakers exhibit a multimodal Lombard effect in noise. Scientific Reports, 11: 16721. doi:10.1038/s41598-021-95791-0.

    Abstract

    In everyday conversation, we are often challenged with communicating in non-ideal settings, such as in noise. Increased speech intensity and larger mouth movements are used to overcome noise in constrained settings (the Lombard effect). How we adapt to noise in face-to-face interaction, the natural environment of human language use, where manual gestures are ubiquitous, is currently unknown. We asked Dutch adults to wear headphones with varying levels of multi-talker babble while attempting to communicate action verbs to one another. Using quantitative motion capture and acoustic analyses, we found that (1) noise is associated with increased speech intensity and enhanced gesture kinematics and mouth movements, and (2) acoustic modulation only occurs when gestures are not present, while kinematic modulation occurs regardless of co-occurring speech. Thus, in face-to-face encounters the Lombard effect is not constrained to speech but is a multimodal phenomenon where the visual channel carries most of the communicative burden.

    Additional information

    supplementary material
  • Trujillo, J. P., Ozyurek, A., Kan, C. C., Sheftel-Simanova, I., & Bekkering, H. (2021). Differences in the production and perception of communicative kinematics in autism. Autism Research, 14(12), 2640-2653. doi:10.1002/aur.2611.

    Abstract

    In human communication, social intentions and meaning are often revealed in the way we move. In this study, we investigate the flexibility of human communication in terms of kinematic modulation in a clinical population, namely, autistic individuals. The aim of this study was twofold: to assess (a) whether communicatively relevant kinematic features of gestures differ between autistic and neurotypical individuals, and (b) if autistic individuals use communicative kinematic modulation to support gesture recognition. We tested autistic and neurotypical individuals on a silent gesture production task and a gesture comprehension task. We measured movement during the gesture production task using a Kinect motion tracking device in order to determine if autistic individuals differed from neurotypical individuals in their gesture kinematics. For the gesture comprehension task, we assessed whether autistic individuals used communicatively relevant kinematic cues to support recognition. This was done by using stick-light figures as stimuli and testing for a correlation between the kinematics of these videos and recognition performance. We found that (a) silent gestures produced by autistic and neurotypical individuals differ in communicatively relevant kinematic features, such as the number of meaningful holds between movements, and (b) while autistic individuals are overall unimpaired at recognizing gestures, they processed repetition and complexity, measured as the amount of submovements perceived, differently than neurotypicals do. These findings highlight how subtle aspects of neurotypical behavior can be experienced differently by autistic individuals. They further demonstrate the relationship between movement kinematics and social interaction in high-functioning autistic individuals.

    Additional information

    supporting information
  • Trujillo, J. P., Levinson, S. C., & Holler, J. (2021). Visual information in computer-mediated interaction matters: Investigating the association between the availability of gesture and turn transition timing in conversation. In M. Kurosu (Ed.), Human-Computer Interaction. Design and User Experience Case Studies. HCII 2021 (pp. 643-657). Cham: Springer. doi:10.1007/978-3-030-78468-3_44.

    Abstract

    Natural human interaction involves the fast-paced exchange of speaker turns. Crucially, if a next speaker waited to plan their turn until the current speaker was finished, language production models would predict much longer turn transition times than what we observe. Next speakers must therefore prepare their turn in parallel with listening. Visual signals likely play a role in this process, for example by helping the next speaker to process the ongoing utterance and thus prepare an appropriately timed response.

    To understand how visual signals contribute to the timing of turn-taking, and to move beyond the mostly qualitative studies of gesture in conversation, we examined unconstrained, computer-mediated conversations between 20 pairs of participants while systematically manipulating speaker visibility. Using motion tracking and manual gesture annotation, we assessed 1) how visibility affected the timing of turn transitions, and 2) whether use of co-speech gestures and 3) the communicative kinematic features of these gestures were associated with changes in turn transition timing.

    We found that 1) decreased visibility was associated with less tightly timed turn transitions, and 2) the presence of gestures was associated with more tightly timed turn transitions across visibility conditions. Finally, 3) structural and salient kinematics contributed to gesture’s facilitatory effect on turn transition times.

    Our findings suggest that speaker visibility—and especially the presence and kinematic form of gestures—during conversation contributes to the temporal coordination of conversational turns in computer-mediated settings. Furthermore, our study demonstrates that it is possible to use naturalistic conversation and still obtain controlled results.
  • Trujillo, J. P. (2024). Motion-tracking technology for the study of gesture. In A. Cienki (Ed.), The Cambridge Handbook of Gesture Studies. Cambridge: Cambridge University Press.
  • Trujillo, J. P., & Holler, J. (2024). Conversational facial signals combine into compositional meanings that change the interpretation of speaker intentions. Scientific Reports, 14: 2286. doi:10.1038/s41598-024-52589-0.

    Abstract

    Human language is extremely versatile, combining a limited set of signals in an unlimited number of ways. However, it is unknown whether conversational visual signals feed into the composite utterances with which speakers communicate their intentions. We assessed whether different combinations of visual signals lead to different intent interpretations of the same spoken utterance. Participants viewed a virtual avatar uttering spoken questions while producing single visual signals (i.e., head turn, head tilt, eyebrow raise) or combinations of these signals. After each video, participants classified the communicative intention behind the question. We found that composite utterances combining several visual signals conveyed different meaning compared to utterances accompanied by the single visual signals. However, responses to combinations of signals were more similar to the responses to related, rather than unrelated, individual signals, indicating a consistent influence of the individual visual signals on the whole. This study therefore provides first evidence for compositional, non-additive (i.e., Gestalt-like) perception of multimodal language.

    Additional information

    41598_2024_52589_MOESM1_ESM.docx
  • Trujillo, J. P., & Holler, J. (2024). Information distribution patterns in naturalistic dialogue differ across languages. Psychonomic Bulletin & Review, 31, 1723-1734. doi:10.3758/s13423-024-02452-0.

    Abstract

    The natural ecology of language is conversation, with individuals taking turns speaking to communicate in a back-and-forth fashion. Language in this context involves strings of words that a listener must process while simultaneously planning their own next utterance. It would thus be highly advantageous if language users distributed information within an utterance in a way that may facilitate this processing–planning dynamic. While some studies have investigated how information is distributed at the level of single words or clauses, or in written language, little is known about how information is distributed within spoken utterances produced during naturalistic conversation. It also is not known how information distribution patterns of spoken utterances may differ across languages. We used a set of matched corpora (CallHome) containing 898 telephone conversations conducted in six different languages (Arabic, English, German, Japanese, Mandarin, and Spanish), analyzing more than 58,000 utterances, to assess whether there is evidence of distinct patterns of information distributions at the utterance level, and whether these patterns are similar or differed across the languages. We found that English, Spanish, and Mandarin typically show a back-loaded distribution, with higher information (i.e., surprisal) in the last half of utterances compared with the first half, while Arabic, German, and Japanese showed front-loaded distributions, with higher information in the first half compared with the last half. Additional analyses suggest that these patterns may be related to word order and rate of noun and verb usage. We additionally found that back-loaded languages have longer turn transition times (i.e., time between speaker turns).

    Additional information

    Data availability
  • Tsoukala, C., Frank, S. L., Van Den Bosch, A., Valdés Kroff, J., & Broersma, M. (2021). Modeling the auxiliary phrase asymmetry in code-switched Spanish–English. Bilingualism: Language and Cognition, 24(2), 271-280. doi:10.1017/S1366728920000449.

    Abstract

    Spanish–English bilinguals rarely code-switch in the perfect structure between the Spanish auxiliary haber (“to have”) and the participle (e.g., “Ella ha voted”; “She has voted”). However, they are somewhat likely to switch in the progressive structure between the Spanish auxiliary estar (“to be”) and the participle (“Ella está voting”; “She is voting”). This phenomenon is known as the “auxiliary phrase asymmetry”. One hypothesis as to why this occurs is that estar has more semantic weight as it also functions as an independent verb, whereas haber is almost exclusively used as an auxiliary verb. To test this hypothesis, we employed a connectionist model that produces spontaneous code-switches. Through simulation experiments, we showed that i) the asymmetry emerges in the model and that ii) the asymmetry disappears when using haber also as a main verb, which adds semantic weight. Therefore, the lack of semantic weight of haber may indeed cause the asymmetry.
  • Tsoukala, C., Broersma, M., Van den Bosch, A., & Frank, S. L. (2021). Simulating code-switching using a neural network model of bilingual sentence production. Computational Brain & Behavior, 4, 87-100. doi:10.1007/s42113-020-00088-6.

    Abstract

    Code-switching is the alternation from one language to the other during bilingual speech. We present a novel method of researching this phenomenon using computational cognitive modeling. We trained a neural network of bilingual sentence production to simulate early balanced Spanish–English bilinguals, late speakers of English who have Spanish as a dominant native language, and late speakers of Spanish who have English as a dominant native language. The model produced code-switches even though it was not exposed to code-switched input. The simulations predicted how code-switching patterns differ between early balanced and late non-balanced bilinguals; the balanced bilingual simulation code-switches considerably more frequently, which is in line with what has been observed in human speech production. Additionally, we compared the patterns produced by the simulations with two corpora of spontaneous bilingual speech and identified noticeable commonalities and differences. To our knowledge, this is the first computational cognitive model simulating the code-switched production of non-balanced bilinguals and comparing the simulated production of balanced and non-balanced bilinguals with that of human bilinguals.

    Additional information

    dual-path model
  • Tsoukala, C. (2021). Bilingual sentence production and code-switching: Neural network simulations. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Tyler, M., & Cutler, A. (2009). Cross-language differences in cue use for speech segmentation. Journal of the Acoustical Society of America, 126, 367-376. doi:10.1121/1.3129127.

    Abstract

    Two artificial-language learning experiments directly compared English, French, and Dutch listeners’ use of suprasegmental cues for continuous-speech segmentation. In both experiments, listeners heard unbroken sequences of consonant-vowel syllables, composed of recurring three- and four-syllable “words.” These words were demarcated by (a) no cue other than transitional probabilities induced by their recurrence, (b) a consistent left-edge cue, or (c) a consistent right-edge cue. Experiment 1 examined a vowel lengthening cue. All three listener groups benefited from this cue in right-edge position; none benefited from it in left-edge position. Experiment 2 examined a pitch-movement cue. English listeners used this cue in left-edge position, French listeners used it in right-edge position, and Dutch listeners used it in both positions. These findings are interpreted as evidence of both language-universal and language-specific effects. Final lengthening is a language-universal effect expressing a more general (non-linguistic) mechanism. Pitch movement expresses prominence, which has characteristically different placements across languages: typically at right edges in French, but at left edges in English and Dutch. Finally, stress realization in English versus Dutch encourages greater attention to suprasegmental variation by Dutch than by English listeners, allowing Dutch listeners to benefit from an informative pitch-movement cue even in an uncharacteristic position.
  • Uddén, J., Araújo, S., Forkstam, C., Ingvar, M., Hagoort, P., & Petersson, K. M. (2009). A matter of time: Implicit acquisition of recursive sequence structures. In N. Taatgen, & H. Van Rijn (Eds.), Proceedings of the Thirty-First Annual Conference of the Cognitive Science Society (pp. 2444-2449).

    Abstract

    A dominant hypothesis in empirical research on the evolution of language is the following: the fundamental difference between animal and human communication systems is captured by the distinction between regular and more complex non-regular grammars. Studies reporting successful artificial grammar learning of nested recursive structures and imaging studies of the same have methodological shortcomings since they typically allow explicit problem solving strategies and this has been shown to account for the learning effect in subsequent behavioral studies. The present study overcomes these shortcomings by using subtle violations of agreement structure in a preference classification task. In contrast to the studies conducted so far, we use an implicit learning paradigm, allowing the time needed for both abstraction processes and consolidation to take place. Our results demonstrate robust implicit learning of recursively embedded structures (context-free grammar) and recursive structures with cross-dependencies (context-sensitive grammar) in an artificial grammar learning task spanning 9 days.

    Keywords: Implicit artificial grammar learning; centre embedded; cross-dependency; implicit learning; context-sensitive grammar; context-free grammar; regular grammar; non-regular grammar
  • Ullman, M. T., Bulut, T., & Walenski, M. (2024). Hijacking limitations of working memory load to test for composition in language. Cognition, 251: 105875. doi:10.1016/j.cognition.2024.105875.

    Abstract

    Although language depends on storage and composition, just what is stored or (de)composed remains unclear. We leveraged working memory load limitations to test for composition, hypothesizing that decomposed forms should particularly tax working memory. We focused on a well-studied paradigm, English inflectional morphology. We predicted that (compositional) regulars should be harder to maintain in working memory than (non-compositional) irregulars, using a 3-back production task. Frequency, phonology, orthography, and other potentially confounding factors were controlled for. Compared to irregulars, regulars and their accompanying −s/−ing-affixed filler items yielded more errors. Underscoring the decomposition of only regulars, regulars yielded more bare-stem (e.g., walk) and stem affixation errors (walks/walking) than irregulars, whereas irregulars yielded more past-tense-form affixation errors (broughts/tolded). In line with previous evidence that regulars can be stored under certain conditions, the regular-irregular difference held specifically for phonologically consistent (not inconsistent) regulars, in particular for both low and high frequency consistent regulars in males, but only for low frequency consistent regulars in females. Sensitivity analyses suggested the findings were robust. The study further elucidates the computation of inflected forms, and introduces a simple diagnostic for linguistic composition.

    Additional information

    Data availability
  • Uluşahin, O., Bosker, H. R., McQueen, J. M., & Meyer, A. S. (2024). Knowledge of a talker’s f0 affects subsequent perception of voiceless fricatives. In Y. Chen, A. Chen, & A. Arvaniti (Eds.), Proceedings of Speech Prosody 2024 (pp. 432-436).

    Abstract

    The human brain deals with the infinite variability of speech through multiple mechanisms. Some of them rely solely on information in the speech input (i.e., signal-driven) whereas some rely on linguistic or real-world knowledge (i.e., knowledge-driven). Many signal-driven perceptual processes rely on the enhancement of acoustic differences between incoming speech sounds, producing contrastive adjustments. For instance, when an ambiguous voiceless fricative is preceded by a high fundamental frequency (f0) sentence, the fricative is perceived as having a lower spectral center of gravity (CoG). However, it is not clear whether knowledge of a talker’s typical f0 can lead to similar contrastive effects. This study investigated a possible talker f0 effect on fricative CoG perception. In the exposure phase, two groups of participants (N=16 each) heard the same talker at high or low f0 for 20 minutes. Later, in the test phase, participants rated fixed-f0 /?ɔk/ tokens as being /sɔk/ (i.e., high CoG) or /ʃɔk/ (i.e., low CoG), where /?/ represents a fricative from a 5-step /s/-/ʃ/ continuum. Surprisingly, the data revealed the opposite of our contrastive hypothesis, whereby hearing high f0 instead biased perception towards high CoG. Thus, we demonstrated that talker f0 information affects fricative CoG perception.
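The spectral center of gravity (CoG) used here is the power-weighted mean frequency of a sound's spectrum. A minimal sketch with made-up spectra (illustrative values only, not the study's stimuli):

```python
def spectral_cog(freqs_hz, powers):
    """Power-weighted mean frequency: sum(f * p) / sum(p)."""
    return sum(f * p for f, p in zip(freqs_hz, powers)) / sum(powers)

# Toy spectra: /s/ concentrates energy higher in frequency than /ʃ/,
# so an /s/-like fricative has the higher CoG.
s_like = spectral_cog([2000, 4000, 6000, 8000], [0.1, 0.2, 0.4, 0.3])
sh_like = spectral_cog([2000, 4000, 6000, 8000], [0.4, 0.3, 0.2, 0.1])
print(s_like > sh_like)  # True
```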
  • Vágvölgyi, R., Bergström, K., Bulajić, A., Klatte, M., Fernandes, T., Grosche, M., Huettig, F., Rüsseler, J., & Lachmann, T. (2021). Functional illiteracy and developmental dyslexia: Looking for common roots. A systematic review. Journal of Cultural Cognitive Science, 5, 159-179. doi:10.1007/s41809-021-00074-9.

    Abstract

    A considerable proportion of the population in more economically developed countries is functionally illiterate (i.e., low literate). Despite some years of schooling and basic reading skills, these individuals cannot properly read and write and, as a consequence, have problems understanding even short texts. An often-discussed approach (Greenberg et al., 1997) assumes weak phonological processing skills coupled with untreated developmental dyslexia as possible causes of functional illiteracy. Although there is some data suggesting commonalities between low literacy and developmental dyslexia, it is still not clear whether these reflect shared consequences (i.e., cognitive and behavioral profile) or shared causes. The present systematic review aims at exploring the similarities and differences identified in empirical studies investigating both functionally illiterate and developmentally dyslexic samples. Nine electronic databases were searched in order to identify all quantitative studies published in English or German. Although a broad search strategy and few limitations were applied, only 5 studies from the resulting 9269 references were identified as adequate. The results point to the lack of studies directly comparing functionally illiterate with developmentally dyslexic samples. Moreover, a large variance was identified between the studies in how they approached the concept of functional illiteracy, particularly when it came to critical categories such as the applied definition, terminology, criteria for inclusion in the sample, research focus, and outcome measures. The available data highlight the need for more direct comparisons in order to understand to what extent functional illiteracy and dyslexia share common characteristics.

    Additional information

    supplementary materials
  • Vainio, M., Suni, A., Raitio, T., Nurminen, J., Järvikivi, J., & Alku, P. (2009). New method for delexicalization and its application to prosodic tagging for text-to-speech synthesis. In Proceedings of the 10th Annual Conference of the International Speech Communication Association (Interspeech 2009) (pp. 1703-1706).

    Abstract

    This paper describes a new flexible delexicalization method based on a glottal-excited parametric speech synthesis scheme. The system utilizes inverse-filtered glottal flow and all-pole modelling of the vocal tract. The method makes it possible to retain and manipulate all relevant prosodic features of any kind of speech. Most importantly, these features include voice quality, which has not been properly modeled in earlier delexicalization methods. The functionality of the new method was tested in a prosodic tagging experiment aimed at providing word prominence data for a text-to-speech synthesis system. The experiment confirmed the usefulness of the method and further corroborated earlier evidence that linguistic factors influence the perception of prosodic prominence.
  • Van Berkum, J. J. A., Holleman, B., Nieuwland, M. S., Otten, M., & Murre, J. (2009). Right or wrong? The brain's fast response to morally objectionable statements. Psychological Science, 20, 1092 -1099. doi:10.1111/j.1467-9280.2009.02411.x.

    Abstract

    How does the brain respond to statements that clash with a person's value system? We recorded event-related brain potentials while respondents from contrasting political-ethical backgrounds completed an attitude survey on drugs, medical ethics, social conduct, and other issues. Our results show that value-based disagreement is unlocked by language extremely rapidly, within 200 to 250 ms after the first word that indicates a clash with the reader's value system (e.g., "I think euthanasia is an acceptable/unacceptable…"). Furthermore, strong disagreement rapidly influences the ongoing analysis of meaning, which indicates that even very early processes in language comprehension are sensitive to a person's value system. Our results testify to rapid reciprocal links between neural systems for language and for valuation.

    Additional information

    Critical survey statements (in Dutch)
  • Van Berkum, J. J. A. (2009). The neuropragmatics of 'simple' utterance comprehension: An ERP review. In U. Sauerland, & K. Yatsushiro (Eds.), Semantics and pragmatics: From experiment to theory (pp. 276-316). Basingstoke: Palgrave Macmillan.

    Abstract

    In this chapter, I review my EEG research on comprehending sentences in context from a pragmatics-oriented perspective. The review is organized around four questions: (1) When and how do extra-sentential factors such as the prior text, identity of the speaker, or value system of the comprehender affect the incremental sentence interpretation processes indexed by the so-called N400 component of the ERP? (2) When and how do people identify the referents for expressions such as “he” or “the review”, and how do referential processes interact with sense and syntax? (3) How directly pragmatic are the interpretation-relevant ERP effects reported here? (4) Do readers and listeners anticipate upcoming information? One important claim developed in the chapter is that the well-known N400 component, although often associated with ‘semantic integration’, only indirectly reflects the sense-making involved in structure-sensitive dynamic composition of the type studied in semantics and pragmatics. According to the multiple-cause intensified retrieval (MIR) account -- essentially an extension of the memory retrieval account proposed by Kutas and colleagues -- the amplitude of the word-elicited N400 reflects the computational resources used in retrieving the relatively invariant coded meaning stored in semantic long-term memory for, and made available by, the word at hand. Such retrieval becomes more resource-intensive when the coded meanings cued by this word do not match with expectations raised by the relevant interpretive context, but also when certain other relevance signals, such as strong affective connotation or a marked delivery, indicate the need for deeper processing. 
The most important consequence of this account is that pragmatic modulations of the N400 come about not because the N400 at hand directly reflects a rich compositional-semantic and/or Gricean analysis to make sense of the word’s coded meaning in this particular context, but simply because the semantic and pragmatic implications of the preceding words have already been computed, and now define a less or more helpful interpretive background within which to retrieve coded meaning for the critical word.
  • Van Bergen, G., & Hogeweg, L. (2021). Managing interpersonal discourse expectations: a comparative analysis of contrastive discourse particles in Dutch. Linguistics, 59(2), 333-360. doi:10.1515/ling-2021-0020.

    Abstract

    In this article we investigate how speakers manage discourse expectations in dialogue by comparing the meaning and use of three Dutch discourse particles, i.e. wel, toch and eigenlijk, which all express a contrast between their host utterance and a discourse-based expectation. The core meanings of toch, wel and eigenlijk are formally distinguished on the basis of two intersubjective parameters: (i) whether the particle marks alignment or misalignment between speaker and addressee discourse beliefs, and (ii) whether the particle requires an assessment of the addressee’s representation of mutual discourse beliefs. By means of a quantitative corpus study, we investigate to what extent the intersubjective meaning distinctions between wel, toch and eigenlijk are reflected in statistical usage patterns across different social situations. Results suggest that wel, toch and eigenlijk are lexicalizations of distinct generalized politeness strategies when expressing contrast in social interaction. Our findings call for an interdisciplinary approach to discourse particles in order to enhance our understanding of their functions in language.
  • Van Heukelum, S., Tulva, K., Geers, F. E., van Dulm, S., Ruisch, I. H., Mill, J., Viana, J. F., Beckmann, C. F., Buitelaar, J. K., Poelmans, G., Glennon, J. C., Vogt, B. A., Havenith, M. N., & França, A. S. (2021). A central role for anterior cingulate cortex in the control of pathological aggression. Current Biology, 31, 2321-2333.e5. doi:10.1016/j.cub.2021.03.062.

    Abstract

    Controlling aggression is a crucial skill in social species like rodents and humans and has been associated with anterior cingulate cortex (ACC). Here, we directly link the failed regulation of aggression in BALB/cJ mice to ACC hypofunction. We first show that ACC in BALB/cJ mice is structurally degraded: neuron density is decreased, with pervasive neuron death and reactive astroglia. Gene-set enrichment analysis suggested that this process is driven by neuronal degeneration, which then triggers toxic astrogliosis. cFos expression across ACC indicated functional consequences: during aggressive encounters, ACC was engaged in control mice, but not BALB/cJ mice. Chemogenetically activating ACC during aggressive encounters drastically suppressed pathological aggression but left species-typical aggression intact. The network effects of our chemogenetic perturbation suggest that this behavioral rescue is mediated by suppression of amygdala and hypothalamus and activation of mediodorsal thalamus. Together, these findings highlight the central role of ACC in curbing pathological aggression.
  • Ip, H. F., Van der Laan, C. M., Krapohl, E. M. L., Brikell, I., Sánchez-Mora, C., Nolte, I. M., St Pourcain, B., Bolhuis, K., Palviainen, T., Zafarmand, H., Colodro-Conde, L., Gordon, S., Zayats, T., Aliev, F., Jiang, C., Wang, C. A., Saunders, G., Karhunen, V., Hammerschlag, A. R., Adkins, D. E., Border, R., Peterson, R. E., Prinz, J. A., Thiering, E., Seppälä, I., Vilor-Tejedor, N., Ahluwalia, T. S., Day, F. R., Hottenga, J.-J., Allegrini, A. G., Rimfeld, K., Chen, Q., Lu, Y., Martin, J., Soler Artigas, M., Rovira, P., Bosch, R., Español, G., Ramos Quiroga, J. A., Neumann, A., Ensink, J., Grasby, K., Morosoli, J. J., Tong, X., Marrington, S., Middeldorp, C., Scott, J. G., Vinkhuyzen, A., Shabalin, A. A., Corley, R., Evans, L. M., Sugden, K., Alemany, S., Sass, L., Vinding, R., Ruth, K., Tyrrell, J., Davies, G. E., Ehli, E. A., Hagenbeek, F. A., De Zeeuw, E., Van Beijsterveldt, T. C., Larsson, H., Snieder, H., Verhulst, F. C., Amin, N., Whipp, A. M., Korhonen, T., Vuoksimaa, E., Rose, R. J., Uitterlinden, A. G., Heath, A. C., Madden, P., Haavik, J., Harris, J. R., Helgeland, Ø., Johansson, S., Knudsen, G. P. S., Njolstad, P. R., Lu, Q., Rodriguez, A., Henders, A. K., Mamun, A., Najman, J. M., Brown, S., Hopfer, C., Krauter, K., Reynolds, C., Smolen, A., Stallings, M., Wadsworth, S., Wall, T. L., Silberg, J. L., Miller, A., Keltikangas-Järvinen, L., Hakulinen, C., Pulkki-Råback, L., Havdahl, A., Magnus, P., Raitakari, O. T., Perry, J. R. B., Llop, S., Lopez-Espinosa, M.-J., Bønnelykke, K., Bisgaard, H., Sunyer, J., Lehtimäki, T., Arseneault, L., Standl, M., Heinrich, J., Boden, J., Pearson, J., Horwood, L. J., Kennedy, M., Poulton, R., Eaves, L. J., Maes, H. H., Hewitt, J., Copeland, W. E., Costello, E. J., Williams, G. M., Wray, N., Järvelin, M.-R., McGue, M., Iacono, W., Caspi, A., Moffitt, T. E., Whitehouse, A., Pennell, C. E., Klump, K. L., Burt, S. A., Dick, D. M., Reichborn-Kjennerud, T., Martin, N. G., Medland, S. E., Vrijkotte, T., Kaprio, J., Tiemeier, H., Davey Smith, G., Hartman, C. A., Oldehinkel, A. J., Casas, M., Ribasés, M., Lichtenstein, P., Lundström, S., Plomin, R., Bartels, M., Nivard, M. G., & Boomsma, D. I. (2021). Genetic association study of childhood aggression across raters, instruments, and age. Translational Psychiatry, 11: 413. doi:10.1038/s41398-021-01480-x.
  • Van Dijk, C. N. (2021). Cross-linguistic influence during real-time sentence processing in bilingual children and adults. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • van der Burght, C. L., Friederici, A. D., Goucha, T., & Hartwigsen, G. (2021). Pitch accents create dissociable syntactic and semantic expectations during sentence processing. Cognition, 212: 104702. doi:10.1016/j.cognition.2021.104702.

    Abstract

    The language system uses syntactic, semantic, as well as prosodic cues to efficiently guide auditory sentence comprehension. Prosodic cues, such as pitch accents, can build expectations about upcoming sentence elements. This study investigates to what extent syntactic and semantic expectations generated by pitch accents can be dissociated and if so, which cues take precedence when contradictory information is present. We used sentences in which one out of two nominal constituents was placed in contrastive focus with a third one. All noun phrases carried overt syntactic information (case-marking of the determiner) and semantic information (typicality of the thematic role of the noun). Two experiments (a sentence comprehension and a sentence completion task) show that focus, marked by pitch accents, established expectations in both syntactic and semantic domains. However, only the syntactic expectations, when violated, were strong enough to interfere with sentence comprehension. Furthermore, when contradictory cues occurred in the same sentence, the local syntactic cue (case-marking) took precedence over the semantic cue (thematic role), and overwrote previous information cued by prosody. The findings indicate that during auditory sentence comprehension the processing system integrates different sources of information for argument role assignment, yet primarily relies on syntactic information.
  • van der Burght, C. L. (2021). The central contribution of prosody to sentence processing: Evidence from behavioural and neuroimaging studies. PhD Thesis, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig.
  • Van Paridon, J., Ostarek, M., Arunkumar, M., & Huettig, F. (2021). Does neuronal recycling result in destructive competition? The influence of learning to read on the recognition of faces. Psychological Science, 32, 459-465. doi:10.1177/0956797620971652.

    Abstract

    Written language, a human cultural invention, is far too recent for dedicated neural infrastructure to have evolved in its service. Culturally newly acquired skills (e.g. reading) thus ‘recycle’ evolutionarily older circuits that originally evolved for different, but similar functions (e.g. visual object recognition). The destructive competition hypothesis predicts that this neuronal recycling has detrimental behavioral effects on the cognitive functions a cortical network originally evolved for. In a study with 97 literate, low-literate, and illiterate participants from the same socioeconomic background, we find that even after adjusting for cognitive ability and test-taking familiarity, learning to read is associated with an increase, rather than a decrease, in object recognition abilities. These results are incompatible with the claim that neuronal recycling results in destructive competition and consistent with the possibility that learning to read instead fine-tunes general object recognition mechanisms, a hypothesis that needs further neuroscientific investigation.

    Additional information

    supplemental material
  • Van Paridon, J. (2021). Speaking while listening: Language processing in speech shadowing and translation. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Van Leeuwen, T. M., Wilsson, L., Norrman, H. N., Dingemanse, M., Bölte, S., & Neufeld, J. (2021). Perceptual processing links autism and synesthesia: A co-twin control study. Cortex, 145, 236-249. doi:10.1016/j.cortex.2021.09.016.
  • Van Valin Jr., R. D. (2009). Case in role and reference grammar. In A. Malchukov, & A. Spencer (Eds.), The Oxford handbook of case (pp. 102-120). Oxford University Press.
  • Van Berkum, J. J. A. (2009). Does the N400 directly reflect compositional sense-making? Psychophysiology, Special Issue: Society for Psychophysiological Research Abstracts for the Forty-Ninth Annual Meeting, 46(Suppl. 1), s2.

    Abstract

    A not uncommon assumption in psycholinguistics is that the N400 directly indexes high-level semantic integration, the compositional, word-driven construction of sentence- and discourse-level meaning in some language-relevant unification space. The various discourse- and speaker-dependent modulations of the N400 uncovered by us and others are often taken to support this 'compositional integration' position. In my talk, I will argue that these N400 modulations are probably better interpreted as only indirectly reflecting compositional sense-making. The account that I will advance for these N400 effects is a variant of the classic Kutas and Federmeier (2002, TICS) memory retrieval account in which context effects on the word-elicited N400 are taken to reflect contextual priming of LTM access. It differs from the latter in making more explicit that the contextual cues that prime access to a word's meaning in LTM can range from very simple (e.g., a single concept) to very complex ones (e.g., a structured representation of the current discourse). Furthermore, it incorporates the possibility, suggested by recent N400 findings, that semantic retrieval can also be intensified in response to certain ‘relevance signals’, such as strong value-relevance, or a marked delivery (linguistic focus, uncommon choice of words, etc). In all, the perspective I'll draw is that in the context of discourse-level language processing, N400 effects reflect an 'overlay of technologies', with the construction of discourse-level representations riding on top of more ancient sense-making technology.
  • Van Gijn, R., & Gipper, S. (2009). Irrealis in Yurakaré and other languages: On the cross-linguistic consistency of an elusive category. In L. Hogeweg, H. De Hoop, & A. Malchukov (Eds.), Cross-linguistic semantics of tense, aspect, and modality (pp. 155-178). Amsterdam: Benjamins.

    Abstract

    The linguistic category of irrealis does not show stable semantics across languages. This makes it difficult to formulate general statements about this category, and it has led some researchers to reject irrealis as a cross-linguistically valid category. In this paper we look at the semantics of the irrealis category of Yurakaré, an unclassified language spoken in central Bolivia, and compare it to irrealis semantics of a number of other languages. Languages differ with respect to the subcategories they subsume under the heading of irrealis. The variable subcategories are future tense, imperatives, negatives, and habitual aspect. We argue that the cross-linguistic variation is not random, and can be stated in terms of an implicational scale.
  • Van Valin Jr., R. D. (2009). Privileged syntactic arguments, pivots and controllers. In L. Guerrero, S. Ibáñez, & V. A. Belloro (Eds.), Studies in role and reference grammar (pp. 45-68). Mexico: Universidad Nacional Autónoma de México.
  • Van Putten, S. (2009). Talking about motion in Avatime. Master Thesis, Leiden University.
  • Van Tiel, B., Deliens, G., Geelhand, P., Murillo Oosterwijk, A., & Kissine, M. (2021). Strategic deception in adults with autism spectrum disorder. Journal of Autism and Developmental Disorders, 51, 255-266. doi:10.1007/s10803-020-04525-0.

    Abstract

    Autism Spectrum Disorder (ASD) is often associated with impaired perspective-taking skills. Deception is an important indicator of perspective-taking, and therefore may be thought to pose difficulties to people with ASD (e.g., Baron-Cohen in J Child Psychol Psychiatry 3:1141–1155, 1992). To test this hypothesis, we asked participants with and without ASD to play a computerised deception game. We found that participants with ASD were equally likely—and in complex cases of deception even more likely—to deceive and detect deception, and learned deception at a faster rate. However, participants with ASD initially deceived less frequently, and were slower at detecting deception. These results suggest that people with ASD readily engage in deception but may do so through conscious and effortful reasoning about other people’s perspectives.
  • Van Paridon, J., & Thompson, B. (2021). subs2vec: Word embeddings from subtitles in 55 languages. Behavior Research Methods, 53(2), 629-655. doi:10.3758/s13428-020-01406-3.

    Abstract

    This paper introduces a novel collection of word embeddings, numerical representations of lexical semantics, in 55 languages, trained on a large corpus of pseudo-conversational speech transcriptions from television shows and movies. The embeddings were trained on the OpenSubtitles corpus using the fastText implementation of the skipgram algorithm. Performance comparable with (and in some cases exceeding) embeddings trained on non-conversational (Wikipedia) text is reported on standard benchmark evaluation datasets. A novel evaluation method of particular relevance to psycholinguists is also introduced: prediction of experimental lexical norms in multiple languages. The models, as well as code for reproducing the models and all analyses reported in this paper (implemented as a user-friendly Python package), are freely available at: https://github.com/jvparidon/subs2vec.

    Additional information

    https://github.com/jvparidon/subs2vec
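The embeddings are distributed in the standard fastText text format: a header line giving vocabulary size and dimensionality, followed by one word and its vector per line. A minimal sketch of parsing that format and comparing two words by cosine similarity, using a toy in-memory "file" (this is plain Python for illustration, not the subs2vec package API):

```python
import math

def load_vec(lines):
    """Parse fastText .vec-format lines into a {word: vector} dict."""
    it = iter(lines)
    next(it)  # skip header: "<vocab_size> <dim>"
    vectors = {}
    for line in it:
        word, *values = line.rstrip().split(" ")
        vectors[word] = [float(v) for v in values]
    return vectors

def cosine(u, v):
    """Cosine similarity: dot(u, v) / (|u| * |v|)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy 3-dimensional "embedding file", illustrative values only.
toy_file = [
    "3 3",
    "cat 0.9 0.1 0.0",
    "dog 0.8 0.2 0.0",
    "car 0.0 0.1 0.9",
]
vecs = load_vec(toy_file)
print(cosine(vecs["cat"], vecs["dog"]))  # high: semantically close
print(cosine(vecs["cat"], vecs["car"]))  # low: semantically distant
```

The same cosine measure underlies the benchmark evaluations (word similarity and analogy tasks) the abstract reports.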
  • Van Valin Jr., R. D. (2009). Role and reference grammar. In F. Brisard, J.-O. Östman, & J. Verschueren (Eds.), Grammar, meaning, and pragmatics (pp. 239-249). Amsterdam: Benjamins.
  • Van de Ven, M., Tucker, B. V., & Ernestus, M. (2009). Semantic context effects in the recognition of acoustically unreduced and reduced words. In Proceedings of the 10th Annual Conference of the International Speech Communication Association (pp. 1867-1870). Causal Productions Pty Ltd.

    Abstract

    Listeners require context to understand the casual pronunciation variants of words that are typical of spontaneous speech (Ernestus et al., 2002). The present study reports two auditory lexical decision experiments, investigating listeners' use of semantic contextual information in the comprehension of unreduced and reduced words. We found a strong semantic priming effect for low frequency unreduced words, whereas there was no such effect for reduced words. Word frequency was facilitatory for all words. These results show that semantic context is relevant especially for the comprehension of unreduced words, which is unexpected given the listener-driven explanation of reduction in spontaneous speech.
  • van Hell, J. G., & Witteman, M. J. (2009). The neurocognition of switching between languages: A review of electrophysiological studies. In L. Isurin, D. Winford, & K. de Bot (Eds.), Multidisciplinary approaches to code switching (pp. 53-84). Philadelphia: John Benjamins.

    Abstract

    The seemingly effortless switching between languages and the merging of two languages into a coherent utterance is a hallmark of bilingual language processing, and reveals the flexibility of human speech and skilled cognitive control. That skill appears to be available not only to speakers when they produce language-switched utterances, but also to listeners and readers when presented with mixed language information. In this chapter, we review electrophysiological studies in which Event-Related Potentials (ERPs) are derived from recordings of brain activity to examine the neurocognitive aspects of comprehending and producing mixed language. Topics we discuss include the time course of brain activity associated with language switching between single stimuli and language switching of words embedded in a meaningful sentence context. The majority of ERP studies report that switching between languages incurs neurocognitive costs, but, more interestingly, ERP patterns differ as a function of L2 proficiency and the amount of daily experience with language switching, the direction of switching (switching into L2 is typically associated with higher switching costs than switching into L1), the type of language switching task, and the predictability of the language switch. Finally, we outline some future directions for this relatively new approach to the study of language switching.
  • Van Gijn, R. (2009). The phonology of mixed languages. Journal of Pidgin and Creole Languages, 24(1), 91-117. doi:10.1075/jpcl.24.1.04gij.

    Abstract

    Mixed languages are said to be the result of a process of intertwining (e.g. Bakker & Muysken 1995, Bakker 1997), a regular process in which the grammar of one language is combined with the lexicon of another. However, the outcome of this process differs from language pair to language pair. As far as morphosyntax is concerned, people have discussed these different outcomes and the reasons for them extensively, e.g. Bakker 1997 for Michif, Mous 2003 for Ma’a, Muysken 1997a for Media Lengua and 1997b for Callahuaya. The issue of phonology, however, has not generated a large debate. This paper compares the phonological systems of the mixed languages Media Lengua, Callahuaya, Mednyj Aleut, and Michif. It will be argued that the outcome of the process of intertwining, as far as phonology is concerned, is at least partly determined by the extent to which unmixed phonological domains exist.
  • Van Geert, E., Ding, R., & Wagemans, J. (2024). A cross-cultural comparison of aesthetic preferences for neatly organized compositions: Native Chinese- versus Native Dutch-speaking samples. Empirical Studies of the Arts. Advance online publication. doi:10.1177/02762374241245917.

    Abstract

    Do aesthetic preferences for images of neatly organized compositions (e.g., images collected on blogs like Things Organized Neatly©) generalize across cultures? In an earlier study, focusing on stimulus and personal properties related to order and complexity, Western participants indicated their preference for one of two simultaneously presented images (100 pairs). In the current study, we compared the data of the native Dutch-speaking participants from this earlier sample (N = 356) to newly collected data from a native Chinese-speaking sample (N = 220). Overall, aesthetic preferences were quite similar across cultures. When relating preferences for each sample to ratings of order, complexity, soothingness, and fascination collected from a Western, mainly Dutch-speaking sample, the results hint at a cross-culturally consistent preference for images that Western participants rate as more ordered, but a cross-culturally diverse relation between preferences and complexity.
  • Van der Werff, J., Ravignani, A., & Jadoul, Y. (2024). thebeat: A Python package for working with rhythms and other temporal sequences. Behavior Research Methods, 56, 3725-3736. doi:10.3758/s13428-023-02334-8.

    Abstract

    thebeat is a Python package for working with temporal sequences and rhythms in the behavioral and cognitive sciences, as well as in bioacoustics. It provides functionality for creating experimental stimuli, and for visualizing and analyzing temporal data. Sequences, sounds, and experimental trials can be generated using single lines of code. thebeat contains functions for calculating common rhythmic measures, such as interval ratios, and for producing plots, such as circular histograms. thebeat saves researchers time when creating experiments, and provides the first steps in collecting widely accepted methods for use in timing research. thebeat is an open-source, ongoing, and collaborative project, and can be extended for use in specialized subfields. thebeat integrates easily with the existing Python ecosystem, allowing one to combine our tested code with custom-made scripts. The package was specifically designed to be useful for both skilled and novice programmers. thebeat provides a foundation for working with temporal sequences onto which additional functionality can be built. This combination of specificity and plasticity should facilitate research in multiple research contexts and fields of study.
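One of the rhythmic measures the abstract mentions, the interval ratio, can be sketched in a few lines of plain Python (a generic illustration of the measure itself, not thebeat's own API): each ratio r_k = I_k / (I_k + I_{k+1}) relates adjacent inter-onset intervals, with 0.5 indicating local isochrony.

```python
def interval_ratios(iois):
    """r_k = I_k / (I_k + I_{k+1}) for adjacent inter-onset intervals (IOIs, ms)."""
    return [a / (a + b) for a, b in zip(iois, iois[1:])]

# A perfectly isochronous sequence yields ratios of 0.5 throughout...
print(interval_ratios([500, 500, 500, 500]))  # [0.5, 0.5, 0.5]

# ...while a long-short swing pattern alternates above and below 0.5.
print(interval_ratios([600, 300, 600, 300]))
```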
  • van der Burght, C. L., & Meyer, A. S. (2024). Interindividual variation in weighting prosodic and semantic cues during sentence comprehension – a partial replication of Van der Burght et al. (2021). In Y. Chen, A. Chen, & A. Arvaniti (Eds.), Proceedings of Speech Prosody 2024 (pp. 792-796). doi:10.21437/SpeechProsody.2024-160.

    Abstract

    Contrastive pitch accents can mark sentence elements occupying parallel roles. In “Mary kissed John, not Peter”, a pitch accent on Mary or John cues the implied syntactic role of Peter. Van der Burght, Friederici, Goucha, and Hartwigsen (2021) showed that listeners can build expectations concerning syntactic and semantic properties of upcoming words, derived from pitch accent information they heard previously. To further explore these expectations, we attempted a partial replication of the original German study in Dutch. In the experimental sentences “Yesterday, the police officer arrested the thief, not the inspector/murderer”, a pitch accent on subject or object cued the subject/object role of the ellipsis clause. Contrasting elements were additionally cued by the thematic role typicality of the nouns. Participants listened to sentences in which the ellipsis clause was omitted and selected the most plausible sentence-final noun (presented visually) via button press. Replicating the original study results, listeners based their sentence-final preference on the pitch accent information available in the sentence. However, as in the original study, individual differences between listeners were found, with some following prosodic information and others relying on a structural bias. The results complement the literature on ellipsis resolution and on interindividual variability in cue weighting.
  • Varola*, M., Verga*, L., Sroka, M., Villanueva, S., Charrier, I., & Ravignani, A. (2021). Can harbor seals (Phoca vitulina) discriminate familiar conspecific calls after long periods of separation? PeerJ, 9: e12431. doi:10.7717/peerj.12431.

    Abstract

    * indicates joint first authorship.
    The ability to discriminate between familiar and unfamiliar calls may play a key role in pinnipeds’ communication and survival, as in the case of mother-pup interactions. Vocal discrimination abilities have been suggested to be more developed in pinniped species with the highest selective pressure such as the otariids; yet, in some group-living phocids, such as harbor seals (Phoca vitulina), mothers are also able to recognize their pup’s voice. Conspecifics’ vocal recognition in pups has never been investigated; however, the repeated interaction occurring between pups within the breeding season suggests that long-term vocal discrimination may occur. Here we explored this hypothesis by presenting three rehabilitated seal pups with playbacks of vocalizations from unfamiliar or familiar pups. It is uncommon for seals to come into rehabilitation for a second time in their lifespan, and this study took advantage of these rare cases. A simple visual inspection of the data plots seemed to show more reactions, and of longer duration, in response to familiar as compared to unfamiliar playbacks in two out of three pups. However, statistical analyses revealed no significant difference between the experimental conditions. We also found no significant asymmetry in orientation (left vs. right) towards familiar and unfamiliar sounds. While statistics do not support the hypothesis of an established ability to discriminate familiar vocalizations from unfamiliar ones in harbor seal pups, further investigations with a larger sample size are needed to confirm or refute this hypothesis.

    Additional information

    dataset
  • Vartiainen, J., Aggujaro, S., Lehtonen, M., Hulten, A., Laine, M., & Salmelin, R. (2009). Neural dynamics of reading morphologically complex words. NeuroImage, 47, 2064-2072. doi:10.1016/j.neuroimage.2009.06.002.

    Abstract

    Despite considerable research interest, it is still an open issue as to how morphologically complex words such as “car+s” are represented and processed in the brain. We studied the neural correlates of the processing of inflected nouns in the morphologically rich Finnish language. Previous behavioral studies in Finnish have yielded a robust inflectional processing cost, i.e., inflected words are harder to recognize than otherwise matched morphologically simple words. Theoretically this effect could stem either from decomposition of inflected words into a stem and a suffix at input level and/or from subsequent recombination at the semantic–syntactic level to arrive at an interpretation of the word. To shed light on this issue, we used magnetoencephalography to reveal the time course and localization of neural effects of morphological structure and frequency of written words. Ten subjects silently read high- and low-frequency Finnish words in inflected and monomorphemic form. Morphological complexity was accompanied by stronger and longer-lasting activation of the left superior temporal cortex from 200 ms onwards. Earlier effects of morphology were not found, supporting the view that the well-established behavioral processing cost for inflected words stems from the semantic–syntactic level rather than from early decomposition. Since the effect of morphology was detected throughout the range of word frequencies employed, the majority of inflected Finnish words appears to be represented in decomposed form and only very high-frequency inflected words may acquire full-form representations.
  • Vega-Mendoza, M., Pickering, M. J., & Nieuwland, M. S. (2021). Concurrent use of animacy and event-knowledge during comprehension: Evidence from event-related potentials. Neuropsychologia, 152: 107724. doi:10.1016/j.neuropsychologia.2020.107724.

    Abstract

    In two ERP experiments, we investigated whether readers prioritize animacy over real-world event-knowledge during sentence comprehension. We used the paradigm of Paczynski and Kuperberg (2012), who argued that animacy is prioritized based on the observations that the ‘related anomaly effect’ (reduced N400s for context-related anomalous words compared to unrelated words) does not occur for animacy violations, and that animacy violations but not relatedness violations elicit P600 effects. Participants read passive sentences with plausible agents (e.g., The prescription for the mental disorder was written by the psychiatrist) or implausible agents that varied in animacy and semantic relatedness (schizophrenic/guard/pill/fence). In Experiment 1 (with a plausibility judgment task), plausible sentences elicited smaller N400s relative to all types of implausible sentences. Crucially, animate words elicited smaller N400s than inanimate words, and related words elicited smaller N400s than unrelated words, but Bayesian analysis revealed substantial evidence against an interaction between animacy and relatedness. Moreover, at the P600 time-window, we observed more positive ERPs for animate than inanimate words and for related than unrelated words at anterior regions. In Experiment 2 (without judgment task), we observed an N400 effect with animacy violations, but no other effects. Taken together, the results of our experiments fail to support a prioritized role of animacy information over real-world event-knowledge, but they support an interactive, constraint-based view on incremental semantic processing.
  • Verdonschot, R. G., Han, J.-I., & Kinoshita, S. (2021). The proximate unit in Korean speech production: Phoneme or syllable? Quarterly Journal of Experimental Psychology, 74, 187-198. doi:10.1177/1747021820950239.

    Abstract

    We investigated the “proximate unit” in Korean, that is, the initial phonological unit selected in speech production by Korean speakers. Previous studies have shown mixed evidence indicating either a phoneme-sized or a syllable-sized unit. We conducted two experiments in which participants named pictures while ignoring superimposed non-words. In English, for this task, when the picture (e.g., dog) and distractor phonology (e.g., dark) initially overlap, typically the picture target is named faster. We used a range of conditions (in Korean) varying from onset overlap to syllabic overlap, and the results indicated an important role for the syllable, but not the phoneme. We suggest that the basic unit used in phonological encoding in Korean is different from Germanic languages such as English and Dutch and also from Japanese and possibly also Chinese. Models dealing with the architecture of language production can use these results when providing a framework suitable for all languages in the world, including Korean.
  • Verdonschot, R. G., Van der Wal, J., Lewis, A. G., Knudsen, B., Von Grebmer zu Wolfsthurn, S., Schiller, N. O., & Hagoort, P. (2024). Information structure in Makhuwa: Electrophysiological evidence for a universal processing account. Proceedings of the National Academy of Sciences of the United States of America, 121(30): e2315438121. doi:10.1073/pnas.2315438121.

    Abstract

    There is evidence from both behavior and brain activity that the way information is structured, through the use of focus, can up-regulate processing of focused constituents, likely to give prominence to the relevant aspects of the input. This is hypothesized to be universal, regardless of the different ways in which languages encode focus. In order to test this universalist hypothesis, we need to go beyond the more familiar linguistic strategies for marking focus, such as by means of intonation or specific syntactic structures (e.g., it-clefts). Therefore, in this study, we examine Makhuwa-Enahara, a Bantu language spoken in northern Mozambique, which uniquely marks focus through verbal conjugation. The participants were presented with sentences that consisted of either a semantically anomalous constituent or a semantically nonanomalous constituent. Moreover, focus on this particular constituent could be either present or absent. We observed a consistent pattern: Focused information generated a more negative N400 response than the same information in nonfocus position. This demonstrates that regardless of how focus is marked, its consequence seems to result in an upregulation of processing of information that is in focus.

    Additional information

    supplementary materials
  • Verga, L., & Ravignani, A. (2021). Strange seal sounds: Claps, slaps, and multimodal pinniped rhythms. Frontiers in Ecology and Evolution, 9: 644497. doi:10.3389/fevo.2021.644497.
  • Verga, L., Schwartze, M., Stapert, S., Winkens, I., & Kotz, S. A. (2021). Dysfunctional timing in traumatic brain injury patients: Co-occurrence of cognitive, motor, and perceptual deficits. Frontiers in Psychology, 12: 731898. doi:10.3389/fpsyg.2021.731898.

    Abstract

    Timing is an essential part of human cognition and of everyday life activities, such as walking or holding a conversation. Previous studies showed that traumatic brain injury (TBI) often affects cognitive functions such as processing speed and time-sensitive abilities, causing long-term sequelae as well as daily impairments. However, the existing evidence on timing capacities in TBI is mostly limited to perception and the processing of isolated intervals. It is therefore open whether the observed deficits extend to motor timing and to continuous dynamic tasks that more closely match daily life activities. The current study set out to answer these questions by assessing audio-motor timing abilities and their relationship with cognitive functioning in a group of TBI patients (n = 15) and healthy matched controls. We employed a comprehensive set of tasks aiming at testing timing abilities across perception and production and from single intervals to continuous auditory sequences. In line with previous research, we report functional impairments in TBI patients concerning cognitive processing speed and perceptual timing. Critically, these deficits extended to motor timing: The ability to adjust to tempo changes in an auditory pacing sequence was impaired in TBI patients, and this motor timing deficit covaried with measures of processing speed. These findings confirm previous evidence on perceptual and cognitive timing deficits resulting from TBI and provide the first evidence for comparable deficits in motor behavior. This suggests basic co-occurring perceptual and motor timing impairments that may factor into a wide range of daily activities. Our results thus place TBI into the wider range of pathologies with well-documented timing deficits (such as Parkinson’s disease) and encourage the search for novel timing-based therapeutic interventions (e.g., employing dynamic and/or musical stimuli) with high transfer potential to everyday life activities.

    Additional information

    supplementary material
  • Verhagen, J., & Schimke, S. (2009). Differences or fundamental differences? Zeitschrift für Sprachwissenschaft, 28(1), 97-106. doi:10.1515/ZFSW.2009.011.
  • Verhagen, J. (2009). Finiteness in Dutch as a second language. PhD Thesis, VU University, Amsterdam.
  • Verhagen, J. (2009). Light verbs and the acquisition of finiteness and negation in Dutch as a second language. In C. Dimroth, & P. Jordens (Eds.), Functional categories in learner language (pp. 203-234). Berlin: Mouton de Gruyter.
  • Verhagen, J. (2009). Temporal adverbials, negation and finiteness in Dutch as a second language: A scope-based account. IRAL, 47(2), 209-237. doi:10.1515/iral.2009.009.

    Abstract

    This study investigates the acquisition of post-verbal (temporal) adverbials and post-verbal negation in L2 Dutch. It is based on previous findings for L2 French that post-verbal negation poses less of a problem for L2 learners than post-verbal adverbial placement (Hawkins, Towell, Bazergui, Second Language Research 9: 189-233, 1993; Herschensohn, Minimally raising the verb issue: 325-336, Cascadilla Press, 1998). The current data show that, at first sight, Moroccan and Turkish learners of Dutch also have fewer problems with post-verbal negation than with post-verbal adverbials. However, when a distinction is made between different types of adverbials, it seems that this holds for adverbials of position such as 'today' but not for adverbials of contrast such as 'again'. To account for this difference, it is argued that different types of adverbial occupy different positions in the L2 data for reasons of scope marking. Moreover, the placement of adverbials such as 'again' interacts with the acquisition of finiteness marking (resulting in post-verbal placement), while there is no such interaction between adverbials such as 'today' and finiteness marking.
  • Verhoef, T., & Ravignani, A. (2021). Melodic universals emerge or are sustained through cultural evolution. Frontiers in Psychology, 12: 668300. doi:10.3389/fpsyg.2021.668300.

    Abstract

    To understand why music is structured the way it is, we need an explanation that accounts for both the universality and variability found in musical traditions. Here we test whether statistical universals that have been identified for melodic structures in music can emerge as a result of cultural adaptation to human biases through iterated learning. We use data from an experiment in which artificial whistled systems, where sounds were produced with a slide whistle, were learned by human participants and transmitted multiple times from person to person. These sets of whistled signals needed to be memorized and recalled and the reproductions of one participant were used as the input set for the next. We tested for the emergence of seven different melodic features, such as discrete pitches, motivic patterns, or phrase repetition, and found some evidence for the presence of most of these statistical universals. We interpret this as promising evidence that, similarly to rhythmic universals, iterated learning experiments can also unearth melodic statistical universals. More, ideally cross-cultural, experiments are nonetheless needed. Simulating the cultural transmission of artificial proto-musical systems can help unravel the origins of universal tendencies in musical structures.
  • Verhoef, E., Grove, J., Shapland, C. Y., Demontis, D., Burgess, S., Rai, D., Børglum, A. D., & St Pourcain, B. (2021). Discordant associations of educational attainment with ASD and ADHD implicate a polygenic form of pleiotropy. Nature Communications, 12: 6534. doi:10.1038/s41467-021-26755-1.

    Abstract

    Autism Spectrum Disorder (ASD) and Attention-Deficit/Hyperactivity Disorder (ADHD) are complex co-occurring neurodevelopmental conditions. Their genetic architectures reveal striking similarities but also differences, including strong, discordant polygenic associations with educational attainment (EA). To study genetic mechanisms that present as ASD-related positive and ADHD-related negative genetic correlations with EA, we carry out multivariable regression analyses using genome-wide summary statistics (N = 10,610–766,345). Our results show that EA-related genetic variation is shared across ASD and ADHD architectures, involving identical marker alleles. However, the polygenic association profile with EA, across shared marker alleles, is discordant for ASD versus ADHD risk, indicating independent effects. At the single-variant level, our results suggest either biological pleiotropy or co-localisation of different risk variants, implicating MIR19A/19B microRNA mechanisms. At the polygenic level, they point to a polygenic form of pleiotropy that contributes to the detectable genome-wide correlation between ASD and ADHD and is consistent with effect cancellation across EA-related regions.

    Additional information

    supplementary information
  • Verhoef, E., Shapland, C. Y., Fisher, S. E., Dale, P. S., & St Pourcain, B. (2021). The developmental origins of genetic factors influencing language and literacy: Associations with early-childhood vocabulary. Journal of Child Psychology and Psychiatry, 62(6), 728-738. doi:10.1111/jcpp.13327.

    Abstract

    Background

    The heritability of language and literacy skills increases from early‐childhood to adolescence. The underlying mechanisms are little understood and may involve (a) the amplification of genetic influences contributing to early language abilities, and/or (b) the emergence of novel genetic factors (innovation). Here, we investigate the developmental origins of genetic factors influencing mid‐childhood/early‐adolescent language and literacy. We evaluate evidence for the amplification of early‐childhood genetic factors for vocabulary, in addition to genetic innovation processes.
    Methods

    Expressive and receptive vocabulary scores at 38 months, thirteen language‐ and literacy‐related abilities and nonverbal cognition (7–13 years) were assessed in unrelated children from the Avon Longitudinal Study of Parents and Children (ALSPAC, N individuals ≤ 6,092). We investigated the multivariate genetic architecture underlying early‐childhood expressive and receptive vocabulary, and each of 14 mid‐childhood/early‐adolescent language, literacy or cognitive skills with trivariate structural equation (Cholesky) models as captured by genome‐wide genetic relationship matrices. The individual path coefficients of the resulting structural models were finally meta‐analysed to evaluate evidence for overarching patterns.
    Results

    We observed little support for the emergence of novel genetic sources for language, literacy or cognitive abilities during mid‐childhood or early adolescence. Instead, genetic factors of early‐childhood vocabulary, especially those unique to receptive skills, were amplified and represented the majority of genetic variance underlying many of these later complex skills (≤99%). The most predictive early genetic factor accounted for 29.4%(SE = 12.9%) to 45.1%(SE = 7.6%) of the phenotypic variation in verbal intelligence and literacy skills, but also for 25.7%(SE = 6.4%) in performance intelligence, while explaining only a fraction of the phenotypic variation in receptive vocabulary (3.9%(SE = 1.8%)).
    Conclusions

    Genetic factors contributing to many complex skills during mid‐childhood and early adolescence, including literacy, verbal cognition and nonverbal cognition, originate developmentally in early‐childhood and are captured by receptive vocabulary. This suggests developmental genetic stability and overarching aetiological mechanisms.

    Additional information

    supporting information
  • Verhoef, E., Shapland, C. Y., Fisher, S. E., Dale, P. S., & St Pourcain, B. (2021). The developmental genetic architecture of vocabulary skills during the first three years of life: Capturing emerging associations with later-life reading and cognition. PLoS Genetics, 17(2): e1009144. doi:10.1371/journal.pgen.1009144.

    Abstract

    Individual differences in early-life vocabulary measures are heritable and associated with subsequent reading and cognitive abilities, although the underlying mechanisms are little understood. Here, we (i) investigate the developmental genetic architecture of expressive and receptive vocabulary in early-life and (ii) assess timing of emerging genetic associations with mid-childhood verbal and non-verbal skills. We studied longitudinally assessed early-life vocabulary measures (15–38 months) and later-life verbal and non-verbal skills (7–8 years) in up to 6,524 unrelated children from the population-based Avon Longitudinal Study of Parents and Children (ALSPAC) cohort. We dissected the phenotypic variance of rank-transformed scores into genetic and residual components by fitting multivariate structural equation models to genome-wide genetic-relationship matrices. Our findings show that the genetic architecture of early-life vocabulary involves multiple distinct genetic factors. Two of these genetic factors are developmentally stable and also contribute to genetic variation in mid-childhood skills: One genetic factor emerging with expressive vocabulary at 24 months (path coefficient: 0.32(SE = 0.06)) was also related to later-life reading (path coefficient: 0.25(SE = 0.12)) and verbal intelligence (path coefficient: 0.42(SE = 0.13)), explaining up to 17.9% of the phenotypic variation. A second, independent genetic factor emerging with receptive vocabulary at 38 months (path coefficient: 0.15(SE = 0.07)), was more generally linked to verbal and non-verbal cognitive abilities in mid-childhood (reading path coefficient: 0.57(SE = 0.07); verbal intelligence path coefficient: 0.60(SE = 0.10); performance intelligence path coefficient: 0.50(SE = 0.08)), accounting for up to 36.1% of the phenotypic variation and the majority of genetic variance in these later-life traits (≥66.4%). Thus, the genetic foundations of mid-childhood reading and cognitive abilities are diverse. They involve at least two independent genetic factors that emerge at different developmental stages during early language development and may implicate differences in cognitive processes that are already detectable during toddlerhood.

    Additional information

    supporting information
  • Verhoef, E. (2021). Why do we change how we speak? Multivariate genetic analyses of language and related traits across development and disorder. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Verhoef, E., Allegrini, A. G., Jansen, P. R., Lange, K., Wang, C. A., Morgan, A. T., Ahluwalia, T. S., Symeonides, C., EAGLE-Working Group, Eising, E., Franken, M.-C., Hypponen, E., Mansell, T., Olislagers, M., Omerovic, E., Rimfeld, K., Schlag, F., Selzam, S., Shapland, C. Y., Tiemeier, H., Whitehouse, A. J. O., Saffery, R., Bønnelykke, K., Reilly, S., Pennell, C. E., Wake, M., Cecil, C. A., Plomin, R., Fisher, S. E., & St Pourcain, B. (2024). Genome-wide analyses of vocabulary size in infancy and toddlerhood: Associations with Attention-Deficit/Hyperactivity Disorder and cognition-related traits. Biological Psychiatry, 95(1), 859-869. doi:10.1016/j.biopsych.2023.11.025.

    Abstract

    Background

    The number of words children produce (expressive vocabulary) and understand (receptive vocabulary) changes rapidly during early development, partially due to genetic factors. Here, we performed a meta–genome-wide association study of vocabulary acquisition and investigated polygenic overlap with literacy, cognition, developmental phenotypes, and neurodevelopmental conditions, including attention-deficit/hyperactivity disorder (ADHD).

    Methods

    We studied 37,913 parent-reported vocabulary size measures (English, Dutch, Danish) for 17,298 children of European descent. Meta-analyses were performed for early-phase expressive (infancy, 15–18 months), late-phase expressive (toddlerhood, 24–38 months), and late-phase receptive (toddlerhood, 24–38 months) vocabulary. Subsequently, we estimated single nucleotide polymorphism–based heritability (SNP-h2) and genetic correlations (rg) and modeled underlying factor structures with multivariate models.

    Results

    Early-life vocabulary size was modestly heritable (SNP-h2 = 0.08–0.24). Genetic overlap between infant expressive and toddler receptive vocabulary was negligible (rg = 0.07), although each measure was moderately related to toddler expressive vocabulary (rg = 0.69 and rg = 0.67, respectively), suggesting a multifactorial genetic architecture. Both infant and toddler expressive vocabulary were genetically linked to literacy (e.g., spelling: rg = 0.58 and rg = 0.79, respectively), underlining genetic similarity. However, a genetic association of early-life vocabulary with educational attainment and intelligence emerged only during toddlerhood (e.g., receptive vocabulary and intelligence: rg = 0.36). Increased ADHD risk was genetically associated with larger infant expressive vocabulary (rg = 0.23). Multivariate genetic models in the ALSPAC (Avon Longitudinal Study of Parents and Children) cohort confirmed this finding for ADHD symptoms (e.g., at age 13; rg = 0.54) but showed that the association effect reversed for toddler receptive vocabulary (rg = −0.74), highlighting developmental heterogeneity.

    Conclusions

    The genetic architecture of early-life vocabulary changes during development, shaping polygenic association patterns with later-life ADHD, literacy, and cognition-related traits.
  • Verkerk, A. (2009). A semantic map of secondary predication. In B. Botma, & J. Van Kampen (Eds.), Linguistics in the Netherlands 2009 (pp. 115-126).
  • Vernes, S. C., Kriengwatana, B. P., Beeck, V. C., Fischer, J., Tyack, P. L., Ten Cate, C., & Janik, V. M. (2021). The multi-dimensional nature of vocal learning. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 376: 20200236. doi:10.1098/rstb.2020.0236.

    Abstract

    How learning affects vocalizations is a key question in the study of animal communication and human language. Parallel efforts in birds and humans have taught us much about how vocal learning works on a behavioural and neurobiological level. Subsequent efforts have revealed a variety of cases among mammals in which experience also has a major influence on vocal repertoires. Janik and Slater (Anim. Behav. 60, 1–11. doi:10.1006/anbe.2000.1410) introduced the distinction between vocal usage and production learning, providing a general framework to categorize how different types of learning influence vocalizations. This idea was built on by Petkov and Jarvis (Front. Evol. Neurosci. 4, 12. doi:10.3389/fnevo.2012.00012) to emphasize a more continuous distribution between limited and more complex vocal production learners. Yet, with more studies providing empirical data, the limits of the initial frameworks become apparent. We build on these frameworks to refine the categorization of vocal learning in light of advances made since their publication and widespread agreement that vocal learning is not a binary trait. We propose a novel classification system, based on the definitions by Janik and Slater, that deconstructs vocal learning into key dimensions to aid in understanding the mechanisms involved in this complex behaviour. We consider how vocalizations can change without learning, and a usage learning framework that considers context specificity and timing. We identify dimensions of vocal production learning, including the copying of auditory models (convergence/divergence on model sounds, accuracy of copying), the degree of change (type and breadth of learning) and timing (when learning takes place, the length of time it takes and how long it is retained). We consider grey areas of classification and current mechanistic understanding of these behaviours. Our framework identifies research needs and will help to inform neurobiological and evolutionary studies endeavouring to uncover the multi-dimensional nature of vocal learning. This article is part of the theme issue ‘Vocal learning in animals and humans’.
  • Vernes, S. C., Janik, V. M., Fitch, W. T., & Slater, P. J. B. (Eds.). (2021). Vocal learning in animals and humans [Special Issue]. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 376.
  • Vernes, S. C., Janik, V. M., Fitch, W. T., & Slater, P. J. B. (2021). Vocal learning in animals and humans. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 376: 20200234. doi:10.1098/rstb.2020.0234.
  • Vernes, S. C., MacDermot, K. D., Monaco, A. P., & Fisher, S. E. (2009). Assessing the impact of FOXP1 mutations on developmental verbal dyspraxia. European Journal of Human Genetics, 17(10), 1354-1358. doi:10.1038/ejhg.2009.43.

    Abstract

    Neurodevelopmental disorders that disturb speech and language are highly heritable. Isolation of the underlying genetic risk factors has been hampered by complexity of the phenotype and potentially large number of contributing genes. One exception is the identification of rare heterozygous mutations of the FOXP2 gene in a monogenic syndrome characterised by impaired sequencing of articulatory gestures, disrupting speech (developmental verbal dyspraxia, DVD), as well as multiple deficits in expressive and receptive language. The protein encoded by FOXP2 belongs to a divergent subgroup of forkhead-box transcription factors, with a distinctive DNA-binding domain and motifs that mediate hetero- and homodimerisation. FOXP1, the most closely related member of this subgroup, can directly interact with FOXP2 and is co-expressed in neural structures relevant to speech and language disorders. Moreover, investigations of songbird orthologues indicate that combinatorial actions of the two proteins may play important roles in vocal learning, leading to the suggestion that human FOXP1 should be considered a strong candidate for involvement in DVD. Thus, in this study, we screened the entire coding region of FOXP1 (exons and flanking intronic sequence) for nucleotide changes in a panel of probands used earlier to detect novel mutations in FOXP2. A non-synonymous coding change was identified in a single proband, yielding a proline-to-alanine change (P215A). However, this was also found in a random control sample. Analyses of non-coding SNP changes did not find any correlation with affection status. We conclude that FOXP1 mutations are unlikely to represent a major cause of DVD.

    Additional information

    ejhg200943x1.pdf
  • Vernes, S. C., & Fisher, S. E. (2009). Unravelling neurogenetic networks implicated in developmental language disorders. Biochemical Society Transactions (London), 37, 1263-1269. doi:10.1042/BST0371263.

    Abstract

    Childhood syndromes disturbing language development are common and display high degrees of heritability. In most cases, the underlying genetic architecture is likely to be complex, involving multiple chromosomal loci and substantial heterogeneity, which makes it difficult to track down the crucial genomic risk factors. Investigation of rare Mendelian phenotypes offers a complementary route for unravelling key neurogenetic pathways. The value of this approach is illustrated by the discovery that heterozygous FOXP2 (where FOX is forkhead box) mutations cause an unusual monogenic disorder, characterized by problems with articulating speech along with deficits in expressive and receptive language. FOXP2 encodes a regulatory protein, belonging to the forkhead box family of transcription factors, known to play important roles in modulating gene expression in development and disease. Functional genetics using human neuronal models suggest that the different FOXP2 isoforms generated by alternative splicing have distinct properties and may act to regulate each other's activity. Such investigations have also analysed the missense and nonsense mutations found in cases of speech and language disorder, showing that they alter intracellular localization, DNA binding and transactivation capacity of the mutated proteins. Moreover, in the brains of mutant mice, aetiological mutations have been found to disrupt the synaptic plasticity of Foxp2-expressing circuitry. Finally, although mutations of FOXP2 itself are rare, the downstream networks which it regulates in the brain appear to be broadly implicated in typical forms of language impairment. Thus, through ongoing identification of regulated targets and interacting co-factors, this gene is providing the first molecular entry points into neural mechanisms that go awry in language-related disorders.
  • De Vignemont, F., Majid, A., Jola, C., & Haggard, P. (2009). Segmenting the body into parts: Evidence from biases in tactile perception. Quarterly Journal of Experimental Psychology, 62, 500-512. doi:10.1080/17470210802000802.

    Abstract

    How do we individuate body parts? Here, we investigated the effect of body segmentation between hand and arm in tactile and visual perception. In a first experiment, we showed that two tactile stimuli felt farther away when they were applied across the wrist than when they were applied within a single body part (palm or forearm), indicating a “category boundary effect”. In the following experiments, we excluded two hypotheses, which attributed tactile segmentation to other, nontactile factors. In Experiment 2, we showed that the boundary effect does not arise from motor cues. The effect was reduced during a motor task involving flexion and extension movements of the wrist joint. Action brings body parts together into functional units, instead of pulling them apart. In Experiments 3 and 4, we showed that the effect does not arise from perceptual cues of visual discontinuities. We did not find any segmentation effect for the visual percept of the body in Experiment 3, nor for a neutral shape in Experiment 4. We suggest that the mental representation of the body is structured in categorical body parts delineated by joints, and that this categorical representation modulates tactile spatial perception.
