Publications

  • Stivers, T., Chalfoun, A., & Rossi, G. (2024). To err is human but to persist is diabolical: Toward a theory of interactional policing. Frontiers in Sociology: Sociological Theory, 9: 1369776. doi:10.3389/fsoc.2024.1369776.

    Abstract

    Social interaction is organized around norms and preferences that guide our construction of actions and our interpretation of those of others, creating a reflexive moral order. Sociological theory suggests two possibilities for the type of moral order that underlies the policing of interactional norm and preference violations: a morality that focuses on the nature of the violations themselves and a morality that focuses on the positioning of actors as they keep their conduct comprehensible, even when they depart from norms and preferences. We find that actors are more likely to reproach interactional violations for which the transgressor provides no account, and that actors weakly reproach or let pass first offenses while more strongly policing violators who persist in bad behavior. Based on these findings, we outline a theory of interactional policing that rests not on the nature of the violation but rather on actors' moral positioning.
  • Stoehr, A., Benders, T., Van Hell, J. G., & Fikkert, P. (2019). Bilingual preschoolers’ speech is associated with non-native maternal language input. Language Learning and Development, 15(1), 75-100. doi:10.1080/15475441.2018.1533473.

    Abstract

    Bilingual children are often exposed to non-native speech through their parents. Yet, little is known about the relation between bilingual preschoolers’ speech production and their speech input. The present study investigated the production of voice onset time (VOT) by Dutch-German bilingual preschoolers and their sequential bilingual mothers. The findings reveal an association between maternal VOT and bilingual children’s VOT in the heritage language German as well as in the majority language Dutch. By contrast, no input-production association was observed in the VOT production of monolingual German-speaking children and monolingual Dutch-speaking children. The results of this study provide the first empirical evidence that non-native and attrited maternal speech contributes to the often-observed linguistic differences between bilingual children and their monolingual peers.
  • Striano, T., & Liszkowski, U. (2005). Sensitivity to the context of facial expression in the still face at 3-, 6-, and 9-months of age. Infant Behavior and Development, 28(1), 10-19. doi:10.1016/j.infbeh.2004.06.004.

    Abstract

    Thirty-eight 3-, 6-, and 9-month-old infants interacted in a face-to-face situation with a female stranger who disrupted the ongoing interaction with 30-s Happy and Neutral still-face episodes. Three- and 6-month-olds manifested a robust still-face response for gazing and smiling. For smiling, 9-month-olds manifested a floor effect such that no still-face effect could be shown. For gazing, 9-month-olds' still-face response was modulated by the context of interaction such that it was less pronounced if a happy still face was presented first. The findings point to a developmental transition by the end of the first year, whereby infants' still-face response becomes increasingly influenced by the context of social interaction.
  • Sulpizio, S., & McQueen, J. M. (2012). Italians use abstract knowledge about lexical stress during spoken-word recognition. Journal of Memory and Language, 66, 177-193. doi:10.1016/j.jml.2011.08.001.

    Abstract

    In two eye-tracking experiments in Italian, we investigated how acoustic information and stored knowledge about lexical stress are used during the recognition of tri-syllabic spoken words. Experiment 1 showed that Italians use acoustic cues to a word’s stress pattern rapidly in word recognition, but only for words with antepenultimate stress. Words with penultimate stress – the most common pattern – appeared to be recognized by default. In Experiment 2, listeners had to learn new words from which some stress cues had been removed, and then recognize reduced- and full-cue versions of those words. The acoustic manipulation affected recognition only of newly-learnt words with antepenultimate stress: Full-cue versions, even though they were never heard during training, were recognized earlier than reduced-cue versions. Newly-learnt words with penultimate stress were recognized earlier overall, but recognition of the two versions of these words did not differ. Abstract knowledge (i.e., knowledge generalized over the lexicon) about lexical stress – which pattern is the default and which cues signal the non-default pattern – appears to be used during the recognition of known and newly-learnt Italian words.
  • Sumer, B., Zwitserlood, I., Perniss, P. M., & Ozyurek, A. (2012). Development of locative expressions by Turkish deaf and hearing children: Are there modality effects? In A. K. Biller, E. Y. Chung, & A. E. Kimball (Eds.), Proceedings of the 36th Annual Boston University Conference on Language Development (BUCLD 36) (pp. 568-580). Boston: Cascadilla Press.
  • Svantesson, J.-O., Burenhult, N., Holmer, A., Karlsson, A., & Lundström, H. (2012). Humanities of the lesser-known: An overview. Language Documentation and Description, 10, 5-11.
  • Svantesson, J.-O., Burenhult, N., Holmer, A., Karlsson, A., & Lundström, H. (Eds.). (2012). Humanities of the lesser-known: New directions in the description, documentation and typology of endangered languages and musics [Special Issue]. Language Documentation and Description, 10.
  • De Swart, P., & Van Bergen, G. (2019). How animacy and verbal information influence V2 sentence processing: Evidence from eye movements. Open Linguistics, 5(1), 630-649. doi:10.1515/opli-2019-0035.

    Abstract

    There exists a clear association between animacy and the grammatical function of transitive subject. The grammar of some languages requires the transitive subject to be high in animacy, or at least higher than the object. A similar animacy preference has been observed in processing studies of languages without such a categorical animacy effect. This animacy preference has mainly been established in structures in which either one or both arguments are provided before the verb. Our goal was to establish (i) whether this preference can already be observed before any argument is provided, and (ii) whether this preference is mediated by verbal information. To this end, we exploited the V2 property of Dutch, which allows the verb to precede its arguments. Using a visual-world eye-tracking paradigm, we presented participants with V2 structures containing either an auxiliary (e.g. Gisteren heeft X … ‘Yesterday, X has …’) or a lexical main verb (e.g. Gisteren motiveerde X … ‘Yesterday, X motivated …’) and measured looks to the animate referent. The results indicate that the animacy preference can already be observed before arguments are presented and that the selectional restrictions of the verb mediate this bias, but do not override it completely.
  • Swingley, D. (2005). Statistical clustering and the contents of the infant vocabulary. Cognitive Psychology, 50(1), 86-132. doi:10.1016/j.cogpsych.2004.06.001.

    Abstract

    Infants parse speech into word-sized units according to biases that develop in the first year. One bias, present before the age of 7 months, is to cluster syllables that tend to co-occur. The present computational research demonstrates that this statistical clustering bias could lead to the extraction of speech sequences that are actual words, rather than missegmentations. In English and Dutch, these word-forms exhibit the strong–weak (trochaic) pattern that guides lexical segmentation after 8 months, suggesting that the trochaic parsing bias is learned as a generalization from statistically extracted bisyllables, and not via attention to short utterances or to high-frequency bisyllables. Extracted word-forms come from various syntactic classes, and exhibit distributional characteristics enabling rudimentary sorting of words into syntactic categories. The results highlight the importance of infants’ first year in language learning: though they may know the meanings of very few words, infants are well on their way to building a vocabulary.
  • Swingley, D. (2005). 11-month-olds' knowledge of how familiar words sound. Developmental Science, 8(5), 432-443. doi:10.1111/j.1467-7687.2005.00432.x.

    Abstract

    During the first year of life, infants' perception of speech becomes tuned to the phonology of the native language, as revealed in laboratory discrimination and categorization tasks using syllable stimuli. However, the implications of these results for the development of the early vocabulary remain controversial, with some results suggesting that infants retain only vague, sketchy phonological representations of words. Five experiments using a preferential listening procedure tested Dutch 11-month-olds' responses to word, nonword and mispronounced-word stimuli. Infants listened longer to words than nonwords, but did not exhibit this response when words were mispronounced at onset or at offset. In addition, infants preferred correct pronunciations to onset mispronunciations. The results suggest that infants' encoding of familiar words includes substantial phonological detail.
  • Swinney, D. A., Zurif, E. B., & Cutler, A. (1980). Effects of sentential stress and word class upon comprehension in Broca’s aphasics. Brain and Language, 10, 132-144. doi:10.1016/0093-934X(80)90044-9.

    Abstract

    The roles which word class (open/closed) and sentential stress play in the sentence comprehension processes of both agrammatic (Broca's) aphasics and normal listeners were examined with a word monitoring task. Overall, normal listeners responded more quickly to stressed than to unstressed items, but showed no effect of word class. Aphasics also responded more quickly to stressed than to unstressed materials, but, unlike the normals, responded faster to open than to closed class words regardless of their stress. The results are interpreted as support for the theory that Broca's aphasics lack the functional underlying open/closed class word distinction used in word recognition by normal listeners.
  • Taal, H. R., St Pourcain, B., Thiering, E., Das, S., Mook-Kanamori, D. O., Warrington, N. M., Kaakinen, M., Kreiner-Møller, E., Bradfield, J. P., Freathy, R. M., Geller, F., Guxens, M., Cousminer, D. L., Kerkhof, M., Timpson, N. J., Ikram, M. A., Beilin, L. J., Bønnelykke, K., Buxton, J. L., Charoen, P., Chawes, B. L. K., Eriksson, J., Evans, D. M., Hofman, A., Kemp, J. P., Kim, C. E., Klopp, N., Lahti, J., Lye, S. J., McMahon, G., Mentch, F. D., Müller-Nurasyid, M., O'Reilly, P. F., Prokopenko, I., Rivadeneira, F., Steegers, E. A. P., Sunyer, J., Tiesler, C., Yaghootkar, H., Breteler, M. M. B., Decarli, C., Breteler, M. M. B., Debette, S., Fornage, M., Gudnason, V., Launer, L. J., van der Lugt, A., Mosley, T. H., Seshadri, S., Smith, A. V., Vernooij, M. W., Blakemore, A. I. F., Chiavacci, R. M., Feenstra, B., Fernandez-Banet, J., Grant, S. F. A., Hartikainen, A.-L., van der Heijden, A. J., Iñiguez, C., Lathrop, M., McArdle, W. L., Mølgaard, A., Newnham, J. P., Palmer, L. J., Palotie, A., Pouta, A., Ring, S. M., Sovio, U., Standl, M., Uitterlinden, A. G., Wichmann, H.-E., Vissing, N. H., DeCarli, C., van Duijn, C. M., McCarthy, M. I., Koppelman, G. H., Estivill, X., Hattersley, A. T., Melbye, M., Bisgaard, H., Pennell, C. E., Widen, E., Hakonarson, H., Smith, G. D., Heinrich, J., Jarvelin, M.-R., Jaddoe, V. W. V., The Cohorts for Heart and Aging Research in Genetic Epidemiology (CHARGE) Consortium, EArly Genetics and Lifecourse Epidemiology (EAGLE) Consortium, & Early Growth Genetics (EGG) Consortium (2012). Common variants at 12q15 and 12q24 are associated with infant head circumference. Nature Genetics, 44(5), 532-538. doi:10.1038/ng.2238.

    Abstract

    To identify genetic variants associated with head circumference in infancy, we performed a meta-analysis of seven genome-wide association studies (GWAS) (N = 10,768 individuals of European ancestry enrolled in pregnancy and/or birth cohorts) and followed up three lead signals in six replication studies (combined N = 19,089). rs7980687 on chromosome 12q24 (P = 8.1 × 10⁻⁹) and rs1042725 on chromosome 12q15 (P = 2.8 × 10⁻¹⁰) were robustly associated with head circumference in infancy. Although these loci have previously been associated with adult height, their effects on infant head circumference were largely independent of height (P = 3.8 × 10⁻⁷ for rs7980687 and P = 1.3 × 10⁻⁷ for rs1042725 after adjustment for infant height). A third signal, rs11655470 on chromosome 17q21, showed suggestive evidence of association with head circumference (P = 3.9 × 10⁻⁶). SNPs correlated to the 17q21 signal have shown genome-wide association with adult intracranial volume, Parkinson's disease and other neurodegenerative diseases, indicating that a common genetic variant in this region might link early brain growth with neurological disease in later life.
  • Takashima, A., Bakker-Marshall, I., Van Hell, J. G., McQueen, J. M., & Janzen, G. (2019). Neural correlates of word learning in children. Developmental Cognitive Neuroscience, 37: 100649. doi:10.1016/j.dcn.2019.100649.

    Abstract

    Memory representations of words are thought to undergo changes with consolidation: Episodic memories of novel words are transformed into lexical representations that interact with other words in the mental dictionary. Behavioral studies have shown that this lexical integration process is enhanced when there is more time for consolidation. Neuroimaging studies have further revealed that novel word representations are initially represented in a hippocampally-centered system, whereas left posterior middle temporal cortex activation increases with lexicalization. In this study, we measured behavioral and brain responses to newly-learned words in children. Two groups of Dutch children, aged between 8-10 and 14-16 years, were trained on 30 novel Japanese words depicting novel concepts. Children were tested on word-forms, word-meanings, and the novel words’ influence on existing word processing immediately after training, and again after a week. In line with the adult findings, hippocampal involvement decreased with time. Lexical integration, however, was not observed immediately or after a week, neither behaviorally nor neurally. It appears that time alone is not always sufficient for lexical integration to occur. We suggest that other factors (e.g., the novelty of the concepts and familiarity with the language the words are derived from) might also influence the integration process.

  • Takashima, A., & Verhoeven, L. (2019). Radical repetition effects in beginning learners of Chinese as a foreign language reading. Journal of Neurolinguistics, 50, 71-81. doi:10.1016/j.jneuroling.2018.03.001.

    Abstract

    The aim of the present study was to examine whether repetition of radicals during training of Chinese characters leads to better word acquisition performance in beginning learners of Chinese as a foreign language. Thirty Dutch university students were trained on 36 Chinese one-character words for their pronunciations and meanings. They were also exposed to the specifics of the radicals: for phonetic radicals, the associated pronunciation was explained, and for semantic radicals, the associated categorical meanings were explained. Results showed that repeated exposure to phonetic and semantic radicals through character pronunciation and meaning training indeed induced better understanding of those radicals that were shared among different characters. Furthermore, characters in the training set that shared phonetic radicals were pronounced better than those that did not. Repetition of semantic radicals across different characters, however, hindered the learning of exact meanings. Students generally confused the meanings of other characters that shared the semantic radical. The study shows that in the initial stage of learning, overlapping information of the shared radicals is effectively learned. Acquisition of the specifics of individual characters, however, requires more training.

  • Takashima, A., Carota, F., Schoots, V., Redmann, A., Jehee, J., & Indefrey, P. (2024). Tomatoes are red: The perception of achromatic objects elicits retrieval of associated color knowledge. Journal of Cognitive Neuroscience, 36(1), 24-45. doi:10.1162/jocn_a_02068.

    Abstract

    When preparing to name an object, semantic knowledge about the object and its attributes is activated, including perceptual properties. It is unclear, however, whether semantic attribute activation contributes to lexical access or is a consequence of activating a concept irrespective of whether that concept is to be named or not. In this study, we measured neural responses using fMRI while participants named objects that are typically green or red, presented in black line drawings. Furthermore, participants underwent two other tasks with the same objects, color naming and semantic judgment, to see if the activation pattern we observe during picture naming is (a) similar to that of a task that requires accessing the color attribute and (b) distinct from that of a task that requires accessing the concept but not its name or color. We used representational similarity analysis to detect brain areas that show similar patterns within the same color category, but show different patterns across the two color categories. In all three tasks, activation in the bilateral fusiform gyri (“Human V4”) correlated with a representational model encoding the red–green distinction weighted by the importance of the color feature for the different objects. This result suggests that when seeing objects whose color attribute is highly diagnostic, color knowledge about the objects is retrieved irrespective of whether the color or the object itself has to be named.
  • Tamaoka, K., Yu, S., Zhang, J., Otsuka, Y., Lim, H., Koizumi, M., & Verdonschot, R. G. (2024). Syntactic structures in motion: Investigating word order variations in verb-final (Korean) and verb-initial (Tongan) languages. Frontiers in Psychology, 15: 1360191. doi:10.3389/fpsyg.2024.1360191.

    Abstract

    This study explored sentence processing in two typologically distinct languages: Korean, a verb-final language, and Tongan, a verb-initial language. The first experiment revealed that in Korean, sentences arranged in the scrambled OSV (Object, Subject, Verb) order were processed more slowly than those in the canonical SOV order, highlighting a scrambling effect. It also found that sentences with subject topicalization in the SOV order were processed as swiftly as those in the canonical form, whereas sentences with object topicalization in the OSV order were processed with speeds and accuracy comparable to scrambled sentences. However, since topicalization and scrambling in Korean use the same OSV order, independently distinguishing the effects of topicalization is challenging. In contrast, Tongan allows for a clear separation of word orders for topicalization and scrambling, facilitating an independent evaluation of topicalization effects. The second experiment, employing a maze task, confirmed that Tongan’s canonical VSO order was processed more efficiently than the VOS scrambled order, thereby verifying a scrambling effect. The third experiment investigated the effects of both scrambling and topicalization in Tongan, finding that the canonical VSO order was processed most efficiently in terms of speed and accuracy, unlike the VOS scrambled and SVO topicalized orders. Notably, the OVS object-topicalized order was processed as efficiently as the VSO canonical order, while the SVO subject-topicalized order was slower than VSO but faster than VOS. By independently assessing the effects of topicalization apart from scrambling, this study demonstrates that both subject and object topicalization in Tongan facilitate sentence processing, contradicting the predictions based on movement-based anticipation.

  • Ten Bosch, L., Mulder, K., & Boves, L. (2019). Phase synchronization between EEG signals as a function of differences between stimuli characteristics. In Proceedings of Interspeech 2019 (pp. 1213-1217). doi:10.21437/Interspeech.2019-2443.

    Abstract

    The neural processing of speech leads to specific patterns in the brain which can be measured as, e.g., EEG signals. When properly aligned with the speech input and averaged over many tokens, the Event-Related Potential (ERP) signal is able to differentiate specific contrasts between speech signals. Well-known effects relate to the difference between expected and unexpected words, in particular in the N400, while effects in the N100 and P200 are related to attention and acoustic onset effects. Most EEG studies deal with the amplitude of EEG signals over time, sidestepping the effects of phase and phase synchronization. This paper investigates the phase of EEG signals measured in an auditory lexical decision task in which Dutch participants listened to full and reduced English word forms. We show that phase synchronization takes place across stimulus conditions, and that the so-called circular variance is closely related to the type of contrast between stimuli.
  • Ten Bosch, L., & Scharenborg, O. (2005). ASR decoding in a computational model of human word recognition. In Interspeech'2005 - Eurospeech, 9th European Conference on Speech Communication and Technology (pp. 1241-1244). ISCA Archive.

    Abstract

    This paper investigates the interaction between acoustic scores and symbolic mismatch penalties in multi-pass speech decoding techniques that are based on the creation of a segment graph followed by a lexical search. The interaction between acoustic and symbolic mismatches determines to a large extent the structure of the search space of these multi-pass approaches. The background of this study is a recently developed computational model of human word recognition, called SpeM. SpeM is able to simulate human word recognition data and is built as a multi-pass speech decoder. Here, we focus on unravelling the structure of the search space that is used in SpeM and similar decoding strategies. Finally, we elaborate on the close relation between distances in this search space and distance measures in search spaces that are based on a combination of acoustic and phonetic features.
  • Ten Bosch, L., & Scharenborg, O. (2012). Modeling cue trading in human word recognition. In Proceedings of INTERSPEECH 2012: 13th Annual Conference of the International Speech Communication Association (pp. 2003-2006).

    Abstract

    Classical phonetic studies have shown that acoustic-articulatory cues can be interchanged without affecting the resulting phoneme percept (‘cue trading’). Cue trading has so far mainly been investigated in the context of phoneme identification. In this study, we investigate cue trading in word recognition, because words are the units of speech through which we communicate. This paper aims to provide a method to quantify cue trading effects by using a computational model of human word recognition. This model takes the acoustic signal as input and represents speech using articulatory feature streams. Importantly, it allows cue trading and underspecification. Its set-up is inspired by the functionality of Fine-Tracker, a recent computational model of human word recognition. This approach makes it possible, for the first time, to quantify cue trading in terms of a trade-off between features and to investigate cue trading in the context of a word recognition task.
  • Ten Oever, S., & Sack, A. T. (2019). Interactions between rhythmic and feature predictions to create parallel time-content associations. Frontiers in Neuroscience, 13: 791. doi:10.3389/fnins.2019.00791.

    Abstract

    The brain is inherently proactive, constantly predicting the when (moment) and what (content) of future input in order to optimize information processing. Previous research on such predictions has mainly studied the “when” or “what” domain separately, without investigating the potential integration of both types of predictive information. In the absence of such integration, temporal cues are assumed to enhance any upcoming content at the predicted moment in time (general temporal predictor). However, if the when and what prediction domains were integrated, a much more flexible neural mechanism may be proposed in which temporal-feature interactions would allow for the creation of multiple concurrent time-content predictions (parallel time-content predictor). Here, we used a temporal association paradigm in two experiments in which sound identity was systematically paired with a specific time delay after the offset of a rhythmic visual input stream. In Experiment 1, we revealed that participants associated the time delay of presentation with the identity of the sound. In Experiment 2, we unexpectedly found that the strength of this temporal association was negatively related to the EEG steady-state evoked responses (SSVEP) in preceding trials, showing that after high neuronal responses participants responded inconsistently with the time-content associations, similar to adaptation mechanisms. In this experiment, time-content associations were only present for low SSVEP responses in previous trials. These results tentatively show that it is possible to represent multiple time-content paired predictions in parallel; however, future research is needed to investigate this interaction further.
  • Ten Oever, S., & Martin, A. E. (2024). Interdependence of “what” and “when” in the brain. Journal of Cognitive Neuroscience, 36(1), 167-186. doi:10.1162/jocn_a_02067.

    Abstract

    From a brain's-eye-view, when a stimulus occurs and what it is are interrelated aspects of interpreting the perceptual world. Yet in practice, the putative perceptual inferences about sensory content and timing are often dichotomized and not investigated as an integrated process. We here argue that neural temporal dynamics can influence what is perceived, and in turn, stimulus content can influence the time at which perception is achieved. This computational principle results from the highly interdependent relationship of what and when in the environment. Both brain processes and perceptual events display strong temporal variability that is not always modeled; we argue that understanding—and, minimally, modeling—this temporal variability is key for theories of how the brain generates unified and consistent neural representations and that we ignore temporal variability in our analysis practice at the peril of both data interpretation and theory-building. Here, we review what and when interactions in the brain, demonstrate via simulations how temporal variability can result in misguided interpretations and conclusions, and outline how to integrate and synthesize what and when in theories and models of brain computation.
  • Ten Oever, S., Titone, L., te Rietmolen, N., & Martin, A. E. (2024). Phase-dependent word perception emerges from region-specific sensitivity to the statistics of language. Proceedings of the National Academy of Sciences of the United States of America, 121(3): e2320489121. doi:10.1073/pnas.2320489121.

    Abstract

    Neural oscillations reflect fluctuations in excitability, which biases the percept of ambiguous sensory input. Why this bias occurs is still not fully understood. We hypothesized that neural populations representing likely events are more sensitive, and thereby become active on earlier oscillatory phases, when the ensemble itself is less excitable. Perception of ambiguous input presented during less-excitable phases should therefore be biased toward frequent or predictable stimuli that have lower activation thresholds. Here, we show such a frequency bias in spoken word recognition using psychophysics, magnetoencephalography (MEG), and computational modelling. With MEG, we found a double dissociation, where the phase of oscillations in the superior temporal gyrus and medial temporal gyrus biased word-identification behavior based on phoneme and lexical frequencies, respectively. This finding was reproduced in a computational model. These results demonstrate that oscillations provide a temporal ordering of neural activity based on the sensitivity of separable neural populations.
  • Ter Bekke, M., Ozyurek, A., & Ünal, E. (2019). Speaking but not gesturing predicts motion event memory within and across languages. In A. Goel, C. Seifert, & C. Freksa (Eds.), Proceedings of the 41st Annual Meeting of the Cognitive Science Society (CogSci 2019) (pp. 2940-2946). Montreal, QC: Cognitive Science Society.

    Abstract

    In everyday life, people see, describe and remember motion events. We tested whether the type of motion event information (path or manner) encoded in speech and gesture predicts which information is remembered and if this varies across speakers of typologically different languages. We focus on intransitive motion events (e.g., a woman running to a tree) that are described differently in speech and co-speech gesture across languages, based on how these languages typologically encode manner and path information (Kita & Özyürek, 2003; Talmy, 1985). Speakers of Dutch (n = 19) and Turkish (n = 22) watched and described motion events. With a surprise (i.e. unexpected) recognition memory task, memory for manner and path components of these events was measured. Neither Dutch nor Turkish speakers’ memory for manner went above chance levels. However, we found a positive relation between path speech and path change detection: participants who described the path during encoding were more accurate at detecting changes to the path of an event during the memory task. In addition, the relation between path speech and path memory changed with native language: for Dutch speakers encoding path in speech was related to improved path memory, but for Turkish speakers no such relation existed. For both languages, co-speech gesture did not predict speakers’ memory. We discuss the implications of these findings for our understanding of the relations between speech, gesture, type of encoding in language and memory.
  • Ter Bekke, M., Drijvers, L., & Holler, J. (2024). Hand gestures have predictive potential during conversation: An investigation of the timing of gestures in relation to speech. Cognitive Science, 48(1): e13407. doi:10.1111/cogs.13407.

    Abstract

    During face-to-face conversation, transitions between speaker turns are incredibly fast. These fast turn exchanges seem to involve next speakers predicting upcoming semantic information, such that next turn planning can begin before a current turn is complete. Given that face-to-face conversation also involves the use of communicative bodily signals, an important question is how bodily signals such as co-speech hand gestures play into these processes of prediction and fast responding. In this corpus study, we found that hand gestures that depict or refer to semantic information started before the corresponding information in speech, which held both for the onset of the gesture as a whole, as well as the onset of the stroke (the most meaningful part of the gesture). This early timing potentially allows listeners to use the gestural information to predict the corresponding semantic information to be conveyed in speech. Moreover, we provided further evidence that questions with gestures got faster responses than questions without gestures. However, we found no evidence for the idea that how much a gesture precedes its lexical affiliate (i.e., its predictive potential) relates to how fast responses were given. The findings presented here highlight the importance of the temporal relation between speech and gesture and help to illuminate the potential mechanisms underpinning multimodal language processing during face-to-face conversation.
  • Ter Bekke, M., Drijvers, L., & Holler, J. (2024). Gestures speed up responses to questions. Language, Cognition and Neuroscience, 39(4), 423-430. doi:10.1080/23273798.2024.2314021.

    Abstract

    Most language use occurs in face-to-face conversation, which involves rapid turn-taking. Seeing communicative bodily signals in addition to hearing speech may facilitate such fast responding. We tested whether this holds for co-speech hand gestures by investigating whether these gestures speed up button press responses to questions. Sixty native speakers of Dutch viewed videos in which an actress asked yes/no-questions, either with or without a corresponding iconic hand gesture. Participants answered the questions as quickly and accurately as possible via button press. Gestures did not impact response accuracy, but crucially, gestures sped up responses, suggesting that response planning may be finished earlier when gestures are seen. How much gestures sped up responses was not related to their timing in the question or their timing with respect to the corresponding information in speech. Overall, these results are in line with the idea that multimodality may facilitate fast responding during face-to-face conversation.
  • Ter Bekke, M., Levinson, S. C., Van Otterdijk, L., Kühn, M., & Holler, J. (2024). Visual bodily signals and conversational context benefit the anticipation of turn ends. Cognition, 248: 105806. doi:10.1016/j.cognition.2024.105806.

    Abstract

    The typical pattern of alternating turns in conversation seems trivial at first sight. But a closer look quickly reveals the cognitive challenges involved, with much of it resulting from the fast-paced nature of conversation. One core ingredient to turn coordination is the anticipation of upcoming turn ends so as to be able to ready oneself for providing the next contribution. Across two experiments, we investigated two variables inherent to face-to-face conversation, the presence of visual bodily signals and preceding discourse context, in terms of their contribution to turn end anticipation. In a reaction time paradigm, participants anticipated conversational turn ends better when seeing the speaker and their visual bodily signals than when they did not, especially so for longer turns. Likewise, participants were better able to anticipate turn ends when they had access to the preceding discourse context than when they did not, and especially so for longer turns. Critically, the two variables did not interact, showing that visual bodily signals retain their influence even in the context of preceding discourse. In a pre-registered follow-up experiment, we manipulated the visibility of the speaker's head, eyes and upper body (i.e. torso + arms). Participants were better able to anticipate turn ends when the speaker's upper body was visible, suggesting a role for manual gestures in turn end anticipation. Together, these findings show that seeing the speaker during conversation may critically facilitate turn coordination in interaction.
  • Terporten, R., Huizeling, E., Heidlmayr, K., Hagoort, P., & Kösem, A. (2024). The interaction of context constraints and predictive validity during sentence reading. Journal of Cognitive Neuroscience, 36(2), 225-238. doi:10.1162/jocn_a_02082.

    Abstract

    Words are not processed in isolation; instead, they are commonly embedded in phrases and sentences. The sentential context influences the perception and processing of a word. However, how this is achieved by brain processes and whether predictive mechanisms underlie this process remain a debated topic. Here, we employed an experimental paradigm in which we orthogonalized sentence context constraints and predictive validity, which was defined as the ratio of congruent to incongruent sentence endings within the experiment. While recording electroencephalography, participants read sentences with three levels of sentential context constraints (high, medium, and low). Participants were also separated into two groups that differed in their ratio of valid congruent to incongruent target words that could be predicted from the sentential context. For both groups, we investigated modulations of alpha power before, and N400 amplitude modulations after target word onset. The results reveal that the N400 amplitude gradually decreased with higher context constraints and cloze probability. In contrast, alpha power was not significantly affected by context constraint. Neither the N400 nor alpha power was significantly affected by changes in predictive validity.
  • Theakston, A. L., Lieven, E. V., Pine, J. M., & Rowland, C. F. (2005). The acquisition of auxiliary syntax: BE and HAVE. Cognitive Linguistics, 16(1), 247-277. doi:10.1515/cogl.2005.16.1.247.

    Abstract

    This study examined patterns of auxiliary provision and omission for the auxiliaries BE and HAVE in a longitudinal data set from 11 children between the ages of two and three years. Four possible explanations for auxiliary omission—a lack of lexical knowledge, performance limitations in production, the Optional Infinitive hypothesis, and patterns of auxiliary use in the input—were examined. The data suggest that although none of these accounts provides a full explanation for the pattern of auxiliary use and nonuse observed in children's early speech, integrating input-based and lexical learning-based accounts of early language acquisition within a constructivist approach appears to provide a possible framework in which to understand the patterns of auxiliary use found in the children's speech. The implications of these findings for models of children's early language acquisition are discussed.
  • Thiebaut de Schotten, M., Friedrich, P., & Forkel, S. J. (2019). One size fits all does not apply to brain lateralisation. Physics of Life Reviews, 30, 30-33. doi:10.1016/j.plrev.2019.07.007.

    Abstract

    Our understanding of the functioning of the brain is primarily based on an average model of the brain's functional organisation, and any deviation from the standard is considered as random noise or a pathological appearance. Studying pathologies has, however, greatly contributed to our understanding of brain functions. For instance, the study of naturally-occurring or surgically-induced brain lesions revealed that language is predominantly lateralised to the left hemisphere while perception/action and emotion are commonly lateralised to the right hemisphere. The lateralisation of function was subsequently replicated by task-related functional neuroimaging in the healthy population. Despite its high significance and reproducibility, this pattern of lateralisation of function is true for most, but not all participants. Bilateral and flipped representations of classically lateralised functions have been reported during development and in the healthy adult population for language, perception/action and emotion. Understanding these different functional representations at an individual level is crucial to improve the sophistication of our models and account for the variance in developmental trajectories, cognitive performance differences and clinical recovery. With the availability of in vivo neuroimaging, it has become feasible to study large numbers of participants and reliably characterise individual differences, also referred to as phenotypes. Yet, we are at the beginning of inter-individual variability modelling, and new theories of brain function will have to account for these differences across participants.
  • Thomaz, A. L., Lieven, E., Cakmak, M., Chai, J. Y., Garrod, S., Gray, W. D., Levinson, S. C., Paiva, A., & Russwinkel, N. (2019). Interaction for task instruction and learning. In K. A. Gluck, & J. E. Laird (Eds.), Interactive task learning: Humans, robots, and agents acquiring new tasks through natural interactions (pp. 91-110). Cambridge, MA: MIT Press.
  • Thothathiri, M., Basnakova, J., Lewis, A. G., & Briand, J. M. (2024). Fractionating difficulty during sentence comprehension using functional neuroimaging. Cerebral Cortex, 34(2): bhae032. doi:10.1093/cercor/bhae032.

    Abstract

    Sentence comprehension is highly practiced and largely automatic, but this belies the complexity of the underlying processes. We used functional neuroimaging to investigate garden-path sentences that cause difficulty during comprehension, in order to unpack the different processes used to support sentence interpretation. By investigating garden-path and other types of sentences within the same individuals, we functionally profiled different regions within the temporal and frontal cortices in the left hemisphere. The results revealed that different aspects of comprehension difficulty are handled by left posterior temporal, left anterior temporal, ventral left frontal, and dorsal left frontal cortices. The functional profiles of these regions likely lie along a spectrum of specificity to generality, including language-specific processing of linguistic representations, more general conflict resolution processes operating over linguistic representations, and processes for handling difficulty in general. These findings suggest that difficulty is not unitary and that there is a role for a variety of linguistic and non-linguistic processes in supporting comprehension.

  • Tilot, A. K., Vino, A., Kucera, K. S., Carmichael, D. A., Van den Heuvel, L., Den Hoed, J., Sidoroff-Dorso, A. V., Campbell, A., Porteous, D. J., St Pourcain, B., Van Leeuwen, T. M., Ward, J., Rouw, R., Simner, J., & Fisher, S. E. (2019). Investigating genetic links between grapheme-colour synaesthesia and neuropsychiatric traits. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 374: 20190026. doi:10.1098/rstb.2019.0026.

    Abstract

    Synaesthesia is a neurological phenomenon affecting perception, where triggering stimuli (e.g. letters and numbers) elicit unusual secondary sensory experiences (e.g. colours). Family-based studies point to a role for genetic factors in the development of this trait. However, the contributions of common genomic variation to synaesthesia have not yet been investigated. Here, we present the SynGenes cohort, the largest genotyped collection of unrelated people with grapheme–colour synaesthesia (n = 723). Synaesthesia has been associated with a range of other neuropsychological traits, including enhanced memory and mental imagery, as well as greater sensory sensitivity. Motivated by the prior literature on putative trait overlaps, we investigated polygenic scores derived from published genome-wide scans of schizophrenia and autism spectrum disorder (ASD), comparing our SynGenes cohort to 2181 non-synaesthetic controls. We found a very slight association between schizophrenia polygenic scores and synaesthesia (Nagelkerke's R2 = 0.0047, empirical p = 0.0027) and no significant association for scores related to ASD (Nagelkerke's R2 = 0.00092, empirical p = 0.54) or body mass index (R2 = 0.00058, empirical p = 0.60), included as a negative control. As sample sizes for studying common genomic variation continue to increase, genetic investigations of the kind reported here may yield novel insights into the shared biology between synaesthesia and other traits, to complement findings from neuropsychology and brain imaging.

  • Titus, A., Dijkstra, T., Willems, R. M., & Peeters, D. (2024). Beyond the tried and true: How virtual reality, dialog setups, and a focus on multimodality can take bilingual language production research forward. Neuropsychologia, 193: 108764. doi:10.1016/j.neuropsychologia.2023.108764.

    Abstract

    Bilinguals possess the ability of expressing themselves in more than one language, and typically do so in contextually rich and dynamic settings. Theories and models have indeed long considered context factors to affect bilingual language production in many ways. However, most experimental studies in this domain have failed to fully incorporate linguistic, social, or physical context aspects, let alone combine them in the same study. Indeed, most experimental psycholinguistic research has taken place in isolated and constrained lab settings with carefully selected words or sentences, rather than under rich and naturalistic conditions. We argue that the most influential experimental paradigms in the psycholinguistic study of bilingual language production fall short of capturing the effects of context on language processing and control presupposed by prominent models. This paper therefore aims to enrich the methodological basis for investigating context aspects in current experimental paradigms and thereby move the field of bilingual language production research forward theoretically. After considering extensions of existing paradigms proposed to address context effects, we present three far-ranging innovative proposals, focusing on virtual reality, dialog situations, and multimodality in the context of bilingual language production.
  • Torreira, F. (2012). Investigating the nature of aspirated stops in Western Andalusian Spanish. Journal of the International Phonetic Association, 42, 49-63. doi:10.1017/S0025100311000491.

    Abstract

    In Western Andalusian Spanish (WAS), [h + voiceless stop] clusters are realized as long pre- and postaspirated stops. This study investigates if a new class of stops (realized as geminates with variable degrees of pre- and postaspiration) has emerged in this dialect, or if postaspiration in these clusters results from articulatory overlap. An experiment was carried out in which WAS speakers produced [h + voiceless stop] clusters under changes in speech rate and stress location. The duration of postaspiration, measured as voice onset, did not show systematic effects of any of the experimental variables. Moreover, trade-offs were observed between voice onset and preaspiration plus closure durations. These results indicate that postaspiration in WAS [h + voiceless stop] clusters is the consequence of extensive articulatory overlap. It is further hypothesized that the lengthening of closures in WAS stops preceded by [h] results from a different gestural mechanism affecting all [hC] clusters in this dialect. From a broader perspective, since extensive overlap and consonantal lengthening do not occur in the [hC] clusters of other Spanish varieties, these findings lend support to the idea that intergestural coordination patterns can be dialect-specific.
  • Torreira, F., & Ernestus, M. (2012). Weakening of intervocalic /s/ in the Nijmegen Corpus of Casual Spanish. Phonetica, 69, 124-148. doi:10.1159/000343635.
  • Tourtouri, E. N., Delogu, F., Sikos, L., & Crocker, M. W. (2019). Rational over-specification in visually-situated comprehension and production. Journal of Cultural Cognitive Science, 3(2), 175-202. doi:10.1007/s41809-019-00032-6.

    Abstract

    Contrary to the Gricean maxims of quantity (Grice, in: Cole, Morgan (eds) Syntax and semantics: speech acts, vol III, pp 41–58, Academic Press, New York, 1975), it has been repeatedly shown that speakers often include redundant information in their utterances (over-specifications). Previous research on referential communication has long debated whether this redundancy is the result of speaker-internal or addressee-oriented processes, while it is also unclear whether referential redundancy hinders or facilitates comprehension. We present an information-theoretic explanation for the use of over-specification in visually-situated communication, which quantifies the amount of uncertainty regarding the referent as entropy (Shannon in Bell Syst Tech J 5:10, https://doi.org/10.1002/j.1538-7305.1948.tb01338.x, 1948). Examining both the comprehension and production of over-specifications, we present evidence that (a) listeners’ processing is facilitated by the use of redundancy as well as by a greater reduction of uncertainty early on in the utterance, and (b) that at least for some speakers, listeners’ processing concerns influence their encoding of over-specifications: Speakers were more likely to use redundant adjectives when these adjectives reduced entropy to a higher degree than adjectives necessary for target identification.
  • Trilsbeek, P., & Wittenburg, P. (2005). Archiving challenges. In J. Gippert, N. Himmelmann, & U. Mosel (Eds.), Essentials of language documentation (pp. 311-335). Berlin: Mouton de Gruyter.
  • Troncoso Ruiz, A., Ernestus, M., & Broersma, M. (2019). Learning to produce difficult L2 vowels: The effects of awareness-raising, exposure and feedback. In S. Calhoun, P. Escudero, M. Tabain, & P. Warren (Eds.), Proceedings of the 19th International Congress of Phonetic Sciences (ICPhS 2019) (pp. 1094-1098). Canberra, Australia: Australasian Speech Science and Technology Association Inc.
  • Trujillo, J. P., Vaitonyte, J., Simanova, I., & Ozyurek, A. (2019). Toward the markerless and automatic analysis of kinematic features: A toolkit for gesture and movement research. Behavior Research Methods, 51(2), 769-777. doi:10.3758/s13428-018-1086-8.

    Abstract

    Action, gesture, and sign represent unique aspects of human communication that use form and movement to convey meaning. Researchers typically use manual coding of video data to characterize naturalistic, meaningful movements at various levels of description, but the availability of markerless motion-tracking technology allows for quantification of the kinematic features of gestures or any meaningful human movement. We present a novel protocol for extracting a set of kinematic features from movements recorded with Microsoft Kinect. Our protocol captures spatial and temporal features, such as height, velocity, submovements/strokes, and holds. This approach is based on studies of communicative actions and gestures and attempts to capture features that are consistently implicated as important kinematic aspects of communication. We provide open-source code for the protocol, a description of how the features are calculated, a validation of these features as quantified by our protocol versus manual coders, and a discussion of how the protocol can be applied. The protocol effectively quantifies kinematic features that are important in the production (e.g., characterizing different contexts) as well as the comprehension (e.g., used by addressees to understand intent and semantics) of manual acts. The protocol can also be integrated with qualitative analysis, allowing fast and objective demarcation of movement units, providing accurate coding even of complex movements. This can be useful to clinicians, as well as to researchers studying multimodal communication or human–robot interactions. By making this protocol available, we hope to provide a tool that can be applied to understanding meaningful movement characteristics in human communication.
  • Trujillo, J. P. (2024). Motion-tracking technology for the study of gesture. In A. Cienki (Ed.), The Cambridge Handbook of Gesture Studies. Cambridge: Cambridge University Press.
  • Trujillo, J. P., & Holler, J. (2024). Conversational facial signals combine into compositional meanings that change the interpretation of speaker intentions. Scientific Reports, 14: 2286. doi:10.1038/s41598-024-52589-0.

    Abstract

    Human language is extremely versatile, combining a limited set of signals in an unlimited number of ways. However, it is unknown whether conversational visual signals feed into the composite utterances with which speakers communicate their intentions. We assessed whether different combinations of visual signals lead to different intent interpretations of the same spoken utterance. Participants viewed a virtual avatar uttering spoken questions while producing single visual signals (i.e., head turn, head tilt, eyebrow raise) or combinations of these signals. After each video, participants classified the communicative intention behind the question. We found that composite utterances combining several visual signals conveyed different meaning compared to utterances accompanied by the single visual signals. However, responses to combinations of signals were more similar to the responses to related, rather than unrelated, individual signals, indicating a consistent influence of the individual visual signals on the whole. This study therefore provides first evidence for compositional, non-additive (i.e., Gestalt-like) perception of multimodal language.

  • Trujillo, J. P., & Holler, J. (2024). Information distribution patterns in naturalistic dialogue differ across languages. Psychonomic Bulletin & Review, 31, 1723-1734. doi:10.3758/s13423-024-02452-0.

    Abstract

    The natural ecology of language is conversation, with individuals taking turns speaking to communicate in a back-and-forth fashion. Language in this context involves strings of words that a listener must process while simultaneously planning their own next utterance. It would thus be highly advantageous if language users distributed information within an utterance in a way that may facilitate this processing–planning dynamic. While some studies have investigated how information is distributed at the level of single words or clauses, or in written language, little is known about how information is distributed within spoken utterances produced during naturalistic conversation. It also is not known how information distribution patterns of spoken utterances may differ across languages. We used a set of matched corpora (CallHome) containing 898 telephone conversations conducted in six different languages (Arabic, English, German, Japanese, Mandarin, and Spanish), analyzing more than 58,000 utterances, to assess whether there is evidence of distinct patterns of information distributions at the utterance level, and whether these patterns are similar or differed across the languages. We found that English, Spanish, and Mandarin typically show a back-loaded distribution, with higher information (i.e., surprisal) in the last half of utterances compared with the first half, while Arabic, German, and Japanese showed front-loaded distributions, with higher information in the first half compared with the last half. Additional analyses suggest that these patterns may be related to word order and rate of noun and verb usage. We additionally found that back-loaded languages have longer turn transition times (i.e., time between speaker turns).

  • Truong, D. T., Adams, A. K., Paniagua, S., Frijters, J. C., Boada, R., Hill, D. E., Lovett, M. W., Mahone, E. M., Willcutt, E. G., Wolf, M., Defries, J. C., Gialluisi, A., Francks, C., Fisher, S. E., Olson, R. K., Pennington, B. F., Smith, S. D., Bosson-Heenan, J., & Gruen, J. R. (2019). Multivariate genome-wide association study of rapid automatised naming and rapid alternating stimulus in Hispanic American and African–American youth. Journal of Medical Genetics, 56(8), 557-566. doi:10.1136/jmedgenet-2018-105874.

    Abstract

    Background: Rapid automatised naming (RAN) and rapid alternating stimulus (RAS) are reliable predictors of reading disability. The underlying biology of reading disability is poorly understood. However, the high correlation among RAN, RAS and reading could be attributable to shared genetic factors that contribute to common biological mechanisms.

    Objective: To identify shared genetic factors that contribute to RAN and RAS performance using a multivariate approach.

    Methods: We conducted a multivariate genome-wide association analysis of RAN Objects, RAN Letters and RAS Letters/Numbers in a sample of 1331 Hispanic American and African–American youth. Follow-up neuroimaging genetic analysis of cortical regions associated with reading ability in an independent sample and epigenetic examination of extant data predicting tissue-specific functionality in the brain were also conducted.

    Results: Genome-wide significant effects were observed at rs1555839 (p=4.03×10−8) and replicated in an independent sample of 318 children of European ancestry. Epigenetic analysis and chromatin state models of the implicated 70 kb region of 10q23.31 support active transcription of the gene RNLS in the brain, which encodes a catecholamine metabolising protein. Chromatin contact maps of adult hippocampal tissue indicate a potential enhancer–promoter interaction regulating RNLS expression. Neuroimaging genetic analysis in an independent, multiethnic sample (n=690) showed that rs1555839 is associated with structural variation in the right inferior parietal lobule.

    Conclusion: This study provides support for a novel trait locus at chromosome 10q23.31 and proposes a potential gene–brain–behaviour relationship for targeted future functional analysis to understand underlying biological mechanisms for reading disability.

  • Tsoi, E. Y. L., Yang, W., Chan, A. W. S., & Kidd, E. (2019). Mandarin-English speaking bilingual and Mandarin speaking monolingual children’s comprehension of relative clauses. Applied Psycholinguistics, 40(4), 933-964. doi:10.1017/S0142716419000079.

    Abstract

    The current study investigated the comprehension of subject and object relative clauses (RCs) in bilingual Mandarin-English children (N = 55, Mage = 7;5, SD = 1;8) and language-matched monolingual Mandarin-speaking children (N = 59, Mage = 5;4, SD = 0;7). The children completed a referent selection task that tested their comprehension of subject and object RCs, and standardised assessments of vocabulary knowledge. Results showed a very similar pattern of responding in both groups. In comparison to past studies of Cantonese, the bilingual and monolingual children both showed a significant subject-over-object RC advantage. An error analysis suggested that the children’s difficulty with object RCs reflected the tendency to interpret the sentential subject as the head noun. A subsequent corpus analysis suggested that children’s difficulty with object RCs may be in part due to distributional information favouring subject RC analyses. Individual differences analyses suggested cross-linguistic transfer from English to Mandarin in the bilingual children at the individual but not the group level, with the results indicating that comparative English-dominance makes children vulnerable to error.
  • Tsuji, S., Gonzalez Gomez, N., Medina, V., Nazzi, T., & Mazuka, R. (2012). The labial–coronal effect revisited: Japanese adults say pata, but hear tapa. Cognition, 125, 413-428. doi:10.1016/j.cognition.2012.07.017.

    Abstract

    The labial–coronal effect has originally been described as a bias to initiate a word with a labial consonant–vowel–coronal consonant (LC) sequence. This bias has been explained with constraints on the human speech production system, and its perceptual correlates have motivated the suggestion of a perception–production link. However, previous studies exclusively considered languages in which LC sequences are globally more frequent than their counterpart. The current study examined the LC bias in speakers of Japanese, a language that has been claimed to possess more CL than LC sequences. We first conducted an analysis of Japanese corpora that qualified this claim, and identified a subgroup of consonants (plosives) exhibiting a CL bias. Second, focusing on this subgroup of consonants, we found diverging results for production and perception such that Japanese speakers exhibited an articulatory LC bias, but a perceptual CL bias. The CL perceptual bias, however, was modulated by language of presentation, and was only present for stimuli recorded by a Japanese, but not a French, speaker. A further experiment with native speakers of French showed the opposite effect, with an LC bias for French stimuli only. Overall, we find support for a universal, articulatory motivated LC bias in production, supporting a motor explanation of the LC effect, while perceptual biases are influenced by distributional frequencies of the native language.
  • Tuinman, A., Mitterer, H., & Cutler, A. (2012). Resolving ambiguity in familiar and unfamiliar casual speech. Journal of Memory and Language, 66, 530-544. doi:10.1016/j.jml.2012.02.001.

    Abstract

    In British English, the phrase Canada aided can sound like Canada raided if the speaker links the two vowels at the word boundary with an intrusive /r/. There are subtle phonetic differences between an onset /r/ and an intrusive /r/, however. With cross-modal priming and eye-tracking, we examine how native British English listeners and non-native (Dutch) listeners deal with the lexical ambiguity arising from this language-specific connected speech process. Together the results indicate that the presence of /r/ initially activates competing words for both listener groups; however, the native listeners rapidly exploit the phonetic cues and achieve correct lexical selection. In contrast, these advanced L2 listeners to English failed to recover from the /r/-induced competition, and failed to match native performance in either task. The /r/-intrusion process, which adds a phoneme to speech input, thus causes greater difficulty for L2 listeners than connected-speech processes which alter or delete phonemes.
  • Turco, G., & Gubian, M. (2012). L1 Prosodic transfer and priming effects: A quantitative study on semi-spontaneous dialogues. In Q. Ma, H. Ding, & D. Hirst (Eds.), Proceedings of the 6th International Conference on Speech Prosody (pp. 386-389). International Speech Communication Association (ISCA).

    Abstract

    This paper represents a pilot investigation of primed accentuation patterns produced by advanced Dutch speakers of Italian as a second language (L2). Contrastive accent patterns within prepositional phrases were elicited in a semi-spontaneous dialogue entertained with a confederate native speaker of Italian. The aim of the analysis was to compare learners’ contrastive accentual configurations induced by the confederate speaker’s prime against those produced by Italian and Dutch natives in the same testing conditions. F0 and speech rate data were analysed by applying powerful data-driven techniques available in the Functional Data Analysis statistical framework. Results reveal different accentual configurations in L1 and L2 Italian in response to the confederate’s prime. We conclude that learners’ accentual patterns mirror those produced by their L1 control group (prosodic-transfer hypothesis), although the hypothesis of a transient priming effect on learners’ choice of contrastive patterns cannot be completely ruled out.
  • Udden, J., & Bahlmann, J. (2012). A rostro-caudal gradient of structured sequence processing in the left inferior frontal gyrus [Review article]. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 367, 2023-2032. doi:10.1098/rstb.2012.0009.

    Abstract

    In this paper, we present two novel perspectives on the function of the left inferior frontal gyrus (LIFG). First, a structured sequence processing perspective facilitates the search for functional segregation within the LIFG and provides a way to express common aspects across cognitive domains including language, music and action. Converging evidence from functional magnetic resonance imaging and transcranial magnetic stimulation studies suggests that the LIFG is engaged in sequential processing in artificial grammar learning, independently of particular stimulus features of the elements (whether letters, syllables or shapes are used to build up sequences). The LIFG has been repeatedly linked to processing of artificial grammars across all different grammars tested, whether they include non-adjacent dependencies or mere adjacent dependencies. Second, we apply the sequence processing perspective to understand how the functional segregation of semantics, syntax and phonology in the LIFG can be integrated in the general organization of the lateral prefrontal cortex (PFC). Recently, it was proposed that the functional organization of the lateral PFC follows a rostro-caudal gradient, such that more abstract processing in cognitive control is subserved by more rostral regions of the lateral PFC. We explore the literature from the viewpoint that functional segregation within the LIFG can be embedded in a general rostro-caudal abstraction gradient in the lateral PFC. If the lateral PFC follows a rostro-caudal abstraction gradient, then this predicts that the LIFG follows the same principles, but this prediction has not yet been tested or explored in the LIFG literature. Integration might provide further insights into the functional architecture of the LIFG and the lateral PFC.
  • Udden, J., Ingvar, M., Hagoort, P., & Petersson, K. M. (2012). Implicit acquisition of grammars with crossed and nested non-adjacent dependencies: Investigating the push-down stack model. Cognitive Science, 36, 1078-1101. doi:10.1111/j.1551-6709.2012.01235.x.

    Abstract

    A recent hypothesis in empirical brain research on language is that the fundamental difference between animal and human communication systems is captured by the distinction between finite-state and more complex phrase-structure grammars, such as context-free and context-sensitive grammars. However, the relevance of this distinction for the study of language as a neurobiological system has been questioned and it has been suggested that a more relevant and partly analogous distinction is that between non-adjacent and adjacent dependencies. Online memory resources are central to the processing of non-adjacent dependencies as information has to be maintained across intervening material. One proposal is that an external memory device in the form of a limited push-down stack is used to process non-adjacent dependencies. We tested this hypothesis in an artificial grammar learning paradigm where subjects acquired non-adjacent dependencies implicitly. Generally, we found no qualitative differences between the acquisition of non-adjacent dependencies and adjacent dependencies. This suggests that although the acquisition of non-adjacent dependencies requires more exposure to the acquisition material, it utilizes the same mechanisms used for acquiring adjacent dependencies. We challenge the push-down stack model further by testing its processing predictions for nested and crossed multiple non-adjacent dependencies. The push-down stack model is partly supported by the results, and we suggest that stack-like properties are some among many natural properties characterizing the underlying neurophysiological mechanisms that implement the online memory resources used in language and structured sequence processing.
  • Udden, J. (2012). Language as structured sequences: a causal role of Broca's region in sequence processing. PhD Thesis, Karolinska Institutet, Stockholm.

    Abstract

    In this thesis I approach language as a neurobiological system. I defend a sequence processing perspective on language and on the function of Broca's region in the left inferior frontal gyrus (LIFG). This perspective provides a way to express common structural aspects of language, music and action, which all engage the LIFG. It also facilitates the comparison of human language and structured sequence processing in animals. Research on infants, song-birds and non-human primates suggests an interesting role for non-adjacent dependencies in language acquisition and the evolution of language. In a series of experimental studies using a sequence processing paradigm called artificial grammar learning (AGL), we have investigated sequences with adjacent and non-adjacent dependencies. Our behavioral and transcranial magnetic stimulation (TMS) studies show that healthy subjects successfully discriminate between grammatical and non-grammatical sequences after having acquired aspects of a grammar with nested or crossed non-adjacent dependencies implicitly. There were no indications of separate acquisition/processing mechanisms for sequence processing of adjacent and non-adjacent dependencies, although acquisition of non-adjacent dependencies takes more time. In addition, we studied the causal role of Broca's region in processing artificial syntax. Although syntactic processing has already been robustly correlated with activity in Broca's region, the causal role of Broca's region in syntactic processing, in particular syntactic comprehension, has been unclear. Previous lesion studies have shown that a lesion in Broca's region is neither a necessary nor a sufficient condition to induce, e.g., syntactic deficits. Subsequent to transcranial magnetic stimulation of Broca's region, discrimination of grammatical sequences with non-adjacent dependencies from non-grammatical sequences was impaired, compared to when a language-irrelevant control region (vertex) was stimulated. Two additional experiments show perturbation of discrimination performance for grammars with adjacent dependencies after stimulation of Broca's region. Together, these results support the view that Broca's region plays a causal role in implicit structured sequence processing.
  • Udden, J., Hulten, A., Bendt, K., Mineroff, Z., Kucera, K. S., Vino, A., Fedorenko, E., Hagoort, P., & Fisher, S. E. (2019). Towards robust functional neuroimaging genetics of cognition. Journal of Neuroscience, 39(44), 8778-8787. doi:10.1523/JNEUROSCI.0888-19.2019.

    Abstract

    A commonly held assumption in cognitive neuroscience is that, because measures of human brain function are closer to underlying biology than distal indices of behavior/cognition, they hold more promise for uncovering genetic pathways. Supporting this view is an influential fMRI-based study of sentence reading/listening by Pinel et al. (2012), who reported that common DNA variants in specific candidate genes were associated with altered neural activation in language-related regions of healthy individuals that carried them. In particular, different single-nucleotide polymorphisms (SNPs) of FOXP2 correlated with variation in task-based activation in left inferior frontal and precentral gyri, whereas a SNP at the KIAA0319/TTRAP/THEM2 locus was associated with variable functional asymmetry of the superior temporal sulcus. Here, we directly test each claim using a closely matched neuroimaging genetics approach in independent cohorts comprising 427 participants, four times larger than the original study of 94 participants. Despite demonstrating power to detect associations with substantially smaller effect sizes than those of the original report, we do not replicate any of the reported associations. Moreover, formal Bayesian analyses reveal substantial to strong evidence in support of the null hypothesis (no effect). We highlight key aspects of the original investigation, common to functional neuroimaging genetics studies, which could have yielded elevated false-positive rates. Genetic accounts of individual differences in cognitive functional neuroimaging are likely to be as complex as behavioral/cognitive tests, involving many common genetic variants, each of tiny effect. Reliable identification of true biological signals requires large sample sizes, power calculations, and validation in independent cohorts with equivalent paradigms.

    SIGNIFICANCE STATEMENT A pervasive idea in neuroscience is that neuroimaging-based measures of brain function, being closer to underlying neurobiology, are more amenable for uncovering links to genetics. This is a core assumption of prominent studies that associate common DNA variants with altered activations in task-based fMRI, despite using samples (10–100 people) that lack power for detecting the tiny effect sizes typical of genetically complex traits. Here, we test central findings from one of the most influential prior studies. Using matching paradigms and substantially larger samples, coupled to power calculations and formal Bayesian statistics, our data strongly refute the original findings. We demonstrate that neuroimaging genetics with task-based fMRI should be subject to the same rigorous standards as studies of other complex traits.
  • Ullman, M. T., Bulut, T., & Walenski, M. (2024). Hijacking limitations of working memory load to test for composition in language. Cognition, 251: 105875. doi:10.1016/j.cognition.2024.105875.

    Abstract

    Although language depends on storage and composition, just what is stored or (de)composed remains unclear. We leveraged working memory load limitations to test for composition, hypothesizing that decomposed forms should particularly tax working memory. We focused on a well-studied paradigm, English inflectional morphology. We predicted that (compositional) regulars should be harder to maintain in working memory than (non-compositional) irregulars, using a 3-back production task. Frequency, phonology, orthography, and other potentially confounding factors were controlled for. Compared to irregulars, regulars and their accompanying −s/−ing-affixed filler items yielded more errors. Underscoring the decomposition of only regulars, regulars yielded more bare-stem (e.g., walk) and stem affixation errors (walks/walking) than irregulars, whereas irregulars yielded more past-tense-form affixation errors (broughts/tolded). In line with previous evidence that regulars can be stored under certain conditions, the regular-irregular difference held specifically for phonologically consistent (not inconsistent) regulars, in particular for both low and high frequency consistent regulars in males, but only for low frequency consistent regulars in females. Sensitivity analyses suggested the findings were robust. The study further elucidates the computation of inflected forms, and introduces a simple diagnostic for linguistic composition.

    Additional information

    Data availability
  • Uluşahin, O., Bosker, H. R., McQueen, J. M., & Meyer, A. S. (2024). Knowledge of a talker’s f0 affects subsequent perception of voiceless fricatives. In Y. Chen, A. Chen, & A. Arvaniti (Eds.), Proceedings of Speech Prosody 2024 (pp. 432-436).

    Abstract

    The human brain deals with the infinite variability of speech through multiple mechanisms. Some of them rely solely on information in the speech input (i.e., signal-driven) whereas some rely on linguistic or real-world knowledge (i.e., knowledge-driven). Many signal-driven perceptual processes rely on the enhancement of acoustic differences between incoming speech sounds, producing contrastive adjustments. For instance, when an ambiguous voiceless fricative is preceded by a high fundamental frequency (f0) sentence, the fricative is perceived as having a lower spectral center of gravity (CoG). However, it is not clear whether knowledge of a talker’s typical f0 can lead to similar contrastive effects. This study investigated a possible talker f0 effect on fricative CoG perception. In the exposure phase, two groups of participants (N=16 each) heard the same talker at high or low f0 for 20 minutes. Later, in the test phase, participants rated fixed-f0 /?ɔk/ tokens as being /sɔk/ (i.e., high CoG) or /ʃɔk/ (i.e., low CoG), where /?/ represents a fricative from a 5-step /s/-/ʃ/ continuum. Surprisingly, the data revealed the opposite of our contrastive hypothesis, whereby hearing high f0 instead biased perception towards high CoG. Thus, we demonstrated that talker f0 information affects fricative CoG perception.
  • Urrutia, M., de Vega, M., & Bastiaansen, M. C. M. (2012). Understanding counterfactuals in discourse modulates ERP and oscillatory gamma rhythms in the EEG. Brain Research, 1455, 40-55. doi:10.1016/j.brainres.2012.03.032.

    Abstract

    This study provides ERP and oscillatory dynamics data associated with the comprehension of narratives involving counterfactual events. Participants were given short stories describing an initial situation (“Marta wanted to plant flowers in her garden…”), followed by a critical sentence describing a new situation in either a factual (“Since she found a spade, she started to dig a hole”) or counterfactual format (“If she had found a spade, she would have started to dig a hole”), and then a continuation sentence that was either related to the initial situation (“she bought a spade”) or to the new one (“she planted roses”). The ERPs recorded for the continuation sentences related to the initial situation showed larger negativity after factuals than after counterfactuals, suggesting that the counterfactual's presupposition – the events did not occur – prevents updating the here-and-now of discourse. By contrast, continuation sentences related to the new situation elicited similar ERPs under both factual and counterfactual contexts, suggesting that counterfactuals also activate momentarily an alternative “as if” meaning. However, the reduction of gamma power following counterfactuals, suggests that the “as if” meaning is not integrated into the discourse, nor does it contribute to semantic unification processes.
  • Uzbas, F., Sezerman, U., Hartl, L., Kubicek, C. P., & Seiboth, B. (2012). A homologous production system for Trichoderma reesei secreted proteins in a cellulase-free background. Applied Microbiology and Biotechnology, 93, 1601-1608. doi:10.1007/s00253-011-3674-8.

    Abstract

    Recent demands for the production of biofuels from lignocellulose led to an increased interest in engineered cellulases from Trichoderma reesei or other fungal sources. While the methods to generate such mutant cellulases on DNA level are straightforward, there is often a bottleneck in their production since a correct posttranslational processing of these enzymes is needed to obtain highly active enzymes. Their production and subsequent enzymatic analysis in the homologous host T. reesei is, however, often disturbed by the concomitant production of other endogenous cellulases. As a useful alternative, we tested the production of cellulases in T. reesei in a genetic background where cellulase formation has been impaired by deletion of the major cellulase transcriptional activator gene xyr1. Three cellulase genes (cel7a, cel7b, and cel12a) were expressed under the promoter regions of the two highly expressed genes tef1 (encoding translation elongation factor 1-alpha) or cdna1 (encoding the hypothetical protein Trire2:110879). When cultivated on d-glucose as carbon source, the Δxyr1 strain secreted all three cellulases into the medium. Related to the introduced gene copy number, the cdna1 promoter appeared to be superior to the tef1 promoter. No signs of proteolysis were detected, and the individual cellulases could be assayed over a background essentially free of other cellulases. Hence this system can be used as a vehicle for rapid and high-throughput testing of cellulase muteins in a homologous background.
  • Uzbas, F., Opperer, F., Sönmezer, C., Shaposhnikov, D., Sass, S., Krendl, C., Angerer, P., Theis, F. J., Mueller, N. S., & Drukker, M. (2019). BART-Seq: Cost-effective massively parallelized targeted sequencing for genomics, transcriptomics, and single-cell analysis. Genome Biology, 20: 155. doi:10.1186/s13059-019-1748-6.

    Abstract

    We describe a highly sensitive, quantitative, and inexpensive technique for targeted sequencing of transcript cohorts or genomic regions from thousands of bulk samples or single cells in parallel. Multiplexing is based on a simple method that produces extensive matrices of diverse DNA barcodes attached to invariant primer sets, which are all pre-selected and optimized in silico. By applying the matrices in a novel workflow named Barcode Assembly foR Targeted Sequencing (BART-Seq), we analyze developmental states of thousands of single human pluripotent stem cells, either in different maintenance media or upon Wnt/β-catenin pathway activation, which identifies the mechanisms of differentiation induction. Moreover, we apply BART-Seq to the genetic screening of breast cancer patients and identify BRCA mutations with very high precision. The processing of thousands of samples and dynamic range measurements that outperform global transcriptomics techniques makes BART-Seq the first targeted sequencing technique suitable for numerous research applications.

    Additional information

    additional files
  • van der Burght, C. L., Goucha, T., Friederici, A. D., Kreitewolf, J., & Hartwigsen, G. (2019). Intonation guides sentence processing in the left inferior frontal gyrus. Cortex, 117, 122-134. doi:10.1016/j.cortex.2019.02.011.

    Abstract

    Speech prosody, the variation in sentence melody and rhythm, plays a crucial role in sentence comprehension. Specifically, changes in intonational pitch along a sentence can affect our understanding of who did what to whom. To date, it remains unclear how the brain processes this particular use of intonation and which brain regions are involved. In particular, one central matter of debate concerns the lateralisation of intonation processing. To study the role of intonation in sentence comprehension, we designed a functional MRI experiment in which participants listened to spoken sentences. Critically, the interpretation of these sentences depended on either intonational or grammatical cues. Our results showed stronger functional activity in the left inferior frontal gyrus (IFG) when the intonational cue was crucial for sentence comprehension compared to when it was not. When instead a grammatical cue was crucial for sentence comprehension, we found involvement of an overlapping region in the left IFG, as well as in a posterior temporal region. A further analysis revealed that the lateralisation of intonation processing depends on its role in syntactic processing: activity in the IFG was lateralised to the left hemisphere when intonation was the only source of information to comprehend the sentence. In contrast, activity in the IFG was right-lateralised when intonation did not contribute to sentence comprehension. Together, these results emphasise the key role of the left IFG in sentence comprehension, showing the importance of this region when intonation establishes sentence structure. Furthermore, our results provide evidence for the theory that the lateralisation of prosodic processing is modulated by its linguistic role.
  • Van Dooren, A., Tulling, M., Cournane, A., & Hacquard, V. (2019). Discovering modal polysemy: Lexical aspect might help. In M. Brown, & B. Dailey (Eds.), BUCLD 43: Proceedings of the 43rd annual Boston University Conference on Language Development (pp. 203-216). Sommerville, MA: Cascadilla Press.
  • Van Leeuwen, T. M., Van Petersen, E., Burghoorn, F., Dingemanse, M., & Van Lier, R. (2019). Autistic traits in synaesthesia: Atypical sensory sensitivity and enhanced perception of details. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 374: 20190024. doi:10.1098/rstb.2019.0024.

    Abstract

    In synaesthetes, specific sensory stimuli (e.g., black letters) elicit additional experiences (e.g., colour). Synaesthesia is highly prevalent among individuals with autism spectrum disorder, but the mechanisms of this co-occurrence are not clear. We hypothesized that autism and synaesthesia share atypical sensory sensitivity and perception. We assessed autistic traits, sensory sensitivity, and visual perception in two synaesthete populations. In Study 1, synaesthetes (N=79, of different types) scored higher than non-synaesthetes (N=76) on the Attention-to-detail and Social skills subscales of the Autism Spectrum Quotient indexing autistic traits, and on the Glasgow Sensory Questionnaire indexing sensory hypersensitivity and hyposensitivity which frequently occur in autism. Synaesthetes performed two local/global visual tasks because individuals with autism typically show a bias toward detail processing. In synaesthetes, elevated motion coherence thresholds suggested reduced global motion perception and higher accuracy on an embedded figures task suggested enhanced local perception. In Study 2, sequence-space synaesthetes (N=18) completed the same tasks. Questionnaire and embedded figures results qualitatively resembled Study 1 results but no significant group differences with non-synaesthetes (N=20) were obtained. Unexpectedly, sequence-space synaesthetes had reduced motion coherence thresholds. Altogether, our studies suggest atypical sensory sensitivity and a bias towards detail processing are shared features of synaesthesia and autism spectrum disorder.
  • Van Paridon, J., Roelofs, A., & Meyer, A. S. (2019). A lexical bottleneck in shadowing and translating of narratives. Language, Cognition and Neuroscience, 34(6), 803-812. doi:10.1080/23273798.2019.1591470.

    Abstract

    In simultaneous interpreting, speech comprehension and production processes have to be coordinated in close temporal proximity. To examine the coordination, Dutch-English bilingual participants were presented with narrative fragments recorded in English at speech rates varying from 100 to 200 words per minute and they were asked to translate the fragments into Dutch (interpreting) or repeat them in English (shadowing). Interpreting yielded more errors than shadowing at every speech rate, and increasing speech rate had a stronger negative effect on interpreting than on shadowing. To understand the differential effect of speech rate, a computational model was created of sub-lexical and lexical processes in comprehension and production. Computer simulations revealed that the empirical findings could be captured by assuming a bottleneck preventing simultaneous lexical selection in production and comprehension. To conclude, our empirical and modelling results suggest the existence of a lexical bottleneck that limits the translation of narratives at high speed.

    Additional information

    plcp_a_1591470_sm5183.docx
  • Van den Bos, E., & Poletiek, F. H. (2019). Correction to: Effects of grammar complexity on artificial grammar learning (vol 36, pg 1122, 2008). Memory & Cognition, 47(8), 1619-1620. doi:10.3758/s13421-019-00946-0.
  • Van Berkum, J. J. A. (1986). De cognitieve psychologie op zoek naar grondslagen. Kennis en Methode: Tijdschrift voor wetenschapsfilosofie en methodologie, X, 348-360.
  • Van Valin Jr., R. D., & Guerrero, L. (2012). De sujetos, pivotes y controladores: El argumento sintácticamente privilegiado. In R. Marial, L. Guerrero, & C. González Vergara (Eds.), El funcionalismo en la teoría lingüística: La gramática del papel y la referencia (pp. 247-267). Madrid: Akal.

    Abstract

    Translated and expanded version of 'Privileged syntactic arguments, pivots and controllers'.
  • Van Berkum, J. J. A. (1986). Doordacht gevoel: Emoties als informatieverwerking. De Psycholoog, 21(9), 417-423.
  • Van den Broek, G. S. E., Segers, E., Van Rijn, H., Takashima, A., & Verhoeven, L. (2019). Effects of elaborate feedback during practice tests: Costs and benefits of retrieval prompts. Journal of Experimental Psychology: Applied, 25(4), 588-601. doi:10.1037/xap0000212.

    Abstract

    This study explores the effect of feedback with hints on students’ recall of words. In three classroom experiments, high school students individually practiced vocabulary words through computerized retrieval practice with either standard show-answer feedback (display of answer) or hints feedback after incorrect responses. Hints feedback gave students a second chance to find the correct response using orthographic (Experiment 1), mnemonic (Experiment 2), or cross-language hints (Experiment 3). During practice, hints led to a shift of practice time from further repetitions to longer feedback processing but did not reduce (repeated) errors. There was no effect of feedback on later recall except when the hints from practice were also available on the test, indicating limited transfer of practice with hints to later recall without hints (in Experiments 1 and 2). Overall, hints feedback was not preferable over show-answer feedback. The common notion that hints are beneficial may not hold when the total practice time is limited.
  • Van den Brink, D., Van Berkum, J. J. A., Bastiaansen, M. C. M., Tesink, C. M. J. Y., Kos, M., Buitelaar, J. K., & Hagoort, P. (2012). Empathy matters: ERP evidence for inter-individual differences in social language processing. Social, Cognitive and Affective Neuroscience, 7, 173-182. doi:10.1093/scan/nsq094.

    Abstract

    When an adult claims he cannot sleep without his teddy bear, people tend to react surprised. Language interpretation is, thus, influenced by social context, such as who the speaker is. The present study reveals inter-individual differences in brain reactivity to social aspects of language. Whereas women showed brain reactivity when stereotype-based inferences about a speaker conflicted with the content of the message, men did not. This sex difference in social information processing can be explained by a specific cognitive trait, one’s ability to empathize. Individuals who empathize to a greater degree revealed larger N400 effects (as well as a larger increase in γ-band power) to socially relevant information. These results indicate that individuals with high-empathizing skills are able to rapidly integrate information about the speaker with the content of the message, as they make use of voice-based inferences about the speaker to process language in a top-down manner. Alternatively, individuals with lower empathizing skills did not use information about social stereotypes in implicit sentence comprehension, but rather took a more bottom-up approach to the processing of these social pragmatic sentences.
  • Van Berkum, J. J. A., & Nieuwland, M. S. (2019). A cognitive neuroscience perspective on language comprehension in context. In P. Hagoort (Ed.), Human language: From genes and brain to behavior (pp. 429-442). Cambridge, MA: MIT Press.
  • Van Donselaar, W., Koster, M., & Cutler, A. (2005). Exploring the role of lexical stress in lexical recognition. Quarterly Journal of Experimental Psychology, 58A(2), 251-273. doi:10.1080/02724980343000927.

    Abstract

    Three cross-modal priming experiments examined the role of suprasegmental information in the processing of spoken words. All primes consisted of truncated spoken Dutch words. Recognition of visually presented word targets was facilitated by prior auditory presentation of the first two syllables of the same words as primes, but only if they were appropriately stressed (e.g., OKTOBER preceded by okTO-); inappropriate stress, compatible with another word (e.g., OKTOBER preceded by OCto-, the beginning of octopus), produced inhibition. Monosyllabic fragments (e.g., OC-) also produced facilitation when appropriately stressed; if inappropriately stressed, they produced neither facilitation nor inhibition. The bisyllabic fragments that were compatible with only one word produced facilitation to semantically associated words, but inappropriate stress caused no inhibition of associates. The results are explained within a model of spoken-word recognition involving competition between simultaneously activated phonological representations followed by activation of separate conceptual representations for strongly supported lexical candidates; at the level of the phonological representations, activation is modulated by both segmental and suprasegmental information.
  • Van Berkum, J. J. A., Brown, C. M., Zwitserlood, P., Kooijman, V., & Hagoort, P. (2005). Anticipating upcoming words in discourse: Evidence from ERPs and reading times. Journal of Experimental Psychology: Learning, Memory, and Cognition, 31(3), 443-467. doi:10.1037/0278-7393.31.3.443.

    Abstract

    The authors examined whether people can use their knowledge of the wider discourse rapidly enough to anticipate specific upcoming words as a sentence is unfolding. In an event-related brain potential (ERP) experiment, subjects heard Dutch stories that supported the prediction of a specific noun. To probe whether this noun was anticipated at a preceding indefinite article, stories were continued with a gender-marked adjective whose suffix mismatched the upcoming noun's syntactic gender. Prediction-inconsistent adjectives elicited a differential ERP effect, which disappeared in a no-discourse control experiment. Furthermore, in self-paced reading, prediction-inconsistent adjectives slowed readers down before the noun. These findings suggest that people can indeed predict upcoming words in fluent discourse and, moreover, that these predicted words can immediately begin to participate in incremental parsing operations.
  • Van Halteren, H., Baayen, R. H., Tweedie, F., Haverkort, M., & Neijt, A. (2005). New machine learning methods demonstrate the existence of a human stylome. Journal of Quantitative Linguistics, 12(1), 65-77. doi:10.1080/09296170500055350.

    Abstract

    Earlier research has shown that established authors can be distinguished by measuring specific properties of their writings, their stylome as it were. Here, we examine writings of less experienced authors. We succeed in distinguishing between these authors with a very high probability, which implies that a stylome exists even in the general population. However, the number of traits needed for so successful a distinction is an order of magnitude larger than assumed so far. Furthermore, traits referring to syntactic patterns prove less distinctive than traits referring to vocabulary, but much more distinctive than expected on the basis of current generativist theories of language learning.
  • Van Wijk, C., & Kempen, G. (1980). Functiewoorden: Een inventarisatie voor het Nederlands. ITL: Review of Applied Linguistics, 53-68.
  • Van Leeuwen, E. J. C., Cronin, K. A., Haun, D. B. M., Mundry, R., & Bodamer, M. D. (2012). Neighbouring chimpanzee communities show different preferences in social grooming behaviour. Proceedings of the Royal Society B: Biological Sciences, 279, 4362-4367. doi:10.1098/rspb.2012.1543.

    Abstract

    Grooming handclasp (GHC) behaviour was originally advocated as the first evidence of social culture in chimpanzees owing to the finding that some populations engage in the behaviour and others do not. To date, however, the validity of this claim and the extent to which this social behaviour varies between groups is unclear. Here, we measured (i) variation, (ii) durability and (iii) expansion of the GHC behaviour in four chimpanzee communities that do not systematically differ in their genetic backgrounds and live in similar ecological environments. Ninety chimpanzees were studied for a total of 1029 h; 1394 GHC bouts were observed between 2010 and 2012. Critically, GHC style (defined by points of bodily contact) could be systematically linked to the chimpanzee’s group identity, showed temporal consistency both within- and between-groups, and could not be accounted for by the arm-length differential between partners. GHC has been part of the behavioural repertoire of the chimpanzees under study for more than 9 years (surpassing the durability criterion) and spread across generations (surpassing the expansion criterion). These results strongly indicate that chimpanzees’ social behaviour is not only motivated by innate predispositions and individual inclinations, but may also be partly cultural in nature.
  • Van Valin Jr., R. D. (2005). Exploring the syntax-semantics interface. Cambridge University Press.

    Abstract

    Language is a system of communication in which grammatical structures function to express meaning in context. While all languages can achieve the same basic communicative ends, they each use different means to achieve them, particularly in the divergent ways that syntax, semantics and pragmatics interact across languages. This book looks in detail at how structure, meaning, and communicative function interact in human languages. Working within the framework of Role and Reference Grammar (RRG), Van Valin proposes a set of rules, called the ‘linking algorithm’, which relates syntactic and semantic representations to each other, with discourse-pragmatics playing a role in the linking. Using this model, he discusses the full range of grammatical phenomena, including the structures of simple and complex sentences, verb and argument structure, voice, reflexivization and extraction restrictions. Clearly written and comprehensive, this book will be welcomed by all those working on the interface between syntax, semantics and pragmatics.
  • Van Valin Jr., R. D. (1994). Extraction restrictions, competing theories and the argument from the poverty of the stimulus. In S. D. Lima, R. Corrigan, & G. K. Iverson (Eds.), The reality of linguistic rules (pp. 243-259). Amsterdam: Benjamins.
  • Van Bergen, G., Flecken, M., & Wu, R. (2019). Rapid target selection of object categories based on verbs: Implications for language-categorization interactions. Psychophysiology, 56(9): e13395. doi:10.1111/psyp.13395.

    Abstract

    Although much is known about how nouns facilitate object categorization, very little is known about how verbs (e.g., posture verbs such as stand or lie) facilitate object categorization. Native Dutch speakers are a unique population to investigate this issue with because the configurational categories distinguished by staan (to stand) and liggen (to lie) are inherent in everyday Dutch language. Using an ERP component (N2pc), four experiments demonstrate that selection of posture verb categories is rapid (between 220–320 ms). The effect was attenuated, though present, when removing the perceptual distinction between categories. A similar attenuated effect was obtained in native English speakers, where the category distinction is less familiar, and when category labels were implicit for native Dutch speakers. Our results are among the first to demonstrate that category search based on verbs can be rapid, although extensive linguistic experience and explicit labels may not be necessary to facilitate categorization in this case.

    Additional information

    psyp13395-sup-0001-appendixs1.pdf
  • Van Leeuwen, E. J. C., Cronin, K. A., & Haun, D. B. M. (2019). Reply to Farine and Aplin: Chimpanzees choose their association and interaction partners. Proceedings of the National Academy of Sciences of the United States of America, 116(34), 16676-16677. doi:10.1073/pnas.1905745116.

    Abstract

    Farine and Aplin (1) question the validity of our study reporting group-specific social dynamics in chimpanzees (2). As an alternative to our approach, Farine and Aplin advance a “prenetwork permutation” methodology that tests against random assortment (3). We appreciate Farine and Aplin’s interest and applied their suggested approaches to our data. The new analyses revealed highly similar results to those of our initial approach. We further dispel Farine and Aplin’s critique by outlining its incompatibility with our study system, methodology, and analysis. First, when we apply the suggested prenetwork permutation to our proximity dataset, we again find significant population-level differences in association rates, while controlling for population size [as derived from Farine and Aplin’s script (4); original result, P < 0.0001; results including prenetwork permutation, P < 0.0001]. Furthermore, when we …
  • Van Berkum, J. J. A. (2012). The electrophysiology of discourse and conversation. In M. J. Spivey, K. McRae, & M. F. Joanisse (Eds.), The Cambridge handbook of psycholinguistics (pp. 589-614). New York: Cambridge University Press.

    Abstract

    Introduction: What’s happening in the brains of two people having a conversation? One reasonable guess is that in the fMRI scanner we’d see most of their brains light up. Another is that their EEG will be a total mess, reflecting dozens of interacting neuronal systems. Conversation recruits all of the basic language systems reviewed in this book. It also heavily taxes cognitive systems more likely to be found in handbooks of memory, attention and control, or social cognition (Brownell & Friedman, 2001). With most conversations going beyond the single utterance, for instance, they place a heavy load on episodic memory, as well as on the systems that allow us to reallocate cognitive resources to meet the demands of a dynamically changing situation. Furthermore, conversation is a deeply social and collaborative enterprise (Clark, 1996; this volume), in which interlocutors have to keep track of each other's state of mind and coordinate on such things as taking turns, establishing common ground, and the goals of the conversation.
  • Van den Boomen, C., Fahrenfort, J. J., Snijders, T. M., & Kemner, C. (2019). Slow segmentation of faces in Autism Spectrum Disorder. Neuropsychologia, 127, 1-8. doi:10.1016/j.neuropsychologia.2019.02.005.

    Abstract

    Atypical visual segmentation, affecting object perception, might contribute to face processing problems in Autism Spectrum Disorder (ASD). The current study investigated impairments in visual segmentation of faces in ASD. Thirty participants (ASD: 16; Control: 14) viewed texture-defined faces, houses, and homogeneous images, while electroencephalographic and behavioral responses were recorded. The ASD group showed slower face-segmentation related brain activity and longer segmentation reaction times than the control group, but no difference in house-segmentation related activity or behavioral performance. Furthermore, individual differences in face-segmentation but not house-segmentation correlated with score on the Autism Quotient. Segmentation is thus selectively impaired for faces in ASD, and relates to the degree of ASD traits. Face segmentation relates to recurrent connectivity from the fusiform face area (FFA) to the visual cortex. These findings thus suggest that atypical connectivity from the FFA might contribute to delayed face processing in ASD.

    Additional information

    Supplementary material
  • Van Valin Jr., R. D. (2012). Some issues in the linking between syntax and semantics in relative clauses. In B. Comrie, & Z. Estrada-Fernández (Eds.), Relative Clauses in languages of the Americas: A typological overview (pp. 47-64). Amsterdam: Benjamins.

    Abstract

    Relative clauses present an interesting challenge for theories of the syntax-semantics interface, because one element functions simultaneously in the matrix and relative clauses. The exact nature of the challenge depends on whether the relative clause is externally-headed or internally-headed. Standard analyses of relative clauses are grounded in the analysis of English-type externally-headed constructions involving a relative pronoun, e.g. The horse which the man bought was a good horse, despite its typological rarity, and such accounts typically involve movement rules, both overt and covert, and phonologically null elements. The analysis of internally-headed relative clauses often involves the positing of an abstract structure including a null external head, with covert movement of the internal head to that position. The purpose of this paper is to show that the essential features of both types of relative clause can be captured in a syntactic theory that eschews movement rules and phonologically null elements, Role and Reference Grammar. It will be argued that a single set of linking principles can handle the syntax-to-semantics linking for both types. Keywords: externally-headed relative clauses; internally-headed relative clauses; Role and Reference Grammar; linking syntax and semantics
  • Van Es, M. W. J., & Schoffelen, J.-M. (2019). Stimulus-induced gamma power predicts the amplitude of the subsequent visual evoked response. NeuroImage, 186, 703-712. doi:10.1016/j.neuroimage.2018.11.029.

    Abstract

    The efficiency of neuronal information transfer in activated brain networks may affect behavioral performance. Gamma-band synchronization has been proposed to be a mechanism that facilitates neuronal processing of behaviorally relevant stimuli. In line with this, it has been shown that strong gamma-band activity in visual cortical areas leads to faster responses to a visual go cue. We investigated whether there are directly observable consequences of trial-by-trial fluctuations in non-invasively observed gamma-band activity on the neuronal response. Specifically, we hypothesized that the amplitude of the visual evoked response to a go cue can be predicted by gamma power in the visual system, in the window preceding the evoked response. Thirty-three human subjects (22 female) performed a visual speeded response task while their magnetoencephalogram (MEG) was recorded. The participants had to respond to a pattern reversal of a concentric moving grating. We estimated single trial stimulus-induced visual cortical gamma power, and correlated this with the estimated single trial amplitude of the most prominent event-related field (ERF) peak within the first 100 ms after the pattern reversal. In parieto-occipital cortical areas, the amplitude of the ERF correlated positively with gamma power, and correlated negatively with reaction times. No effects were observed for the alpha and beta frequency bands, despite clear stimulus onset induced modulation at those frequencies. These results support a mechanistic model, in which gamma-band synchronization enhances the neuronal gain to relevant visual input, thus leading to more efficient downstream processing and to faster responses.
  • Van Goch, M. M., Verhoeven, L., & McQueen, J. M. (2019). Success in learning similar-sounding words predicts vocabulary depth above and beyond vocabulary breadth. Journal of Child Language, 46(1), 184-197. doi:10.1017/S0305000918000338.

    Abstract

    In lexical development, the specificity of phonological representations is important. The ability to build phonologically specific lexical representations predicts the number of words a child knows (vocabulary breadth), but it is not clear if it also fosters how well words are known (vocabulary depth). Sixty-six children were studied in kindergarten (age 5;7) and first grade (age 6;8). The predictive value of the ability to learn phonologically similar new words, phoneme discrimination ability, and phonological awareness on vocabulary breadth and depth were assessed using hierarchical regression. Word learning explained unique variance in kindergarten and first-grade vocabulary depth, over the other phonological factors. It did not explain unique variance in vocabulary breadth. Furthermore, even after controlling for kindergarten vocabulary breadth, kindergarten word learning still explained unique variance in first-grade vocabulary depth. Skill in learning phonologically similar words appears to predict knowledge children have about what words mean.
  • Van Alphen, P. M., & Van Berkum, J. J. A. (2012). Semantic involvement of initial and final lexical embeddings during sense-making: The advantage of starting late. Frontiers in Psychology, 3, 190. doi:10.3389/fpsyg.2012.00190.

    Abstract

    During spoken language interpretation, listeners rapidly relate the meaning of each individual word to what has been said before. However, spoken words often contain spurious other words, like 'day' in 'daisy', or 'dean' in 'sardine'. Do listeners also relate the meaning of such unintended, spurious words to the prior context? We used ERPs to look for transient meaning-based N400 effects in sentences that were completely plausible at the level of words intended by the speaker, but contained an embedded word whose meaning clashed with the context. Although carrier words with an initial embedding ('day' in 'daisy') did not elicit an embedding-related N400 effect relative to matched control words without embedding, carrier words with a final embedding ('dean' in 'sardine') did elicit such an effect. Together with prior work from our lab and the results of a Shortlist B simulation, our findings suggest that listeners do semantically interpret embedded words, albeit not under all conditions. We explain the latter by assuming that the sense-making system adjusts its hypothesis for how to interpret the external input at every new syllable, in line with recent ideas of active sampling in perception.
  • Van Uytvanck, D., Stehouwer, H., & Lampen, L. (2012). Semantic metadata mapping in practice: The Virtual Language Observatory. In N. Calzolari (Ed.), Proceedings of LREC 2012: 8th International Conference on Language Resources and Evaluation (pp. 1029-1034). European Language Resources Association (ELRA).

    Abstract

    In this paper we present the Virtual Language Observatory (VLO), a metadata-based portal for language resources. It is completely based on the Component Metadata (CMDI) and ISOcat standards. This approach allows for the use of heterogeneous metadata schemas while maintaining the semantic compatibility. We describe the metadata harvesting process, based on OAI-PMH, and the conversion from several formats (OLAC, IMDI and the CLARIN LRT inventory) to their CMDI counterpart profiles. Then we focus on some post-processing steps to polish the harvested records. Next, the ingestion of the CMDI files into the VLO facet browser is described. We also include an overview of the changes since the first version of the VLO, based on user feedback from the CLARIN community. Finally there is an overview of additional ideas and improvements for future versions of the VLO.
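    The OAI-PMH harvesting step described above can be sketched in a few lines of Python. This is an illustrative sketch, not the VLO's actual harvester: the endpoint URL and function name are hypothetical, while the request parameters (verb, metadataPrefix, resumptionToken) come from the OAI-PMH protocol itself.

```python
from urllib.parse import urlencode


def build_oai_request(base_url, verb="ListRecords",
                      metadata_prefix="cmdi", resumption_token=None):
    """Build an OAI-PMH harvesting URL.

    OAI-PMH exposes metadata records via plain HTTP GET requests;
    large result sets are paged with a resumptionToken returned by
    the repository in each response.
    """
    if resumption_token is not None:
        # Per the OAI-PMH spec, resumptionToken must be the only
        # argument accompanying the verb on follow-up requests.
        params = {"verb": verb, "resumptionToken": resumption_token}
    else:
        params = {"verb": verb, "metadataPrefix": metadata_prefix}
    return f"{base_url}?{urlencode(params)}"


# Hypothetical endpoint, for illustration only:
url = build_oai_request("https://example.org/oai")
print(url)  # https://example.org/oai?verb=ListRecords&metadataPrefix=cmdi
```

    A real harvester would fetch each URL, parse the returned XML for records and the next resumptionToken, and loop until no token is returned.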
  • Van Ackeren, M. J., Casasanto, D., Bekkering, H., Hagoort, P., & Rueschemeyer, S.-A. (2012). Pragmatics in action: Indirect requests engage theory of mind areas and the cortical motor network. Journal of Cognitive Neuroscience, 24, 2237-2247. doi:10.1162/jocn_a_00274.

    Abstract

    Research from the past decade has shown that understanding the meaning of words and utterances (i.e., abstracted symbols) engages the same systems we use to perceive and interact with the physical world in a content-specific manner. For example, understanding the word “grasp” elicits activation in the cortical motor network, that is, part of the neural substrate involved in planning and executing a grasping action. In the embodied literature, cortical motor activation during language comprehension is thought to reflect motor simulation underlying conceptual knowledge [note that outside the embodied framework, other explanations for the link between action and language are offered, e.g., Mahon, B. Z., & Caramazza, A. A critical look at the embodied cognition hypothesis and a new proposal for grounding conceptual content. Journal of Physiology, 102, 59–70, 2008; Hagoort, P. On Broca, brain, and binding: A new framework. Trends in Cognitive Sciences, 9, 416–423, 2005]. Previous research has supported the view that the coupling between language and action is flexible, and reading an action-related word form is not sufficient for cortical motor activation [Van Dam, W. O., van Dijk, M., Bekkering, H., & Rueschemeyer, S.-A. Flexibility in embodied lexical–semantic representations. Human Brain Mapping, doi: 10.1002/hbm.21365, 2011]. The current study goes one step further by addressing the necessity of action-related word forms for motor activation during language comprehension. Subjects listened to indirect requests (IRs) for action during an fMRI session. IRs for action are speech acts in which access to an action concept is required, although it is not explicitly encoded in the language. For example, the utterance “It is hot here!” in a room with a window is likely to be interpreted as a request to open the window. However, the same utterance in a desert will be interpreted as a statement. The results indicate (1) that comprehension of IR sentences activates cortical motor areas reliably more than comprehension of sentences devoid of any implicit motor information. This is true despite the fact that IR sentences contain no lexical reference to action. (2) Comprehension of IR sentences also reliably activates substantial portions of the theory of mind network, known to be involved in making inferences about mental states of others. The implications of these findings for embodied theories of language are discussed.
  • Van de Ven, M., Ernestus, M., & Schreuder, R. (2012). Predicting acoustically reduced words in spontaneous speech: The role of semantic/syntactic and acoustic cues in context. Laboratory Phonology, 3, 455-481. doi:10.1515/lp-2012-0020.

    Abstract

    In spontaneous speech, words may be realised shorter than in formal speech (e.g., English yesterday may be pronounced like [jɛʃeɩ]). Previous research has shown that context is required to understand highly reduced pronunciation variants. We investigated the extent to which listeners can predict low predictability reduced words on the basis of the semantic/syntactic and acoustic cues in their context. In four experiments, participants were presented with either the preceding context or the preceding and following context of reduced words, and either heard these fragments of conversational speech, or read their orthographic transcriptions. Participants were asked to predict the missing reduced word on the basis of the context alone, choosing from four plausible options. Participants made use of acoustic cues in the context, although casual speech typically has a high speech rate, and acoustic cues are much more unclear than in careful speech. Moreover, they relied on semantic/syntactic cues. Whenever there was a conflict between acoustic and semantic/syntactic contextual cues, measured as the word's probability given the surrounding words, listeners relied more heavily on acoustic cues. Further, context appeared generally insufficient to predict the reduced words, underpinning the significance of the acoustic characteristics of the reduced words themselves.
  • Van Rhijn, J. R. (2019). The role of FoxP2 in striatal circuitry. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Van Berkum, J. J. A. (2012). Zonder gevoel geen taal. Neerlandistiek.nl. Wetenschappelijk tijdschrift voor de Nederlandse taal- en letterkunde, 12(01).

    Abstract

    Illustrated republication of the inaugural lecture delivered on 30 September 2011 upon accepting the chair "Discourse, cognitie en communicatie" at Utrecht University. Unlike the original lecture text, this republication also contains various illustrations and links. In addition, two follow-up articles by colleagues respond to it (see http://www.neerlandistiek.nl/12.01a/ and http://www.neerlandistiek.nl/12.01b/).
  • Van Herpt, C., Van der Meulen, M., & Redl, T. (2019). Voorbeeldzinnen kunnen het goede voorbeeld geven. Levende Talen Magazine, 106(4), 18-21.
  • Van Geert, E., Ding, R., & Wagemans, J. (2024). A cross-cultural comparison of aesthetic preferences for neatly organized compositions: Native Chinese- versus Native Dutch-speaking samples. Empirical Studies of the Arts. Advance online publication. doi:10.1177/02762374241245917.

    Abstract

    Do aesthetic preferences for images of neatly organized compositions (e.g., images collected on blogs like Things Organized Neatly©) generalize across cultures? In an earlier study, focusing on stimulus and personal properties related to order and complexity, Western participants indicated their preference for one of two simultaneously presented images (100 pairs). In the current study, we compared the data of the native Dutch-speaking participants from this earlier sample (N = 356) to newly collected data from a native Chinese-speaking sample (N = 220). Overall, aesthetic preferences were quite similar across cultures. When relating preferences for each sample to ratings of order, complexity, soothingness, and fascination collected from a Western, mainly Dutch-speaking sample, the results hint at a cross-culturally consistent preference for images that Western participants rate as more ordered, but a cross-culturally diverse relation between preferences and complexity.
  • Van der Werff, J., Ravignani, A., & Jadoul, Y. (2024). thebeat: A Python package for working with rhythms and other temporal sequences. Behavior Research Methods, 56, 3725-3736. doi:10.3758/s13428-023-02334-8.

    Abstract

    thebeat is a Python package for working with temporal sequences and rhythms in the behavioral and cognitive sciences, as well as in bioacoustics. It provides functionality for creating experimental stimuli, and for visualizing and analyzing temporal data. Sequences, sounds, and experimental trials can be generated using single lines of code. thebeat contains functions for calculating common rhythmic measures, such as interval ratios, and for producing plots, such as circular histograms. thebeat saves researchers time when creating experiments, and provides the first steps in collecting widely accepted methods for use in timing research. thebeat is an open-source, on-going, and collaborative project, and can be extended for use in specialized subfields. thebeat integrates easily with the existing Python ecosystem, allowing one to combine our tested code with custom-made scripts. The package was specifically designed to be useful for both skilled and novice programmers. thebeat provides a foundation for working with temporal sequences onto which additional functionality can be built. This combination of specificity and plasticity should facilitate research in multiple research contexts and fields of study.
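    The interval ratios mentioned above can be illustrated in plain Python. This is a minimal sketch of one common definition of the measure, r_k = IOI_k / (IOI_k + IOI_{k+1}) for adjacent inter-onset intervals; it is not thebeat's own API, and the function name is hypothetical.

```python
def interval_ratios(iois):
    """Compute interval ratios r_k = IOI_k / (IOI_k + IOI_{k+1}).

    A ratio of 0.5 means two equal adjacent intervals (isochrony);
    values above or below 0.5 indicate long-short or short-long
    patterns, respectively.
    """
    if len(iois) < 2:
        raise ValueError("need at least two inter-onset intervals")
    return [a / (a + b) for a, b in zip(iois, iois[1:])]


# An isochronous sequence (equal 500-ms intervals) gives ratios of 0.5:
print(interval_ratios([500, 500, 500]))  # [0.5, 0.5]
```

    In thebeat itself, such measures are computed from sequence objects created in one or two lines of code, as the abstract describes.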
  • van der Burght, C. L., & Meyer, A. S. (2024). Interindividual variation in weighting prosodic and semantic cues during sentence comprehension – a partial replication of Van der Burght et al. (2021). In Y. Chen, A. Chen, & A. Arvaniti (Eds.), Proceedings of Speech Prosody 2024 (pp. 792-796). doi:10.21437/SpeechProsody.2024-160.

    Abstract

    Contrastive pitch accents can mark sentence elements occupying parallel roles. In “Mary kissed John, not Peter”, a pitch accent on Mary or John cues the implied syntactic role of Peter. Van der Burght, Friederici, Goucha, and Hartwigsen (2021) showed that listeners can build expectations concerning syntactic and semantic properties of upcoming words, derived from pitch accent information they heard previously. To further explore these expectations, we attempted a partial replication of the original German study in Dutch. In the experimental sentences “Yesterday, the police officer arrested the thief, not the inspector/murderer”, a pitch accent on subject or object cued the subject/object role of the ellipsis clause. Contrasting elements were additionally cued by the thematic role typicality of the nouns. Participants listened to sentences in which the ellipsis clause was omitted and selected the most plausible sentence-final noun (presented visually) via button press. Replicating the original study results, listeners based their sentence-final preference on the pitch accent information available in the sentence. However, as in the original study, individual differences between listeners were found, with some following prosodic information and others relying on a structural bias. The results complement the literature on ellipsis resolution and on interindividual variability in cue weighting.
  • Varma, S., Takashima, A., Fu, L., & Kessels, R. P. C. (2019). Mindwandering propensity modulates episodic memory consolidation. Aging Clinical and Experimental Research, 31(11), 1601-1607. doi:10.1007/s40520-019-01251-1.

    Abstract

    Research into strategies that can combat episodic memory decline in healthy older adults has gained widespread attention over the years. Evidence suggests that a short period of rest immediately after learning can enhance memory consolidation, as compared to engaging in cognitive tasks. However, a recent study in younger adults has shown that post-encoding engagement in a working memory task leads to the same degree of memory consolidation as from post-encoding rest. Here, we tested whether this finding can be extended to older adults. Using a delayed recognition test, we compared the memory consolidation of word–picture pairs learned prior to 9 min of rest or a 2-Back working memory task, and examined its relationship with executive functioning and mindwandering propensity. Our results show that (1) similar to younger adults, memory for the word–picture associations did not differ when encoding was followed by post-encoding rest or 2-Back task and (2) older adults with higher mindwandering propensity retained more word–picture associations encoded prior to rest relative to those encoded prior to the 2-Back task, whereas participants with lower mindwandering propensity had better memory performance for the pairs encoded prior to the 2-Back task. Overall, our results indicate that the degree of episodic memory consolidation during both active and passive post-encoding periods depends on individual mindwandering tendency.

    Additional information

    Supplementary material
  • Verdonschot, R. G., Tokimoto, S., & Miyaoka, Y. (2019). The fundamental phonological unit of Japanese word production: An EEG study using the picture-word interference paradigm. Journal of Neurolinguistics, 51, 184-193. doi:10.1016/j.jneuroling.2019.02.004.

    Abstract

    It has been shown that in Germanic languages (e.g. English, Dutch) phonemes are the primary (or proximate) planning units during the early stages of phonological encoding. Contrastingly, in Chinese and Japanese the phoneme does not seem to play an important role but rather the syllable (Chinese) and mora (Japanese) are essential. However, despite the lack of behavioral evidence, neurocorrelational studies in Chinese suggested that electrophysiological brain responses (i.e. preceding overt responses) may indicate some significance for the phoneme. We investigated this matter in Japanese and our data shows that unlike in Chinese (for which the literature shows mixed effects), in Japanese both the behavioral and neurocorrelational data indicate an important role only for the mora (and not the phoneme) during the early stages of phonological encoding.
  • Verdonschot, R. G., Middelburg, R., Lensink, S. E., & Schiller, N. O. (2012). Morphological priming survives a language switch. Cognition, 124(3), 343-349. doi:10.1016/j.cognition.2012.05.019.

    Abstract

    In a long-lag morphological priming experiment, Dutch (L1)-English (L2) bilinguals were asked to name pictures and read aloud words. A design using non-switch blocks, consisting solely of Dutch stimuli, and switch-blocks, consisting of Dutch primes and targets with intervening English trials, was administered. Target picture naming was facilitated by morphologically related primes in both non-switch and switch blocks with equal magnitude. These results contrast some assumptions of sustained reactive inhibition models. However, models that do not assume bilinguals having to reactively suppress all activation of the non-target language can account for these data.
  • Verdonschot, R. G., Van der Wal, J., Lewis, A. G., Knudsen, B., Von Grebmer zu Wolfsthurn, S., Schiller, N. O., & Hagoort, P. (2024). Information structure in Makhuwa: Electrophysiological evidence for a universal processing account. Proceedings of the National Academy of Sciences of the United States of America, 121(30): e2315438121. doi:10.1073/pnas.2315438121.

    Abstract

    There is evidence from both behavior and brain activity that the way information is structured, through the use of focus, can up-regulate processing of focused constituents, likely to give prominence to the relevant aspects of the input. This is hypothesized to be universal, regardless of the different ways in which languages encode focus. In order to test this universalist hypothesis, we need to go beyond the more familiar linguistic strategies for marking focus, such as by means of intonation or specific syntactic structures (e.g., it-clefts). Therefore, in this study, we examine Makhuwa-Enahara, a Bantu language spoken in northern Mozambique, which uniquely marks focus through verbal conjugation. The participants were presented with sentences that consisted of either a semantically anomalous constituent or a semantically nonanomalous constituent. Moreover, focus on this particular constituent could be either present or absent. We observed a consistent pattern: Focused information generated a more negative N400 response than the same information in nonfocus position. This demonstrates that regardless of how focus is marked, its consequence seems to result in an upregulation of processing of information that is in focus.

    Additional information

    supplementary materials
  • Verga, L., & Kotz, S. A. (2019). Putting language back into ecological communication contexts. Language, Cognition and Neuroscience, 34(4), 536-544. doi:10.1080/23273798.2018.1506886.

    Abstract

    Language is a multi-faceted form of communication. Only recently, though, has language research moved on from simple stimuli and protocols toward a more ecologically valid approach, namely “shifting” from words and simple sentences to stories with varying degrees of contextual complexity. While much needed, the use of ecologically valid stimuli such as stories should also be explored in interactive rather than individualistic experimental settings, leading the way to an interactive neuroscience of language. Indeed, mounting evidence suggests that cognitive processes and their underlying neural activity significantly differ between social and individual experiences. We aim at reviewing evidence, which indicates that the characteristics of linguistic and extra-linguistic contexts may significantly influence communication–including spoken language comprehension. In doing so, we provide evidence on the use of new paradigms and methodological advancements that may enable the study of complex language features in a truly interactive, ecological way.
  • Verga, L., & Kotz, S. A. (2019). Spatial attention underpins social word learning in the right fronto-parietal network. NeuroImage, 195, 165-173. doi:10.1016/j.neuroimage.2019.03.071.

    Abstract

    In a multi- and inter-cultural world, we daily encounter new words. Adult learners often rely on a situational context to learn and understand a new word's meaning. Here, we explored whether interactive learning facilitates word learning by directing the learner's attention to a correct new word referent when a situational context is non-informative. We predicted larger involvement of inferior parietal, frontal, and visual cortices involved in visuo-spatial attention during interactive learning. We scanned participants while they played a visual word learning game with and without a social partner. As hypothesized, interactive learning enhanced activity in the right Supramarginal Gyrus when the situational context provided little information. Activity in the right Inferior Frontal Gyrus during interactive learning correlated with post-scanning behavioral test scores, while these scores correlated with activity in the Fusiform Gyrus in the non-interactive group. These results indicate that attention is involved in interactive learning when the situational context is minimal and suggest that individual learning processes may be largely different from interactive ones. As such, they challenge the ecological validity of what we know about individual learning and advocate the exploration of interactive learning in naturalistic settings.
  • Verhagen, J. (2005). The role of the nonmodal auxiliary 'hebben' in Dutch as a second language. Zeitschrift für Literaturwissenschaft und Linguistik, 140, 109-127.

    Abstract

    The acquisition of non-modal auxiliaries has been assumed to constitute an important step in the acquisition of finiteness in Germanic languages (cf. Jordens/Dimroth 2005, Jordens 2004, Becker 2005). This paper focuses on the role of the auxiliary hebben ('to have') in the acquisition of Dutch as a second language. More specifically, it investigates whether learners' production of hebben is related to their acquisition of two phenomena commonly associated with finiteness, i.e., topicalization and negation. Data are presented from 16 Turkish and 36 Moroccan learners of Dutch who participated in an experiment involving production and imitation tasks. The production data suggest that learners use topicalization and post-verbal negation only after they have learned to produce the auxiliary hebben. The results from the imitation task indicate that learners are more sensitive to topicalization and post-verbal negation in sentences with hebben than in sentences with lexical verbs. Interestingly, this holds also for learners that did not show productive command of hebben in the production tasks. Thus, in general, the results of the experiment provide support for the idea that non-modal auxiliaries are crucial in the acquisition of (certain properties of) finiteness.
  • Verhagen, J. (2005). The role of the nonmodal auxiliary 'hebben' in Dutch as a second language. Toegepaste Taalwetenschap in Artikelen, 73, 41-52.
  • Verhoef, E., Demontis, D., Burgess, S., Shapland, C. Y., Dale, P. S., Okbay, A., Neale, B. M., Faraone, S. V., iPSYCH-Broad-PGC ADHD Consortium, Stergiakouli, E., Davey Smith, G., Fisher, S. E., Borglum, A., & St Pourcain, B. (2019). Disentangling polygenic associations between Attention-Deficit/Hyperactivity Disorder, educational attainment, literacy and language. Translational Psychiatry, 9: 35. doi:10.1038/s41398-018-0324-2.

    Abstract

    Interpreting polygenic overlap between ADHD and both literacy-related and language-related impairments is challenging as genetic associations might be influenced by indirectly shared genetic factors. Here, we investigate genetic overlap between polygenic ADHD risk and multiple literacy-related and/or language-related abilities (LRAs), as assessed in UK children (N ≤ 5919), accounting for genetically predictable educational attainment (EA). Genome-wide summary statistics on clinical ADHD and years of schooling were obtained from large consortia (N ≤ 326,041). Our findings show that ADHD-polygenic scores (ADHD-PGS) were inversely associated with LRAs in ALSPAC, most consistently with reading-related abilities, and explained ≤1.6% phenotypic variation. These polygenic links were then dissected into both ADHD effects shared with and independent of EA, using multivariable regressions (MVR). Conditional on EA, polygenic ADHD risk remained associated with multiple reading and/or spelling abilities, phonemic awareness and verbal intelligence, but not listening comprehension and non-word repetition. Using conservative ADHD-instruments (P-threshold < 5 × 10−8), this corresponded, for example, to a 0.35 SD decrease in pooled reading performance per log-odds in ADHD-liability (P = 9.2 × 10−5). Using subthreshold ADHD-instruments (P-threshold < 0.0015), these effects became smaller, with a 0.03 SD decrease per log-odds in ADHD risk (P = 1.4 × 10−6), although the predictive accuracy increased. However, polygenic ADHD-effects shared with EA were of equal strength and at least equal magnitude compared to those independent of EA, for all LRAs studied, and detectable using subthreshold instruments. Thus, ADHD-related polygenic links with LRAs are to a large extent due to shared genetic effects with EA, although there is evidence for an ADHD-specific association profile, independent of EA, that primarily involves literacy-related impairments.

    Additional information

    41398_2018_324_MOESM1_ESM.docx
