Publications

  • Silva, S., Inácio, F., Rocha e Sousa, D., Gaspar, N., Folia, V., & Petersson, K. M. (2023). Formal language hierarchy reflects different levels of cognitive complexity. Journal of Experimental Psychology: Learning, Memory, and Cognition, 49(4), 642-660. doi:10.1037/xlm0001182.

    Abstract

    Formal language hierarchy describes levels of increasing syntactic complexity (adjacent dependencies, nonadjacent nested, nonadjacent crossed), whose transcription into a hierarchy of cognitive complexity remains under debate. The cognitive foundations of formal language hierarchy have been contradicted by two types of evidence: First, adjacent dependencies are not easier to learn than nonadjacent ones; second, crossed nonadjacent dependencies may be easier than nested. However, studies providing these findings may have engaged confounds: Repetition monitoring strategies may have accounted for participants’ high performance in nonadjacent dependencies, and linguistic experience may have accounted for the advantage of crossed dependencies. We conducted two artificial grammar learning experiments where we addressed these confounds by manipulating reliance on repetition monitoring and by testing participants inexperienced with crossed dependencies. Results showed relevant differences in learning adjacent versus nonadjacent dependencies and advantages of nested over crossed, suggesting that formal language hierarchy may indeed translate into a hierarchy of cognitive complexity.
  • Silverstein, P., Bergmann, C., & Syed, M. (Eds.). (2024). Open science and metascience in developmental psychology [Special Issue]. Infant and Child Development, 33(1).
  • Silverstein, P., Bergmann, C., & Syed, M. (2024). Open science and metascience in developmental psychology: Introduction to the special issue. Infant and Child Development, 33(1): e2495. doi:10.1002/icd.2495.
  • Skirgård, H., Haynie, H. J., Blasi, D. E., Hammarström, H., Collins, J., Latarche, J. J., Lesage, J., Weber, T., Witzlack-Makarevich, A., Passmore, S., Chira, A., Maurits, L., Dinnage, R., Dunn, M., Reesink, G., Singer, R., Bowern, C., Epps, P. L., Hill, J., Vesakoski, O., Robbeets, M., Abbas, N. K., Auer, D., Bakker, N. A., Barbos, G., Borges, R. D., Danielsen, S., Dorenbusch, L., Dorn, E., Elliott, J., Falcone, G., Fischer, J., Ghanggo Ate, Y., Gibson, H., Göbel, H.-P., Goodall, J. A., Gruner, V., Harvey, A., Hayes, R., Heer, L., Herrera Miranda, R. E., Hübler, N., Huntington-Rainey, B. H., Ivani, J. K., Johns, M., Just, E., Kashima, E., Kipf, C., Klingenberg, J. V., König, N., Koti, A., Kowalik, R. G. A., Krasnoukhova, O., Lindvall, N. L. M., Lorenzen, M., Lutzenberger, H., Martins, T. R., Mata German, C., Van der Meer, S., Montoya Samamé, J., Müller, M., Muradoglu, S., Neely, K., Nickel, J., Norvik, M., Oluoch, C. A., Peacock, J., Pearey, I. O., Peck, N., Petit, S., Pieper, S., Poblete, M., Prestipino, D., Raabe, L., Raja, A., Reimringer, J., Rey, S. C., Rizaew, J., Ruppert, E., Salmon, K. K., Sammet, J., Schembri, R., Schlabbach, L., Schmidt, F. W., Skilton, A., Smith, W. D., De Sousa, H., Sverredal, K., Valle, D., Vera, J., Voß, J., Witte, T., Wu, H., Yam, S., Ye, J., Yong, M., Yuditha, T., Zariquiey, R., Forkel, R., Evans, N., Levinson, S. C., Haspelmath, M., Greenhill, S. J., Atkinson, Q., & Gray, R. D. (2023). Grambank reveals the importance of genealogical constraints on linguistic diversity and highlights the impact of language loss. Science Advances, 9(16): eadg6175. doi:10.1126/sciadv.adg6175.

    Abstract

    While global patterns of human genetic diversity are increasingly well characterized, the diversity of human languages remains less systematically described. Here, we outline the Grambank database. With over 400,000 data points and 2400 languages, Grambank is the largest comparative grammatical database available. The comprehensiveness of Grambank allows us to quantify the relative effects of genealogical inheritance and geographic proximity on the structural diversity of the world’s languages, evaluate constraints on linguistic diversity, and identify the world’s most unusual languages. An analysis of the consequences of language loss reveals that the reduction in diversity will be strikingly uneven across the major linguistic regions of the world. Without sustained efforts to document and revitalize endangered languages, our linguistic window into human history, cognition, and culture will be seriously fragmented.
  • Slaats, S., Weissbart, H., Schoffelen, J.-M., Meyer, A. S., & Martin, A. E. (2023). Delta-band neural responses to individual words are modulated by sentence processing. The Journal of Neuroscience, 43(26), 4867-4883. doi:10.1523/JNEUROSCI.0964-22.2023.

    Abstract

    To understand language, we need to recognize words and combine them into phrases and sentences. During this process, responses to the words themselves are changed. In a step towards understanding how the brain builds sentence structure, the present study concerns the neural readout of this adaptation. We ask whether low-frequency neural readouts associated with words change as a function of being in a sentence. To this end, we analyzed an MEG dataset by Schoffelen et al. (2019) of 102 human participants (51 women) listening to sentences and word lists, the latter lacking any syntactic structure and combinatorial meaning. Using temporal response functions and a cumulative model-fitting approach, we disentangled delta- and theta-band responses to lexical information (word frequency), from responses to sensory- and distributional variables. The results suggest that delta-band responses to words are affected by sentence context in time and space, over and above entropy and surprisal. In both conditions, the word frequency response spanned left temporal and posterior frontal areas; however, the response appeared later in word lists than in sentences. In addition, sentence context determined whether inferior frontal areas were responsive to lexical information. In the theta band, the amplitude was larger in the word list condition around 100 milliseconds in right frontal areas. We conclude that low-frequency responses to words are changed by sentential context. The results of this study speak to how the neural representation of words is affected by structural context, and as such provide insight into how the brain instantiates compositionality in language.
  • Slim, M. S., & Hartsuiker, R. J. (2023). Moving visual world experiments online? A web-based replication of Dijkgraaf, Hartsuiker, and Duyck (2017) using PCIbex and WebGazer.js. Behavior Research Methods, 55, 3786-3804. doi:10.3758/s13428-022-01989-z.

    Abstract

    The visual world paradigm is one of the most influential paradigms to study real-time language processing. The present study tested whether visual world studies can be moved online, using PCIbex software (Zehr & Schwarz, 2018) and the WebGazer.js algorithm (Papoutsaki et al., 2016) to collect eye-movement data. Experiment 1 was a fixation task in which the participants looked at a fixation cross in multiple positions on the computer screen. Experiment 2 was a web-based replication of a visual world experiment by Dijkgraaf et al. (2017). Firstly, both experiments revealed that the spatial accuracy of the data allowed us to distinguish looks across the four quadrants of the computer screen. This suggests that the spatial resolution of WebGazer.js is fine-grained enough for most visual world experiments (which typically involve a two-by-two quadrant-based set-up of the visual display). Secondly, both experiments revealed a delay of roughly 300 ms in the time course of the eye movements, possibly caused by the internal processing speed of the browser or WebGazer.js. This delay can be problematic in studying questions that require a fine-grained temporal resolution and requires further investigation.
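    The quadrant-based analysis mentioned above can be illustrated with a minimal sketch (hypothetical code, not the authors' implementation): each WebGazer.js gaze estimate, given in screen pixels, is binned into one of the four quadrants of a two-by-two visual world display.

    ```python
    def gaze_quadrant(x, y, screen_w, screen_h):
        """Bin a gaze estimate (in pixels, origin at top-left) into one of
        four screen quadrants, as in a two-by-two visual world display."""
        col = "right" if x >= screen_w / 2 else "left"
        row = "bottom" if y >= screen_h / 2 else "top"
        return f"{row}-{col}"

    # e.g., on a 1920x1080 screen, a gaze estimate near the upper-left corner:
    # gaze_quadrant(100, 100, 1920, 1080) -> "top-left"
    ```

    Because the paper reports a roughly 300 ms lag in the gaze signal, any such quadrant coding would in practice be paired with a corresponding shift of the analysis time window.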
  • Slim, M. S., Lauwers, P., & Hartsuiker, R. J. (2023). How abstract are logical representations? The role of verb semantics in representing quantifier scope. Glossa Psycholinguistics, 2(1): 9. doi:10.5070/G6011175.

    Abstract

    Language comprehension involves the derivation of the meaning of sentences by combining the meanings of their parts. In some cases, this can lead to ambiguity. A sentence like Every hiker climbed a hill allows two logical representations: One that specifies that every hiker climbed a different hill and one that specifies that every hiker climbed the same hill. The interpretations of such sentences can be primed: Exposure to a particular reading increases the likelihood that the same reading will be assigned to a subsequent similar sentence. Feiman and Snedeker (2016) observed that such priming is not modulated by overlap of the verb between prime and target. This indicates that mental logical representations specify the compositional structure of the sentence meaning without conceptual meaning content. We conducted a close replication of Feiman and Snedeker’s experiment in Dutch and found no verb-independent priming. Moreover, a comparison with a previous, within-verb priming experiment showed an interaction, suggesting stronger verb-specific than abstract priming. A power analysis revealed that both Feiman and Snedeker’s experiment and our Experiment 1 were underpowered. Therefore, we replicated our Experiment 1, using the sample size guidelines provided by our power analysis. This experiment again showed that priming was stronger if a prime-target pair contained the same verb. Together, our experiments show that logical representation priming is enhanced if the prime and target sentence contain the same verb. This suggests that logical representations specify compositional structure and meaning features in an integrated manner.
  • Slobin, D. I., & Bowerman, M. (2007). Interfaces between linguistic typology and child language research. Linguistic Typology, 11(1), 213-226. doi:10.1515/LINGTY.2007.015.
  • Slonimska, A. (2024). The role of iconicity and simultaneity in efficient communication in the visual modality: Evidence from LIS (Italian Sign Language) [Dissertation Abstract]. Sign Language & Linguistics, 27(1), 116-124. doi:10.1075/sll.00084.slo.
  • Smits, R. (2000). Temporal distribution of information for human consonant recognition in VCV utterances. Journal of Phonetics, 28, 111-135. doi:10.1006/jpho.2000.0107.

    Abstract

    The temporal distribution of perceptually relevant information for consonant recognition in British English VCVs is investigated. The information distribution in the vicinity of consonantal closure and release was measured by presenting initial and final portions, respectively, of naturally produced VCV utterances to listeners for categorization. A multidimensional scaling analysis of the results provided highly interpretable, four-dimensional geometrical representations of the confusion patterns in the categorization data. In addition, transmitted information as a function of truncation point was calculated for the features manner, place, and voicing. The effects of speaker, vowel context, stress, and distinctive feature on the resulting information distributions were tested statistically. It was found that, although all factors are significant, the location and spread of the distributions depend principally on the distinctive feature, i.e., the temporal distribution of perceptually relevant information is very different for the features manner, place, and voicing.
  • Snijders, T. M., Kooijman, V., Cutler, A., & Hagoort, P. (2007). Neurophysiological evidence of delayed segmentation in a foreign language. Brain Research, 1178, 106-113. doi:10.1016/j.brainres.2007.07.080.

    Abstract

    Previous studies have shown that segmentation skills are language-specific, making it difficult to segment continuous speech in an unfamiliar language into its component words. Here we present the first study capturing the delay in segmentation and recognition in the foreign listener using ERPs. We compared the ability of Dutch adults and of English adults without knowledge of Dutch (‘foreign listeners’) to segment familiarized words from continuous Dutch speech. We used the known effect of repetition on the event-related potential (ERP) as an index of recognition of words in continuous speech. Our results show that word repetitions in isolation are recognized with equivalent facility by native and foreign listeners, but word repetitions in continuous speech are not. First, words familiarized in isolation are recognized faster by native than by foreign listeners when they are repeated in continuous speech. Second, when words that have previously been heard only in a continuous-speech context re-occur in continuous speech, the repetition is detected by native listeners, but is not detected by foreign listeners. A preceding speech context facilitates word recognition for native listeners, but delays or even inhibits word recognition for foreign listeners. We propose that the apparent difference in segmentation rate between native and foreign listeners is grounded in the difference in language-specific skills available to the listeners.
  • Snijders Blok, L., Verseput, J., Rots, D., Venselaar, H., Innes, A. M., Stumpel, C., Õunap, K., Reinson, K., Seaby, E. G., McKee, S., Burton, B., Kim, K., Van Hagen, J. M., Waisfisz, Q., Joset, P., Steindl, K., Rauch, A., Li, D., Zackai, E. H., Sheppard, S. E., Keena, B., Hakonarson, H., Roos, A., Kohlschmidt, N., Cereda, A., Iascone, M., Rebessi, E., Kernohan, K. D., Campeau, P. M., Millan, F., Taylor, J. A., Lochmüller, H., Higgs, M. R., Goula, A., Bernhard, B., Velasco, D. J., Schmanski, A. A., Stark, Z., Gallacher, L., Pais, L., Marcogliese, P. C., Yamamoto, S., Raun, N., Jakub, T. E., Kramer, J. M., Den Hoed, J., Fisher, S. E., Brunner, H. G., & Kleefstra, T. (2023). A clustering of heterozygous missense variants in the crucial chromatin modifier WDR5 defines a new neurodevelopmental disorder. Human Genetics and Genomics Advances, 4(1): 100157. doi:10.1016/j.xhgg.2022.100157.

    Abstract

    WDR5 is a broadly studied, highly conserved key protein involved in a wide array of biological functions. Among these functions, WDR5 is a part of several protein complexes that affect gene regulation via post-translational modification of histones. We collected data from 11 unrelated individuals with six different rare de novo germline missense variants in WDR5; one identical variant was found in five individuals, and another variant in two individuals. All individuals had neurodevelopmental disorders including speech/language delays (N=11), intellectual disability (N=9), epilepsy (N=7) and autism spectrum disorder (N=4). Additional phenotypic features included abnormal growth parameters (N=7), heart anomalies (N=2) and hearing loss (N=2). Three-dimensional protein structures indicate that all the residues affected by these variants are located at the surface of one side of the WDR5 protein. It is predicted that five out of the six amino acid substitutions disrupt interactions of WDR5 with RbBP5 and/or KMT2A/C, as part of the COMPASS (complex proteins associated with Set1) family complexes. Our experimental approaches in Drosophila melanogaster and human cell lines show normal protein expression, localization and protein-protein interactions for all tested variants. These results, together with the clustering of variants in a specific region of WDR5 and the absence of truncating variants so far, suggest that dominant-negative or gain-of-function mechanisms might be at play. All in all, we define a neurodevelopmental disorder associated with missense variants in WDR5 and a broad range of features. This finding highlights the important role of genes encoding COMPASS family proteins in neurodevelopmental disorders.
  • Snowdon, C. T., & Cronin, K. A. (2007). Cooperative breeders do cooperate. Behavioural Processes, 76, 138-141. doi:10.1016/j.beproc.2007.01.016.

    Abstract

    Bergmüller et al. (2007) make an important contribution to studies of cooperative breeding and provide a theoretical basis for linking the evolution of cooperative breeding with cooperative behavior. We have long been involved in empirical research on the only family of nonhuman primates to exhibit cooperative breeding, the Callitrichidae, which includes marmosets and tamarins, with studies in both field and captive contexts. In this paper we expand on three themes from Bergmüller et al. (2007) with empirical data. First we provide data in support of the importance of helpers and the specific benefits that helpers can gain in terms of fitness. Second, we suggest that mechanisms of rewarding helpers are more common and more effective in maintaining cooperative breeding than punishments. Third, we present a summary of our own research on cooperative behavior in cotton-top tamarins (Saguinus oedipus) where we find greater success in cooperative problem solving than has been reported for non-cooperatively breeding species.
  • Söderström, P., & Cutler, A. (2023). Early neuro-electric indication of lexical match in English spoken-word recognition. PLOS ONE, 18(5): e0285286. doi:10.1371/journal.pone.0285286.

    Abstract

    We investigated early electrophysiological responses to spoken English words embedded in neutral sentence frames, using a lexical decision paradigm. As words unfold in time, similar-sounding lexical items compete for recognition within 200 milliseconds after word onset. A small number of studies have previously investigated event-related potentials in this time window in English and French, with results differing in direction of effects as well as component scalp distribution. Investigations of spoken-word recognition in Swedish have reported an early left-frontally distributed event-related potential that increases in amplitude as a function of the probability of a successful lexical match as the word unfolds. Results from the present study indicate that the same process may occur in English: we propose that increased certainty of a ‘word’ response in a lexical decision task is reflected in the amplitude of an early left-anterior brain potential beginning around 150 milliseconds after word onset. This in turn is proposed to be connected to the probabilistically driven activation of possible upcoming word forms.

    Additional information

    The datasets are available here
  • Soheili-Nezhad, S., Sprooten, E., Tendolkar, I., & Medici, M. (2023). Exploring the genetic link between thyroid dysfunction and common psychiatric disorders: A specific hormonal or a general autoimmune comorbidity. Thyroid, 33(2), 159-168. doi:10.1089/thy.2022.0304.

    Abstract

    Background: The hypothalamus-pituitary-thyroid axis coordinates brain development and postdevelopmental function. Thyroid hormone (TH) variations, even within the normal range, have been associated with the risk of developing common psychiatric disorders, although the underlying mechanisms remain poorly understood.

    Methods: To get new insight into the potentially shared mechanisms underlying thyroid dysfunction and psychiatric disorders, we performed a comprehensive analysis of multiple phenotypic and genotypic databases. We investigated the relationship of thyroid disorders with depression, bipolar disorder (BIP), and anxiety disorders (ANXs) in 497,726 subjects from U.K. Biobank. We subsequently investigated genetic correlations between thyroid disorders, thyrotropin (TSH), and free thyroxine (fT4) levels, with the genome-wide factors that predispose to psychiatric disorders. Finally, the observed global genetic correlations were furthermore pinpointed to specific local genomic regions.

    Results: Hypothyroidism was positively associated with an increased risk of major depressive disorder (MDD; OR = 1.31, p = 5.29 × 10−89), BIP (OR = 1.55, p = 0.0038), and ANX (OR = 1.16, p = 6.22 × 10−8). Hyperthyroidism was associated with MDD (OR = 1.11, p = 0.0034) and ANX (OR = 1.34, p = 5.99 × 10−6). Genetically, strong coheritability was observed between thyroid disease and both major depressive (rg = 0.17, p = 2.7 × 10−4) and ANXs (rg = 0.17, p = 6.7 × 10−6). This genetic correlation was particularly strong at the major histocompatibility complex locus on chromosome 6 (p < 10−5), but further analysis showed that other parts of the genome also contributed to this global effect. Importantly, neither TSH nor fT4 levels were genetically correlated with mood disorders.

    Conclusions: Our findings highlight an underlying association between autoimmune hypothyroidism and mood disorders, which is not mediated through THs and in which autoimmunity plays a prominent role. While these findings could shed new light on the potential ineffectiveness of treating (minor) variations in thyroid function in psychiatric disorders, further research is needed to identify the exact underlying molecular mechanisms.

    Additional information

    supplementary table S1
  • Soheili-Nezhad, S., Ibáñez-Solé, O., Izeta, A., Hoeijmakers, J. H. J., & Stoeger, T. (2024). Time is ticking faster for long genes in aging. Trends in Genetics, 40(4), 299-312. doi:10.1016/j.tig.2024.01.009.

    Abstract

    Recent studies of aging organisms have identified a systematic phenomenon, characterized by a negative correlation between gene length and their expression in various cell types, species, and diseases. We term this phenomenon gene-length-dependent transcription decline (GLTD) and suggest that it may represent a bottleneck in the transcription machinery and thereby significantly contribute to aging as an etiological factor. We review potential links between GLTD and key aging processes such as DNA damage and explore their potential in identifying disease modification targets. Notably, in Alzheimer’s disease, GLTD spotlights extremely long synaptic genes at chromosomal fragile sites (CFSs) and their vulnerability to postmitotic DNA damage. We suggest that GLTD is an integral element of biological aging.
  • Sollis, E., Den Hoed, J., Quevedo, M., Estruch, S. B., Vino, A., Dekkers, D. H. W., Demmers, J. A. A., Poot, R., Derizioti, P., & Fisher, S. E. (2023). Characterization of the TBR1 interactome: Variants associated with neurodevelopmental disorders disrupt novel protein interactions. Human Molecular Genetics, 32(9): ddac311, pp. 1497-1510. doi:10.1093/hmg/ddac311.

    Abstract

    TBR1 is a neuron-specific transcription factor involved in brain development and implicated in a neurodevelopmental disorder (NDD) combining features of autism spectrum disorder (ASD), intellectual disability (ID) and speech delay. TBR1 has been previously shown to interact with a small number of transcription factors and co-factors also involved in NDDs (including CASK, FOXP1/2/4 and BCL11A), suggesting that the wider TBR1 interactome may have a significant bearing on normal and abnormal brain development. Here we have identified approximately 250 putative TBR1-interaction partners by affinity purification coupled to mass spectrometry. As well as known TBR1-interactors such as CASK, the identified partners include transcription factors and chromatin modifiers, along with ASD- and ID-related proteins. Five interaction candidates were independently validated using bioluminescence resonance energy transfer assays. We went on to test the interaction of these candidates with TBR1 protein variants implicated in cases of NDD. The assays uncovered disturbed interactions for NDD-associated variants and identified two distinct protein-binding domains of TBR1 that have essential roles in protein–protein interaction.
  • Spiteri, E., Konopka, G., Coppola, G., Bomar, J., Oldham, M., Ou, J., Vernes, S. C., Fisher, S. E., Ren, B., & Geschwind, D. (2007). Identification of the transcriptional targets of FOXP2, a gene linked to speech and language, in developing human brain. American Journal of Human Genetics, 81(6), 1144-1157. doi:10.1086/522237.

    Abstract

    Mutations in FOXP2, a member of the forkhead family of transcription factor genes, are the only known cause of developmental speech and language disorders in humans. To date, there are no known targets of human FOXP2 in the nervous system. The identification of FOXP2 targets in the developing human brain, therefore, provides a unique tool with which to explore the development of human language and speech. Here, we define FOXP2 targets in human basal ganglia (BG) and inferior frontal cortex (IFC) by use of chromatin immunoprecipitation followed by microarray analysis (ChIP-chip) and validate the functional regulation of targets in vitro. ChIP-chip identified 285 FOXP2 targets in fetal human brain; statistically significant overlap of targets in BG and IFC indicates a core set of 34 transcriptional targets of FOXP2. We identified targets specific to IFC or BG that were not observed in lung, suggesting important regional and tissue differences in FOXP2 activity. Many target genes are known to play critical roles in specific aspects of central nervous system patterning or development, such as neurite outgrowth, as well as plasticity. Subsets of the FOXP2 transcriptional targets are either under positive selection in humans or differentially expressed between human and chimpanzee brain. This is the first ChIP-chip study to use human brain tissue, making the FOXP2-target genes identified in these studies important to understanding the pathways regulating speech and language in the developing human brain. These data provide the first insight into the functional network of genes directly regulated by FOXP2 in human brain and by evolutionary comparisons, highlighting genes likely to be involved in the development of human higher-order cognitive processes.
  • Stärk, K., Kidd, E., & Frost, R. L. A. (2023). Close encounters of the word kind: Attested distributional information boosts statistical learning. Language Learning, 73(2), 341-373. doi:10.1111/lang.12523.

    Abstract

    Statistical learning, the ability to extract regularities from input (e.g., in language), is likely supported by learners’ prior expectations about how component units co-occur. In this study, we investigated how adults’ prior experience with sublexical regularities in their native language influences performance on an empirical language learning task. Forty German-speaking adults completed a speech repetition task in which they repeated eight-syllable sequences from two experimental languages: one containing disyllabic words comprised of frequently occurring German syllable transitions (naturalistic words) and the other containing words made from unattested syllable transitions (non-naturalistic words). The participants demonstrated learning from both naturalistic and non-naturalistic stimuli. However, learning was superior for the naturalistic sequences, indicating that the participants had used their existing distributional knowledge of German to extract the naturalistic words faster and more accurately than the non-naturalistic words. This finding supports theories of statistical learning as a form of chunking, whereby frequently co-occurring units become entrenched in long-term memory.

    Additional information

    accessible summary appendix S1
  • Stewart, A., Holler, J., & Kidd, E. (2007). Shallow processing of ambiguous pronouns: Evidence for delay. Quarterly Journal of Experimental Psychology, 60, 1680-1696. doi:10.1080/17470210601160807.
  • Stivers, T., & Majid, A. (2007). Questioning children: Interactional evidence of implicit bias in medical interviews. Social Psychology Quarterly, 70(4), 424-441.

    Abstract

    Social psychologists have shown experimentally that implicit race bias can influence an individual's behavior. Implicit bias has been suggested to be more subtle and less subject to cognitive control than more explicit forms of racial prejudice. Little is known about how implicit bias is manifest in naturally occurring social interaction. This study examines the factors associated with physicians selecting children rather than parents to answer questions in pediatric interviews about routine childhood illnesses. Analysis of the data using a Generalized Linear Latent and Mixed Model demonstrates a significant effect of parent race and education on whether physicians select children to answer questions. Black children and Latino children of low-education parents are less likely to be selected to answer questions than their same-aged white peers irrespective of education. One way that implicit bias manifests itself in naturally occurring interaction may be through the process of speaker selection during questioning.
  • Stivers, T., Rossi, G., & Chalfoun, A. (2023). Ambiguities in action ascription. Social Forces, 101(3), 1552-1579. doi:10.1093/sf/soac021.

    Abstract

    In everyday interactions with one another, speakers not only say things but also do things like offer, complain, reject, and compliment. Through observation, it is possible to see that much of the time people unproblematically understand what others are doing. Research on conversation has further documented how speakers’ word choice, prosody, grammar, and gesture all help others to recognize what actions they are performing. In this study, we rely on spontaneous naturally occurring conversational data where people have trouble making their actions understood to examine what leads to ambiguous actions, bringing together prior research and identifying recurrent types of ambiguity that hinge on different dimensions of social action. We then discuss the range of costs and benefits for social actors when actions are clear versus ambiguous. Finally, we offer a conceptual model of how, at a microlevel, action ascription is done. Actions in interaction are building blocks for social relations; at each turn, an action can strengthen or strain the bond between two individuals. Thus, a unified theory of action ascription at a microlevel is an essential component for broader theories of social action and of how social actions produce, maintain, and revise the social world.
  • Stivers, T., Chalfoun, A., & Rossi, G. (2024). To err is human but to persist is diabolical: Toward a theory of interactional policing. Frontiers in Sociology: Sociological Theory, 9: 1369776. doi:10.3389/fsoc.2024.1369776.

    Abstract

    Social interaction is organized around norms and preferences that guide our construction of actions and our interpretation of those of others, creating a reflexive moral order. Sociological theory suggests two possibilities for the type of moral order that underlies the policing of interactional norm and preference violations: a morality that focuses on the nature of violations themselves and a morality that focuses on the positioning of actors as they maintain their conduct as comprehensible, even when they depart from norms and preferences. We find that actors are more likely to reproach interactional violations for which an account is not provided by the transgressor, and that actors weakly reproach or let pass first offenses while more strongly policing violators who persist in bad behavior. Based on these findings, we outline a theory of interactional policing that rests not on the nature of the violation but rather on actors' moral positioning.
  • Swingley, D., & Aslin, R. N. (2007). Lexical competition in young children's word learning. Cognitive Psychology, 54(2), 99-132.

    Abstract

    In two experiments, 1.5-year-olds were taught novel words whose sound patterns were phonologically similar to familiar words (novel neighbors) or were not (novel nonneighbors). Learning was tested using a picture-fixation task. In both experiments, children learned the novel nonneighbors but not the novel neighbors. In addition, exposure to the novel neighbors impaired recognition performance on familiar neighbors. Finally, children did not spontaneously use phonological differences to infer that a novel word referred to a novel object. Thus, lexical competition—inhibitory interaction among words in speech comprehension—can prevent children from using their full phonological sensitivity in judging words as novel. These results suggest that word learning in young children, as in adults, relies not only on the discrimination and identification of phonetic categories, but also on evaluating the likelihood that an utterance conveys a new word.
  • Swingley, D. (2007). Lexical exposure and word-form encoding in 1.5-year-olds. Developmental Psychology, 43(2), 454-464. doi:10.1037/0012-1649.43.2.454.

    Abstract

    In this study, 1.5-year-olds were taught a novel word. Some children were familiarized with the word's phonological form before learning the word's meaning. Fidelity of phonological encoding was tested in a picture-fixation task using correctly pronounced and mispronounced stimuli. Only children with additional exposure in familiarization showed reduced recognition performance given slight mispronunciations relative to correct pronunciations; children with fewer exposures did not. Mathematical modeling of vocabulary exposure indicated that children may hear thousands of words frequently enough for accurate encoding. The results provide evidence compatible with partial failure of phonological encoding at 19 months of age, demonstrate that this limitation in learning does not always hinder word recognition, and show the value of infants' word-form encoding in early lexical development.
  • Swingley, D., & Aslin, R. N. (2000). Spoken word recognition and lexical representation in very young children. Cognition, 76, 147-166. doi:10.1016/S0010-0277(00)00081-0.

    Abstract

    Although children's knowledge of the sound patterns of words has been a focus of debate for many years, little is known about the lexical representations very young children use in word recognition. In particular, researchers have questioned the degree of specificity encoded in early lexical representations. The current study addressed this issue by presenting 18–23-month-olds with object labels that were either correctly pronounced, or mispronounced. Mispronunciations involved replacement of one segment with a similar segment, as in ‘baby–vaby’. Children heard sentences containing these words while viewing two pictures, one of which was the referent of the sentence. Analyses of children's eye movements showed that children recognized the spoken words in both conditions, but that recognition was significantly poorer when words were mispronounced. The effects of mispronunciation on recognition were unrelated to age or to spoken vocabulary size. The results suggest that children's representations of familiar words are phonetically well-specified, and that this specification may not be a consequence of the need to differentiate similar words in production.
  • Swinney, D. A., & Cutler, A. (1979). The access and processing of idiomatic expressions. Journal of Verbal Learning and Verbal Behavior, 18, 523-534. doi:10.1016/S0022-5371(79)90284-6.

    Abstract

    Two experiments examined the nature of access, storage, and comprehension of idiomatic phrases. In both studies a Phrase Classification Task was utilized. In this, reaction times to determine whether or not word strings constituted acceptable English phrases were measured. Classification times were significantly faster to idiom than to matched control phrases. This effect held under conditions involving different categories of idioms, different transitional probabilities among words in the phrases, and different levels of awareness of the presence of idioms in the materials. The data support a Lexical Representation Hypothesis for the processing of idioms.
  • Takashima, A., Nieuwenhuis, I. L. C., Rijpkema, M., Petersson, K. M., Jensen, O., & Fernández, G. (2007). Memory trace stabilization leads to large-scale changes in the retrieval network: A functional MRI study on associative memory. Learning & Memory, 14, 472-479. doi:10.1101/lm.605607.

    Abstract

    Spaced learning with time to consolidate leads to more stable memory traces. However, little is known about the neural correlates of trace stabilization, especially in humans. The present fMRI study contrasted retrieval activity of two well-learned sets of face-location associations, one learned in a massed style and tested on the day of learning (i.e., labile condition) and another learned in a spaced scheme over the course of one week (i.e., stabilized condition). Both sets of associations were retrieved equally well, but the retrieval of stabilized associations was faster and accompanied by large-scale changes in the network supporting retrieval. Cued recall of stabilized as compared with labile associations was accompanied by increased activity in the precuneus, the ventromedial prefrontal cortex, the bilateral temporal pole, and left temporo–parietal junction. Conversely, memory representational areas such as the fusiform gyrus for faces and the posterior parietal cortex for locations did not change their activity with stabilization. The changes in activation in the precuneus, which also showed increased connectivity with the fusiform area, are likely to be related to the spatial nature of our task. The activation increase in the ventromedial prefrontal cortex, on the other hand, might reflect a general function in stabilized memory retrieval. This area might succeed the hippocampus in linking distributed neocortical representations.
  • Takashima, A., Carota, F., Schoots, V., Redmann, A., Jehee, J., & Indefrey, P. (2024). Tomatoes are red: The perception of achromatic objects elicits retrieval of associated color knowledge. Journal of Cognitive Neuroscience, 36(1), 24-45. doi:10.1162/jocn_a_02068.

    Abstract

    When preparing to name an object, semantic knowledge about the object and its attributes is activated, including perceptual properties. It is unclear, however, whether semantic attribute activation contributes to lexical access or is a consequence of activating a concept irrespective of whether that concept is to be named or not. In this study, we measured neural responses using fMRI while participants named objects that are typically green or red, presented in black line drawings. Furthermore, participants underwent two other tasks with the same objects, color naming and semantic judgment, to see if the activation pattern we observe during picture naming is (a) similar to that of a task that requires accessing the color attribute and (b) distinct from that of a task that requires accessing the concept but not its name or color. We used representational similarity analysis to detect brain areas that show similar patterns within the same color category, but show different patterns across the two color categories. In all three tasks, activation in the bilateral fusiform gyri (“Human V4”) correlated with a representational model encoding the red–green distinction weighted by the importance of the color feature for the different objects. This result suggests that when seeing objects whose color attribute is highly diagnostic, color knowledge about the objects is retrieved irrespective of whether the color or the object itself has to be named.
  • Tamaoka, K., Sakai, H., Miyaoka, Y., Ono, H., Fukuda, M., Wu, Y., & Verdonschot, R. G. (2023). Sentential inference bridging between lexical/grammatical knowledge and text comprehension among native Chinese speakers learning Japanese. PLoS One, 18(4): e0284331. doi:10.1371/journal.pone.0284331.

    Abstract

    The current study explored the role of sentential inference in connecting lexical/grammatical knowledge and overall text comprehension in foreign language learning. Using structural equation modeling (SEM), causal relationships were examined between four latent variables: lexical knowledge, grammatical knowledge, sentential inference, and text comprehension. The study analyzed 281 Chinese university students learning Japanese as a second language and compared two causal models: (1) the partially-mediated model, which suggests that lexical knowledge, grammatical knowledge, and sentential inference concurrently influence text comprehension, and (2) the wholly-mediated model, which posits that both lexical and grammatical knowledge impact sentential inference, which then further affects text comprehension. The SEM comparison analysis supported the wholly-mediated model, showing sequential causal relationships from lexical knowledge to sentential inference and then to text comprehension, without significant contribution from grammatical knowledge. The results indicate that sentential inference serves as a crucial bridge between lexical knowledge and text comprehension.
  • Tamaoka, K., Zhang, J., Koizumi, M., & Verdonschot, R. G. (2023). Phonological encoding in Tongan: An experimental investigation. Quarterly Journal of Experimental Psychology, 76(10), 2197-2430. doi:10.1177/17470218221138770.

    Abstract

    This study is the first to report chronometric evidence on Tongan language production. It has been speculated that the mora plays an important role during Tongan phonological encoding. A mora follows the (C)V form, so /a/ and /ka/ (but not /k/) denote a mora in Tongan. Using a picture-word naming paradigm, Tongan native speakers named pictures containing superimposed non-word distractors. This task has been used before in Japanese, Korean, and Vietnamese to investigate the initially selected unit during phonological encoding (IPU). Compared to control distractors, both onset and mora overlapping distractors resulted in faster naming latencies. Several alternative explanations for the pattern of results (proficiency in English, knowledge of Latin script, and downstream effects) are discussed. However, we conclude that Tongan phonological encoding likely uses the phoneme, and not the mora, as the IPU.

    Additional information

    supplemental material
  • Tamaoka, K., Yu, S., Zhang, J., Otsuka, Y., Lim, H., Koizumi, M., & Verdonschot, R. G. (2024). Syntactic structures in motion: Investigating word order variations in verb-final (Korean) and verb-initial (Tongan) languages. Frontiers in Psychology, 15: 1360191. doi:10.3389/fpsyg.2024.1360191.

    Abstract

    This study explored sentence processing in two typologically distinct languages: Korean, a verb-final language, and Tongan, a verb-initial language. The first experiment revealed that in Korean, sentences arranged in the scrambled OSV (Object, Subject, Verb) order were processed more slowly than those in the canonical SOV order, highlighting a scrambling effect. It also found that sentences with subject topicalization in the SOV order were processed as swiftly as those in the canonical form, whereas sentences with object topicalization in the OSV order were processed with speeds and accuracy comparable to scrambled sentences. However, since topicalization and scrambling in Korean use the same OSV order, independently distinguishing the effects of topicalization is challenging. In contrast, Tongan allows for a clear separation of word orders for topicalization and scrambling, facilitating an independent evaluation of topicalization effects. The second experiment, employing a maze task, confirmed that Tongan’s canonical VSO order was processed more efficiently than the VOS scrambled order, thereby verifying a scrambling effect. The third experiment investigated the effects of both scrambling and topicalization in Tongan, finding that the canonical VSO order was processed most efficiently in terms of speed and accuracy, unlike the VOS scrambled and SVO topicalized orders. Notably, the OVS object-topicalized order was processed as efficiently as the VSO canonical order, while the SVO subject-topicalized order was slower than VSO but faster than VOS. By independently assessing the effects of topicalization apart from scrambling, this study demonstrates that both subject and object topicalization in Tongan facilitate sentence processing, contradicting the predictions based on movement-based anticipation.

    Additional information

    appendix 1-3
  • Tanenhaus, M. K., Magnuson, J. S., Dahan, D., & Chambers, C. G. (2000). Eye movements and lexical access in spoken-language comprehension: Evaluating a linking hypothesis between fixations and linguistic processing. Journal of Psycholinguistic Research, 29, 557-580. doi:10.1023/A:1026464108329.

    Abstract

    A growing number of researchers in the sentence processing community are using eye movements to address issues in spoken language comprehension. Experiments using this paradigm have shown that visually presented referential information, including properties of referents relevant to specific actions, influences even the earliest moments of syntactic processing. Methodological concerns about task-specific strategies and the linking hypothesis between eye movements and linguistic processing are identified and discussed. These concerns are addressed in a review of recent studies of spoken word recognition which introduce and evaluate a detailed linking hypothesis between eye movements and lexical access. The results provide evidence about the time course of lexical activation that resolves some important theoretical issues in spoken-word recognition. They also demonstrate that fixations are sensitive to properties of the normal language-processing system that cannot be attributed to task-specific strategies.
  • Tatsumi, T., & Sala, G. (2023). Learning conversational dependency: Children’s response using un in Japanese. Journal of Child Language, 50(5), 1226-1244. doi:10.1017/S0305000922000344.

    Abstract

    This study investigates how Japanese-speaking children learn interactional dependencies in conversations that determine the use of un, a token typically used as a positive response for yes-no questions, backchannel, and acknowledgement. We hypothesise that children learn to produce un appropriately by recognising different types of cues occurring in the immediately preceding turns. We built a set of generalised linear models on the longitudinal conversation data from seven children aged 1 to 5 years and their caregivers. Our models revealed that children not only increased their un production, but also learned to attend to relevant cues in the preceding turns to understand when to respond by producing un. Children increasingly produced un when their interlocutors asked a yes-no question or signalled the continuation of their own speech. These results illustrate how children learn the probabilistic dependency between adjacent turns, and become able to participate in conversational interactions.
  • Ten Oever, S., Titone, L., te Rietmolen, N., & Martin, A. E. (2024). Phase-dependent word perception emerges from region-specific sensitivity to the statistics of language. PNAS, 121(3): e2320489121. doi:10.1073/pnas.2320489121.

    Abstract

    Neural oscillations reflect fluctuations in excitability, which biases the percept of ambiguous sensory input. Why this bias occurs is still not fully understood. We hypothesized that neural populations representing likely events are more sensitive, and thereby become active on earlier oscillatory phases, when the ensemble itself is less excitable. Perception of ambiguous input presented during less-excitable phases should therefore be biased toward frequent or predictable stimuli that have lower activation thresholds. Here, we show such a frequency bias in spoken word recognition using psychophysics, magnetoencephalography (MEG), and computational modelling. With MEG, we found a double dissociation, where the phase of oscillations in the superior temporal gyrus and medial temporal gyrus biased word-identification behavior based on phoneme and lexical frequencies, respectively. This finding was reproduced in a computational model. These results demonstrate that oscillations provide a temporal ordering of neural activity based on the sensitivity of separable neural populations.
  • Ten Oever, S., & Martin, A. E. (2024). Interdependence of “what” and “when” in the brain. Journal of Cognitive Neuroscience, 36(1), 167-186. doi:10.1162/jocn_a_02067.

    Abstract

    From a brain's-eye-view, when a stimulus occurs and what it is are interrelated aspects of interpreting the perceptual world. Yet in practice, the putative perceptual inferences about sensory content and timing are often dichotomized and not investigated as an integrated process. We here argue that neural temporal dynamics can influence what is perceived, and in turn, stimulus content can influence the time at which perception is achieved. This computational principle results from the highly interdependent relationship of what and when in the environment. Both brain processes and perceptual events display strong temporal variability that is not always modeled; we argue that understanding—and, minimally, modeling—this temporal variability is key for theories of how the brain generates unified and consistent neural representations and that we ignore temporal variability in our analysis practice at the peril of both data interpretation and theory-building. Here, we review what and when interactions in the brain, demonstrate via simulations how temporal variability can result in misguided interpretations and conclusions, and outline how to integrate and synthesize what and when in theories and models of brain computation.
  • Tendolkar, I., Arnold, J., Petersson, K. M., Weis, S., Brockhaus-Dumke, A., Van Eijndhoven, P., Buitelaar, J., & Fernández, G. (2007). Probing the neural correlates of associative memory formation: A parametrically analyzed event-related functional MRI study. Brain Research, 1142, 159-168. doi:10.1016/j.brainres.2007.01.040.

    Abstract

    The medial temporal lobe (MTL) is crucial for declarative memory formation, but the function of its subcomponents in associative memory formation remains controversial. Most functional imaging studies on this topic are based on a stepwise approach comparing a condition with and one without associative encoding. Extending this approach we applied additionally a parametric analysis by varying the amount of associative memory formation. We found a hippocampal subsequent memory effect of almost similar magnitude regardless of the amount of associations formed. By contrast, subsequent memory effects in rhinal and parahippocampal cortices were parametrically and positively modulated by the amount of associations formed. Our results indicate that the parahippocampal region supports associative memory formation as tested here and the hippocampus adds a general mnemonic operation. This pattern of results might suggest a new interpretation. Instead of having either a fixed division of labor between the hippocampus (associative memory formation) and the rhinal cortex (non-associative memory formation) or a functionally unitary MTL system, in which all substructures are contributing to memory formation in a similar way, we propose that the location where associations are formed within the MTL depends on the kind of associations bound: If visual single-dimension associations, as used here, can already be integrated within the parahippocampal region, the hippocampus might add a general purpose mnemonic operation only. In contrast, if associations have to be formed across widely distributed neocortical representations, the hippocampus may provide a binding operation in order to establish a coherent memory.
  • Ter Bekke, M., Drijvers, L., & Holler, J. (2024). Hand gestures have predictive potential during conversation: An investigation of the timing of gestures in relation to speech. Cognitive Science, 48(1): e13407. doi:10.1111/cogs.13407.

    Abstract

    During face-to-face conversation, transitions between speaker turns are incredibly fast. These fast turn exchanges seem to involve next speakers predicting upcoming semantic information, such that next turn planning can begin before a current turn is complete. Given that face-to-face conversation also involves the use of communicative bodily signals, an important question is how bodily signals such as co-speech hand gestures play into these processes of prediction and fast responding. In this corpus study, we found that hand gestures that depict or refer to semantic information started before the corresponding information in speech, which held both for the onset of the gesture as a whole, as well as the onset of the stroke (the most meaningful part of the gesture). This early timing potentially allows listeners to use the gestural information to predict the corresponding semantic information to be conveyed in speech. Moreover, we provided further evidence that questions with gestures got faster responses than questions without gestures. However, we found no evidence for the idea that how much a gesture precedes its lexical affiliate (i.e., its predictive potential) relates to how fast responses were given. The findings presented here highlight the importance of the temporal relation between speech and gesture and help to illuminate the potential mechanisms underpinning multimodal language processing during face-to-face conversation.
  • Ter Bekke, M., Drijvers, L., & Holler, J. (2024). Gestures speed up responses to questions. Language, Cognition and Neuroscience, 39(4), 423-430. doi:10.1080/23273798.2024.2314021.

    Abstract

    Most language use occurs in face-to-face conversation, which involves rapid turn-taking. Seeing communicative bodily signals in addition to hearing speech may facilitate such fast responding. We tested whether this holds for co-speech hand gestures by investigating whether these gestures speed up button press responses to questions. Sixty native speakers of Dutch viewed videos in which an actress asked yes/no-questions, either with or without a corresponding iconic hand gesture. Participants answered the questions as quickly and accurately as possible via button press. Gestures did not impact response accuracy, but crucially, gestures sped up responses, suggesting that response planning may be finished earlier when gestures are seen. How much gestures sped up responses was not related to their timing in the question or their timing with respect to the corresponding information in speech. Overall, these results are in line with the idea that multimodality may facilitate fast responding during face-to-face conversation.
  • Ter Bekke, M., Levinson, S. C., Van Otterdijk, L., Kühn, M., & Holler, J. (2024). Visual bodily signals and conversational context benefit the anticipation of turn ends. Cognition, 248: 105806. doi:10.1016/j.cognition.2024.105806.

    Abstract

    The typical pattern of alternating turns in conversation seems trivial at first sight. But a closer look quickly reveals the cognitive challenges involved, with much of it resulting from the fast-paced nature of conversation. One core ingredient to turn coordination is the anticipation of upcoming turn ends so as to be able to ready oneself for providing the next contribution. Across two experiments, we investigated two variables inherent to face-to-face conversation, the presence of visual bodily signals and preceding discourse context, in terms of their contribution to turn end anticipation. In a reaction time paradigm, participants anticipated conversational turn ends better when seeing the speaker and their visual bodily signals than when they did not, especially so for longer turns. Likewise, participants were better able to anticipate turn ends when they had access to the preceding discourse context than when they did not, and especially so for longer turns. Critically, the two variables did not interact, showing that visual bodily signals retain their influence even in the context of preceding discourse. In a pre-registered follow-up experiment, we manipulated the visibility of the speaker's head, eyes and upper body (i.e. torso + arms). Participants were better able to anticipate turn ends when the speaker's upper body was visible, suggesting a role for manual gestures in turn end anticipation. Together, these findings show that seeing the speaker during conversation may critically facilitate turn coordination in interaction.
  • Terporten, R., Huizeling, E., Heidlmayr, K., Hagoort, P., & Kösem, A. (2024). The interaction of context constraints and predictive validity during sentence reading. Journal of Cognitive Neuroscience, 36(2), 225-238. doi:10.1162/jocn_a_02082.

    Abstract

    Words are not processed in isolation; instead, they are commonly embedded in phrases and sentences. The sentential context influences the perception and processing of a word. However, how this is achieved by brain processes and whether predictive mechanisms underlie this process remain a debated topic. Here, we employed an experimental paradigm in which we orthogonalized sentence context constraints and predictive validity, which was defined as the ratio of congruent to incongruent sentence endings within the experiment. While recording electroencephalography, participants read sentences with three levels of sentential context constraints (high, medium, and low). Participants were also separated into two groups that differed in their ratio of valid congruent to incongruent target words that could be predicted from the sentential context. For both groups, we investigated modulations of alpha power before, and N400 amplitude modulations after target word onset. The results reveal that the N400 amplitude gradually decreased with higher context constraints and cloze probability. In contrast, alpha power was not significantly affected by context constraint. Neither the N400 nor alpha power were significantly affected by changes in predictive validity.
  • Terrill, A. (2007). [Review of ‘Andrew Pawley, Robert Attenborough, Jack Golson, and Robin Hide, eds. 2005. Papuan pasts: Cultural, linguistic and biological histories of Papuan-speaking peoples’]. Oceanic Linguistics, 46(1), 313-321. doi:10.1353/ol.2007.0025.
  • Tezcan, F., Weissbart, H., & Martin, A. E. (2023). A tradeoff between acoustic and linguistic feature encoding in spoken language comprehension. eLife, 12: e82386. doi:10.7554/eLife.82386.

    Abstract

    When we comprehend language from speech, the phase of the neural response aligns with particular features of the speech input, resulting in a phenomenon referred to as neural tracking. In recent years, a large body of work has demonstrated the tracking of the acoustic envelope and abstract linguistic units at the phoneme and word levels, and beyond. However, the degree to which speech tracking is driven by acoustic edges of the signal, or by internally-generated linguistic units, or by the interplay of both, remains contentious. In this study, we used naturalistic story-listening to investigate (1) whether phoneme-level features are tracked over and above acoustic edges, (2) whether word entropy, which can reflect sentence- and discourse-level constraints, impacted the encoding of acoustic and phoneme-level features, and (3) whether the tracking of acoustic edges was enhanced or suppressed during comprehension of a first language (Dutch) compared to a statistically familiar but uncomprehended language (French). We first show that encoding models with phoneme-level linguistic features, in addition to acoustic features, uncovered an increased neural tracking response; this signal was further amplified in a comprehended language, putatively reflecting the transformation of acoustic features into internally generated phoneme-level representations. Phonemes were tracked more strongly in a comprehended language, suggesting that language comprehension functions as a neural filter over acoustic edges of the speech signal as it transforms sensory signals into abstract linguistic units. We then show that word entropy enhances neural tracking of both acoustic and phonemic features when sentence- and discourse-context are less constraining. When language was not comprehended, acoustic features, but not phonemic ones, were more strongly modulated; in contrast, when a native language was comprehended, phoneme features were more strongly modulated. Taken together, our findings highlight the flexible modulation of acoustic and phonemic features by sentence- and discourse-level constraint in language comprehension, and document the neural transformation from speech perception to language comprehension, consistent with an account of language processing as a neural filter from sensory to abstract representations.
  • Thothathiri, M., Basnakova, J., Lewis, A. G., & Briand, J. M. (2024). Fractionating difficulty during sentence comprehension using functional neuroimaging. Cerebral Cortex, 34(2): bhae032. doi:10.1093/cercor/bhae032.

    Abstract

    Sentence comprehension is highly practiced and largely automatic, but this belies the complexity of the underlying processes. We used functional neuroimaging to investigate garden-path sentences that cause difficulty during comprehension, in order to unpack the different processes used to support sentence interpretation. By investigating garden-path and other types of sentences within the same individuals, we functionally profiled different regions within the temporal and frontal cortices in the left hemisphere. The results revealed that different aspects of comprehension difficulty are handled by left posterior temporal, left anterior temporal, ventral left frontal, and dorsal left frontal cortices. The functional profiles of these regions likely lie along a spectrum of specificity to generality, including language-specific processing of linguistic representations, more general conflict resolution processes operating over linguistic representations, and processes for handling difficulty in general. These findings suggest that difficulty is not unitary and that there is a role for a variety of linguistic and non-linguistic processes in supporting comprehension.

    Additional information

    supplementary information
  • Titus, A., Dijkstra, T., Willems, R. M., & Peeters, D. (2024). Beyond the tried and true: How virtual reality, dialog setups, and a focus on multimodality can take bilingual language production research forward. Neuropsychologia, 193: 108764. doi:10.1016/j.neuropsychologia.2023.108764.

    Abstract

    Bilinguals possess the ability of expressing themselves in more than one language, and typically do so in contextually rich and dynamic settings. Theories and models have indeed long considered context factors to affect bilingual language production in many ways. However, most experimental studies in this domain have failed to fully incorporate linguistic, social, or physical context aspects, let alone combine them in the same study. Indeed, most experimental psycholinguistic research has taken place in isolated and constrained lab settings with carefully selected words or sentences, rather than under rich and naturalistic conditions. We argue that the most influential experimental paradigms in the psycholinguistic study of bilingual language production fall short of capturing the effects of context on language processing and control presupposed by prominent models. This paper therefore aims to enrich the methodological basis for investigating context aspects in current experimental paradigms and thereby move the field of bilingual language production research forward theoretically. After considering extensions of existing paradigms proposed to address context effects, we present three far-ranging innovative proposals, focusing on virtual reality, dialog situations, and multimodality in the context of bilingual language production.
  • Tkalcec, A., Bierlein, M., Seeger‐Schneider, G., Walitza, S., Jenny, B., Menks, W. M., Fehlbaum, L. V., Borbas, R., Cole, D. M., Raschle, N., Herbrecht, E., Stadler, C., & Cubillo, A. (2023). Empathy deficits, callous‐unemotional traits and structural underpinnings in autism spectrum disorder and conduct disorder youth. Autism Research, 16(10), 1946-1962. doi:10.1002/aur.2993.

    Abstract

    Distinct empathy deficits are often described in patients with conduct disorder (CD) and autism spectrum disorder (ASD), yet their neural underpinnings and the influence of comorbid Callous-Unemotional (CU) traits are unclear. This study compares the cognitive (CE) and affective empathy (AE) abilities of youth with CD and ASD, their potential neuroanatomical correlates, and the influence of CU traits on empathy. Adolescents and parents/caregivers completed empathy questionnaires (N = 148 adolescents, mean age = 15.16 years) and T1-weighted images were obtained from a subsample (N = 130). Group differences in empathy and the influence of CU traits were investigated using Bayesian analyses and Voxel-Based Morphometry with Threshold-Free Cluster Enhancement, focusing on regions involved in AE (insula, amygdala, inferior frontal gyrus, and cingulate cortex) and CE processes (ventromedial prefrontal cortex, temporoparietal junction, superior temporal gyrus, and precuneus). The ASD group showed lower parent-reported AE and CE scores and lower self-reported CE scores, while the CD group showed lower parent-reported CE scores than controls. When accounting for the influence of CU traits, no AE deficits in ASD and no CE deficits in CD were found, but CE deficits in ASD remained. Across all participants, CU traits were negatively associated with gray matter volumes in the anterior cingulate (extending into the mid cingulate), ventromedial prefrontal cortex, and precuneus. Thus, although co-occurring CU traits have been linked to global empathy deficits in reports and underlying brain structures, their influence on empathy aspects might be disorder-specific. Investigating the subdimensions of empathy may therefore help to identify disorder-specific empathy deficits.
  • Tomasek, M., Ravignani, A., Boucherie, P. H., Van Meyel, S., & Dufour, V. (2023). Spontaneous vocal coordination of vocalizations to water noise in rooks (Corvus frugilegus): An exploratory study. Ecology and Evolution, 13(2): e9791. doi:10.1002/ece3.9791.

    Abstract

    The ability to control one's vocal production is a major advantage in acoustic communication. Yet, not all species have the same level of control over their vocal output. Several bird species can interrupt their song upon hearing an external stimulus, but there is no evidence of how flexible this behavior is. Most research on corvids focuses on their cognitive abilities, but few studies explore their vocal aptitudes. Recent research shows that crows can be experimentally trained to vocalize in response to a brief visual stimulus. Our study investigated vocal control abilities with a more ecologically embedded approach in rooks. We show that two rooks could spontaneously coordinate their vocalizations to a long-lasting stimulus (the sound of their small bathing pool being filled with a water hose), one of them roughly adjusting its vocalizations (in the second range) as the stimulus began and stopped. This exploratory study adds to the literature showing that corvids, a group of species capable of cognitive prowess, are indeed able to display good vocal control abilities.
  • Tomasello, M., Carpenter, M., & Liszkowski, U. (2007). A new look at infant pointing. Child Development, 78, 705-722. doi:10.1111/j.1467-8624.2007.01025.x.

    Abstract

    The current article proposes a new theory of infant pointing involving multiple layers of intentionality and shared intentionality. In the context of this theory, evidence is presented for a rich interpretation of prelinguistic communication, that is, one that posits that when 12-month-old infants point for an adult they are in some sense trying to influence her mental states. Moreover, evidence is also presented for a deeply social view in which infant pointing is best understood—on many levels and in many ways—as depending on uniquely human skills and motivations for cooperation and shared intentionality (e.g., joint intentions and attention with others). Children's early linguistic skills are built on this already existing platform of prelinguistic communication.
  • Trujillo, J. P., & Holler, J. (2023). Interactionally embedded gestalt principles of multimodal human communication. Perspectives on Psychological Science, 18(5), 1136-1159. doi:10.1177/17456916221141422.

    Abstract

    Natural human interaction requires us to produce and process many different signals, including speech, hand and head gestures, and facial expressions. These communicative signals, which occur in a variety of temporal relations with each other (e.g., parallel or temporally misaligned), must be rapidly processed as a coherent message by the receiver. In this contribution, we introduce the notion of interactionally embedded, affordance-driven gestalt perception as a framework that can explain how this rapid processing of multimodal signals is achieved as efficiently as it is. We discuss empirical evidence showing how basic principles of gestalt perception can explain some aspects of unimodal phenomena such as verbal language processing and visual scene perception but require additional features to explain multimodal human communication. We propose a framework in which high-level gestalt predictions are continuously updated by incoming sensory input, such as unfolding speech and visual signals. We outline the constituent processes that shape high-level gestalt perception and their role in perceiving relevance and prägnanz. Finally, we provide testable predictions that arise from this multimodal interactionally embedded gestalt-perception framework. This review and framework therefore provide a theoretically motivated account of how we may understand the highly complex, multimodal behaviors inherent in natural social interaction.
  • Trujillo, J. P., Dideriksen, C., Tylén, K., Christiansen, M. H., & Fusaroli, R. (2023). The dynamic interplay of kinetic and linguistic coordination in Danish and Norwegian conversation. Cognitive Science, 47(6): e13298. doi:10.1111/cogs.13298.

    Abstract

    In conversation, individuals work together to achieve communicative goals, complementing and aligning language and body with each other. An important emerging question is whether interlocutors entrain with one another equally across linguistic levels (e.g., lexical, syntactic, and semantic) and modalities (i.e., speech and gesture), or whether there are complementary patterns of behaviors, with some levels or modalities diverging and others converging in coordinated fashions. This study assesses how kinematic and linguistic entrainment interact with one another across levels of measurement, and according to communicative context. We analyzed data from two matched corpora of dyadic interaction between—respectively—Danish and Norwegian native speakers engaged in affiliative conversations and task-oriented conversations. We assessed linguistic entrainment at the lexical, syntactic, and semantic level, and kinetic alignment of the head and hands using video-based motion tracking and dynamic time warping. We tested whether—across the two languages—linguistic alignment correlates with kinetic alignment, and whether these kinetic-linguistic associations are modulated either by the type of conversation or by the language spoken. We found that kinetic entrainment was positively associated with low-level linguistic (i.e., lexical) entrainment, while negatively associated with high-level linguistic (i.e., semantic) entrainment, in a cross-linguistically robust way. Our findings suggest that conversation makes use of a dynamic coordination of similarity and complementarity, both between individuals and between different communicative modalities, and provide evidence for a multimodal, interpersonal synergy account of interaction.
  • Trujillo, J. P., & Holler, J. (2024). Information distribution patterns in naturalistic dialogue differ across languages. Psychonomic Bulletin & Review. Advance online publication. doi:10.3758/s13423-024-02452-0.

    Abstract

    The natural ecology of language is conversation, with individuals taking turns speaking to communicate in a back-and-forth fashion. Language in this context involves strings of words that a listener must process while simultaneously planning their own next utterance. It would thus be highly advantageous if language users distributed information within an utterance in a way that may facilitate this processing–planning dynamic. While some studies have investigated how information is distributed at the level of single words or clauses, or in written language, little is known about how information is distributed within spoken utterances produced during naturalistic conversation. It is also not known how information distribution patterns of spoken utterances may differ across languages. We used a set of matched corpora (CallHome) containing 898 telephone conversations conducted in six different languages (Arabic, English, German, Japanese, Mandarin, and Spanish), analyzing more than 58,000 utterances, to assess whether there is evidence of distinct patterns of information distributions at the utterance level, and whether these patterns are similar or differed across the languages. We found that English, Spanish, and Mandarin typically show a back-loaded distribution, with higher information (i.e., surprisal) in the last half of utterances compared with the first half, while Arabic, German, and Japanese showed front-loaded distributions, with higher information in the first half compared with the last half. Additional analyses suggest that these patterns may be related to word order and rate of noun and verb usage. We additionally found that back-loaded languages have longer turn transition times (i.e., the time between speaker turns).

  • Trujillo, J. P., & Holler, J. (2024). Conversational facial signals combine into compositional meanings that change the interpretation of speaker intentions. Scientific Reports, 14: 2286. doi:10.1038/s41598-024-52589-0.

    Abstract

    Human language is extremely versatile, combining a limited set of signals in an unlimited number of ways. However, it is unknown whether conversational visual signals feed into the composite utterances with which speakers communicate their intentions. We assessed whether different combinations of visual signals lead to different intent interpretations of the same spoken utterance. Participants viewed a virtual avatar uttering spoken questions while producing single visual signals (i.e., head turn, head tilt, eyebrow raise) or combinations of these signals. After each video, participants classified the communicative intention behind the question. We found that composite utterances combining several visual signals conveyed different meaning compared to utterances accompanied by the single visual signals. However, responses to combinations of signals were more similar to the responses to related, rather than unrelated, individual signals, indicating a consistent influence of the individual visual signals on the whole. This study therefore provides first evidence for compositional, non-additive (i.e., Gestalt-like) perception of multimodal language.

  • Trupp, M. D., Bignardi, G., Specker, E., Vessel, E. A., & Pelowski, M. (2023). Who benefits from online art viewing, and how: The role of pleasure, meaningfulness, and trait aesthetic responsiveness in computer-based art interventions for well-being. Computers in Human Behavior, 145: 107764. doi:10.1016/j.chb.2023.107764.

    Abstract

    When experienced in-person, engagement with art has been associated with positive outcomes in well-being and mental health. However, especially in the last decade, art viewing, cultural engagement, and even ‘trips’ to museums have begun to take place online, via computers, smartphones, tablets, or in virtual reality. Similarly to what has been reported for in-person visits, online art engagements—easily accessible from personal devices—have also been associated with well-being impacts. However, a broader understanding of for whom and how online-delivered art might have well-being impacts is still lacking. In the present study, we used a Monet interactive art exhibition from Google Arts and Culture to deepen our understanding of the role of pleasure, meaning, and individual differences in the responsiveness to art. Beyond replicating the previous group-level effects, we confirmed our pre-registered hypothesis that trait-level inter-individual differences in aesthetic responsiveness predict some of the benefits that online art viewing has on well-being, and further that such inter-individual differences at the trait level were mediated by subjective experiences of pleasure and especially meaningfulness felt during the online-art intervention. The role that participants' experiences play as a possible mechanism during art interventions is discussed in light of recent theoretical models.

  • Ünal, E., Mamus, E., & Özyürek, A. (2023). Multimodal encoding of motion events in speech, gesture, and cognition. Language and Cognition. Advance online publication. doi:10.1017/langcog.2023.61.

    Abstract

    How people communicate about motion events and how this is shaped by language typology are mostly studied with a focus on linguistic encoding in speech. Yet, human communication typically involves an interactional exchange of multimodal signals, such as hand gestures that have different affordances for representing event components. Here, we review recent empirical evidence on multimodal encoding of motion in speech and gesture to gain a deeper understanding of whether and how language typology shapes linguistic expressions in different modalities, and how this changes across different sensory modalities of input and interacts with other aspects of cognition. Empirical evidence strongly suggests that Talmy’s typology of event integration predicts multimodal event descriptions in speech and gesture and visual attention to event components prior to producing these descriptions. Furthermore, variability within the event itself, such as type and modality of stimuli, may override the influence of language typology, especially for expression of manner.
  • Uzbas, F., & O’Neill, A. (2023). Spatial centrosome proteomic profiling of human iPSC-derived neural cells. Bio-protocol, 13(17): e4812. doi:10.21769/BioProtoc.4812.

    Abstract

    The centrosome governs many pan-cellular processes including cell division, migration, and cilium formation. However, very little is known about its cell type-specific protein composition and the sub-organellar domains where these protein interactions take place. Here, we outline a protocol for the spatial interrogation of the centrosome proteome in human cells, such as those differentiated from induced pluripotent stem cells (iPSCs), through co-immunoprecipitation of protein complexes around selected baits that are known to reside at different structural parts of the centrosome, followed by mass spectrometry. The protocol describes expansion and differentiation of human iPSCs to dorsal forebrain neural progenitors and cortical projection neurons, harvesting and lysis of cells for protein isolation, co-immunoprecipitation with antibodies against selected bait proteins, preparation for mass spectrometry, processing the mass spectrometry output files using MaxQuant software, and statistical analysis using Perseus software to identify the proteins enriched by each bait. Given the large number of cells needed for the isolation of centrosome proteins, this protocol can be scaled up or down by modifying the number of bait proteins and can also be carried out in batches. It can potentially be adapted for other cell types, organelles, and species as well.
  • van der Burght, C. L., Numssen, O., Schlaak, B., Goucha, T., & Hartwigsen, G. (2023). Differential contributions of inferior frontal gyrus subregions to sentence processing guided by intonation. Human Brain Mapping, 44(2), 585-598. doi:10.1002/hbm.26086.

    Abstract

    Auditory sentence comprehension involves processing content (semantics), grammar (syntax), and intonation (prosody). The left inferior frontal gyrus (IFG) is involved in sentence comprehension guided by these different cues, with neuroimaging studies preferentially locating syntactic and semantic processing in separate IFG subregions. However, this regional specialisation and its functional relevance has yet to be confirmed. This study probed the role of the posterior IFG (pIFG) for syntactic processing and the anterior IFG (aIFG) for semantic processing with repetitive transcranial magnetic stimulation (rTMS) in a task that required the interpretation of the sentence’s prosodic realisation. Healthy participants performed a sentence completion task with syntactic and semantic decisions, while receiving 10 Hz rTMS over either left aIFG, pIFG, or vertex (control). Initial behavioural analyses showed an inhibitory effect on accuracy without task-specificity. However, electrical field simulations revealed differential effects for both subregions. In the aIFG, stronger stimulation led to slower semantic processing, with no effect of pIFG stimulation. In contrast, we found a facilitatory effect on syntactic processing in both aIFG and pIFG, where higher stimulation strength was related to faster responses. Our results provide first evidence for the functional relevance of left aIFG in semantic processing guided by intonation. The stimulation effect on syntactic responses emphasises the importance of the IFG for syntax processing, without supporting the hypothesis of a pIFG-specific involvement. Together, the results support the notion of functionally specialised IFG subregions for diverse but fundamental cues for language processing.

  • Van Hoey, T., Thompson, A. L., Do, Y., & Dingemanse, M. (2023). Iconicity in ideophones: Guessing, memorizing, and reassessing. Cognitive Science, 47(4): e13268. doi:10.1111/cogs.13268.

    Abstract

    Iconicity, or the resemblance between form and meaning, is often ascribed to a special status and contrasted with default assumptions of arbitrariness in spoken language. But does iconicity in spoken language have a special status when it comes to learnability? A simple way to gauge learnability is to see how well something is retrieved from memory. We can further contrast this with guessability, to see (1) whether the ease of guessing the meanings of ideophones outperforms the rate at which they are remembered; and (2) how willing participants are to reassess what they were taught in a prior task—a novel contribution of this study. We replicate prior guessing and memory tasks using ideophones and adjectives from Japanese, Korean, and Igbo. Our results show that although native Cantonese speakers guessed ideophone meanings above chance level, they memorized both ideophones and adjectives with comparable accuracy. However, response time data show that participants took significantly longer to respond correctly to adjective–meaning pairs—indicating a discrepancy in cognitive effort that favored the recognition of ideophones. In a follow-up reassessment task, participants who were taught foil translations were more likely to choose the true translations for ideophones rather than adjectives. By comparing the findings from our guessing and memory tasks, we conclude that iconicity is more accessible if a task requires participants to actively seek out sound-meaning associations.
  • Van Wonderen, E., & Nieuwland, M. S. (2023). Lexical prediction does not rationally adapt to prediction error: ERP evidence from pre-nominal articles. Journal of Memory and Language, 132: 104435. doi:10.1016/j.jml.2023.104435.

    Abstract

    People sometimes predict upcoming words during language comprehension, but debate remains on when and to what extent such predictions indeed occur. The rational adaptation hypothesis holds that predictions develop with expected utility: people predict more strongly when predictions are frequently confirmed (low prediction error) rather than disconfirmed. However, supporting evidence is mixed thus far and has only involved measuring responses to supposedly predicted nouns, not to preceding articles that may also be predicted. The current, large-sample (N = 200) ERP study on written discourse comprehension in Dutch therefore employs the well-known ‘pre-nominal prediction effect’: enhanced N400-like ERPs for articles that are unexpected given a likely upcoming noun’s gender (i.e., the neuter gender article ‘het’ when people expect the common gender noun phrase ‘de krant’, the newspaper) compared to expected articles. We investigated whether the pre-nominal prediction effect is larger when most of the presented stories contain predictable article-noun combinations (75% predictable, 25% unpredictable) compared to when most stories contain unpredictable combinations (25% predictable, 75% unpredictable). Our results show the pre-nominal prediction effect in both contexts, with little evidence to suggest that this effect depended on the percentage of predictable combinations. Moreover, the little evidence suggesting such a dependence was primarily observed for unexpected, neuter-gender articles (‘het’), which is inconsistent with the rational adaptation hypothesis. In line with recent demonstrations (Nieuwland, 2021a,b), our results suggest that linguistic prediction is less ‘rational’ or Bayes optimal than is often suggested.
  • Van Berkum, J. J. A. (1986). De cognitieve psychologie op zoek naar grondslagen. Kennis en Methode: Tijdschrift voor wetenschapsfilosofie en methodologie, X, 348-360.
  • Van Wijk, C., & Kempen, G. (1982). De ontwikkeling van syntactische formuleervaardigheid bij kinderen van 9 tot 16 jaar. Nederlands Tijdschrift voor de Psychologie en haar Grensgebieden, 37(8), 491-509.

    Abstract

    An essential phenomenon in the development towards syntactic maturity after early childhood is the increasing use of so-called sentence-combining transformations. Especially by using subordination, complex sentences are produced. The research reported here is an attempt to arrive at a more adequate characterization and explanation. Our starting point was an analysis of 280 texts written by Dutch-speaking pupils of the two highest grades of the primary school and the four lowest grades of three different types of secondary education. It was examined whether systematic shifts in the use of certain groups of so-called function words could be traced. We concluded that the development of the syntactic formulating ability can be characterized as an increase in connectivity: the use of all kinds of function words which explicitly mark logico-semantic relations between propositions. This development starts by inserting special adverbs and coordinating conjunctions resulting in various types of coordination. In a later stage, the syntactic patterning of the sentence is affected as well (various types of subordination). The increase in sentence complexity is only one aspect of the entire development. An explanation for the increase in connectivity is offered based upon a distinction between narrative and expository language use. The latter, but not the former, is characterized by frequent occurrence of connectives. The development in syntactic formulating ability includes a high level of skill in expository language use. Speed of development is determined by intensity of training, e.g. in scholastic and occupational settings.
  • Van Berkum, J. J. A. (1986). Doordacht gevoel: Emoties als informatieverwerking. De Psycholoog, 21(9), 417-423.
  • Van Berkum, J. J. A., Koornneef, A. W., Otten, M., & Nieuwland, M. S. (2007). Establishing reference in language comprehension: An electrophysiological perspective. Brain Research, 1146, 158-171. doi:10.1016/j.brainres.2006.06.091.

    Abstract

    The electrophysiology of language comprehension has long been dominated by research on syntactic and semantic integration. However, to understand expressions like "he did it" or "the little girl", combining word meanings in accordance with semantic and syntactic constraints is not enough—readers and listeners also need to work out what or who is being referred to. We review our event-related brain potential research on the processes involved in establishing reference, and present a new experiment in which we examine when and how the implicit causality associated with specific interpersonal verbs affects the interpretation of a referentially ambiguous pronoun. The evidence suggests that upon encountering a singular noun or pronoun, readers and listeners immediately inspect their situation model for a suitable discourse entity, such that they can discriminate between having too many, too few, or exactly the right number of referents within at most half a second. Furthermore, our implicit causality findings indicate that a fragment like "David praised Linda because..." can immediately foreground a particular referent, to the extent that a subsequent "he" is at least initially construed as a syntactic error. In all, our brain potential findings suggest that referential processing is highly incremental, and not necessarily contingent upon the syntax. In addition, they demonstrate that we can use ERPs to relatively selectively keep track of how readers and listeners establish reference.
  • Van Wingen, G., Van Broekhoven, F., Verkes, R. J., Petersson, K. M., Bäckström, T., Buitelaar, J., & Fernández, G. (2007). How progesterone impairs memory for biologically salient stimuli in healthy young women. Journal of Neuroscience, 27(42), 11416-11423. doi:10.1523/JNEUROSCI.1715-07.2007.

    Abstract

    Progesterone, or rather its neuroactive metabolite allopregnanolone, modulates amygdala activity and thereby influences anxiety. Cognition and, in particular, memory are also altered by allopregnanolone. In the present study, we investigated whether allopregnanolone modulates memory for biologically salient stimuli by influencing amygdala activity, which in turn may affect neural processes in other brain regions. A single progesterone dose was administered orally to healthy young women in a double-blind, placebo-controlled, crossover design, and participants were asked to memorize and recognize faces while undergoing functional magnetic resonance imaging. Progesterone decreased recognition accuracy without affecting reaction times. The imaging results show that the amygdala, hippocampus, and fusiform gyrus supported memory formation. Importantly, progesterone decreased responses to faces in the amygdala and fusiform gyrus during memory encoding, whereas it increased hippocampal responses. The progesterone-induced decrease in neural activity in the amygdala and fusiform gyrus predicted the decrease in memory performance across subjects. However, progesterone did not modulate the differential activation between subsequently remembered and subsequently forgotten faces in these areas. A similar pattern of results was observed in the fusiform gyrus and prefrontal cortex during memory retrieval. These results suggest that allopregnanolone impairs memory by reducing the recruitment of those brain regions that support memory formation and retrieval. Given the important role of the amygdala in the modulation of memory, these results suggest that allopregnanolone alters memory by influencing amygdala activity, which in turn may affect memory processes in other brain regions.
  • Van Valin Jr., R. D. (2007). Some thoughts on the reason for the lesser status of typology in the USA as opposed to Europe. Linguistic Typology, 11(1), 253-257. doi:10.1515/LINGTY.2007.019.

    Abstract

    This article addresses the issue of the different status that typology has in American linguistics as opposed to European linguistics. The historical roots of the difference lie in both structural and generative linguistics, in the contrasts between post-Bloomfieldian structuralism in the US vs. Praguean structuralism in Europe, and in the extent of the influence of generative grammar on the two continents.
  • Van Wijk, C., & Kempen, G. (1982). Syntactische formuleervaardigheid en het schrijven van opstellen. Pedagogische Studiën, 59, 126-136.

    Abstract

    Several attempts have been made to measure syntactic formulating ability directly and objectively from spoken or written texts, usually taking the syntactic complexity of the produced utterances as the point of departure. However, this has not led to a plausible, well-defined, and practically usable index. Following a critical discussion of the notion of complexity, this article proposes a new criterion: the connectivity of the utterances, that is, the explicit marking of logico-semantic relations between propositions. Connectivity is easy to score on the basis of function words that mark various forms of coordination and subordination. This new index escapes the criticism that could be levelled at complexity, turns out to discriminate clearly between groups of pupils that differ in age and educational level, and connects with recent psycholinguistic and sociolinguistic theory. Finally, some educational implications are outlined.
  • Van Berkum, J. J. A., Hagoort, P., & Brown, C. M. (2000). The use of referential context and grammatical gender in parsing: A reply to Brysbaert and Mitchell. Journal of Psycholinguistic Research, 29(5), 467-481. doi:10.1023/A:1005168025226.

    Abstract

    Based on the results of an event-related brain potentials (ERP) experiment (Van Berkum, Brown, & Hagoort, 1999a, b), we have recently argued that discourse-level referential context can be taken into account extremely rapidly by the parser. Moreover, our ERP results indicated that local grammatical gender information, although available within a few hundred milliseconds from word onset, is not always used quickly enough to prevent the parser from considering a discourse-supported, but agreement-violating, syntactic analysis. In a comment on our work, Brysbaert and Mitchell (2000) have raised concerns about the methodology of our ERP experiment and have challenged our interpretation of the results. In this reply, we argue that these concerns are unwarranted and that, in contrast to our own interpretation, the alternative explanations provided by Brysbaert and Mitchell do not account for the full pattern of ERP results.
  • Van Uytvanck, D., Dukers, A., Ringersma, J., & Wittenburg, P. (2007). Using Google Earth to access language resources. Language Archive Newsletter, (9), 4-7.

    Abstract

    Over the past ten years Geographic Information Systems (GIS) have evolved from a highly specialised niche technology to one that is used daily by a wide range of people. This article describes geographic browsing of language archives, which provides intuitive exploration of resources and permits integration and correlation of information from different archives, even across different research disciplines. In order to facilitate both exploration and management of resources, digital language archives are organised according to criteria such as language name, research topic, project information, researchers, countries, or genres. A set of such criteria can form a tree-like classification scheme, such as in the MPI-IMDI archive, which in turn forms the main method of searching and querying the archive resources. Searching for information can be difficult for occasional users because effective use of these search-fields typically requires specialised knowledge. We assume that many non-specialist users of language resources will search by language name, language family, or geographic area, so that geographic navigation would offer a very powerful search method. We also assume that such users are familiar with maps, and that geographic browsing is more intuitive than browsing classification trees, so these users would prefer to start with a large scale map and then zoom in to find the data that interests them. Therefore, classification trees and geographic maps provide complementary methods for accessing language resources to meet the needs of different user groups. We selected Google Earth (GE) as a geographic browsing system and overlaid it with linguistic information. GE was chosen because it is available via the web, it has good navigation controls, it is familiar to many web users, and because the overlaid linguistic information can be formulated in XML, making it comparatively easy to interchange with other geographic systems.
  • Van der Werf, O. J., Schuhmann, T., De Graaf, T., Ten Oever, S., & Sack, A. T. (2023). Investigating the role of task relevance during rhythmic sampling of spatial locations. Scientific Reports, 13: 12707. doi:10.1038/s41598-023-38968-z.

    Abstract

    Recently it has been discovered that visuospatial attention operates rhythmically, rather than being stably employed over time. A low-frequency 7–8 Hz rhythmic mechanism coordinates periodic windows to sample relevant locations and to shift towards other, less relevant locations in a visual scene. Rhythmic sampling theories would predict that when two locations are relevant, 8 Hz sampling mechanisms split into two, effectively resulting in a 4 Hz sampling frequency at each location. Therefore, it is expected that rhythmic sampling is influenced by the relative importance of locations for the task at hand. To test this, we employed an orienting task with an arrow cue, where participants were asked to respond to a target presented in one visual field. The cue-to-target interval was systematically varied, allowing us to assess whether performance follows a rhythmic pattern across cue-to-target delays. We manipulated a location’s task relevance by altering the validity of the cue, thereby predicting the correct location in 60%, 80% or 100% of trials. Results revealed significant 4 Hz performance fluctuations at cued right visual field targets with low cue validity (60%), suggesting regular sampling of both locations. With high cue validity (80%), we observed a peak at 8 Hz towards non-cued targets, although this was not significant. These results were in line with our hypothesis, suggesting a goal-directed balancing of attentional sampling (cued location) and shifting (non-cued location) depending on the relevance of locations in a visual scene. However, considering the hemifield specificity of the effect together with the absence of the expected effects for cued trials in the high-validity conditions, we further discuss the interpretation of the data.

    Additional information

    supplementary information
  • Van Geert, E., Ding, R., & Wagemans, J. (2024). A cross-cultural comparison of aesthetic preferences for neatly organized compositions: Native Chinese- versus Native Dutch-speaking samples. Empirical Studies of the Arts. Advance online publication. doi:10.1177/02762374241245917.

    Abstract

    Do aesthetic preferences for images of neatly organized compositions (e.g., images collected on blogs like Things Organized Neatly©) generalize across cultures? In an earlier study, focusing on stimulus and personal properties related to order and complexity, Western participants indicated their preference for one of two simultaneously presented images (100 pairs). In the current study, we compared the data of the native Dutch-speaking participants from this earlier sample (N = 356) to newly collected data from a native Chinese-speaking sample (N = 220). Overall, aesthetic preferences were quite similar across cultures. When relating preferences for each sample to ratings of order, complexity, soothingness, and fascination collected from a Western, mainly Dutch-speaking sample, the results hint at a cross-culturally consistent preference for images that Western participants rate as more ordered, but a cross-culturally diverse relation between preferences and complexity.
  • van der Burght, C. L., Friederici, A. D., Maran, M., Papitto, G., Pyatigorskaya, E., Schroen, J., Trettenbrein, P., & Zaccarella, E. (2023). Cleaning up the brickyard: How theory and methodology shape experiments in cognitive neuroscience of language. Journal of Cognitive Neuroscience, 35(12), 2067-2088. doi:10.1162/jocn_a_02058.

    Abstract

    The capacity for language is a defining property of our species, yet despite decades of research, evidence on its neural basis is still mixed and a generalized consensus is difficult to achieve. We suggest that this is partly caused by researchers defining “language” in different ways, with focus on a wide range of phenomena, properties, and levels of investigation. Accordingly, there is very little agreement amongst cognitive neuroscientists of language on the operationalization of fundamental concepts to be investigated in neuroscientific experiments. Here, we review chains of derivation in the cognitive neuroscience of language, focusing on how the hypothesis under consideration is defined by a combination of theoretical and methodological assumptions. We first attempt to disentangle the complex relationship between linguistics, psychology, and neuroscience in the field. Next, we focus on how the conclusions that can be drawn from any experiment are inherently constrained by auxiliary assumptions, both theoretical and methodological, on which the validity of those conclusions rests. These issues are discussed in the context of classical experimental manipulations as well as study designs that employ novel approaches such as naturalistic stimuli and computational modelling. We conclude by proposing that a highly interdisciplinary field such as the cognitive neuroscience of language requires researchers to form explicit statements concerning the theoretical definitions, methodological choices, and other constraining factors involved in their work.
  • Van der Werff, J., Ravignani, A., & Jadoul, Y. (2024). thebeat: A Python package for working with rhythms and other temporal sequences. Behavior Research Methods, 56, 3725-3736. doi:10.3758/s13428-023-02334-8.

    Abstract

    thebeat is a Python package for working with temporal sequences and rhythms in the behavioral and cognitive sciences, as well as in bioacoustics. It provides functionality for creating experimental stimuli, and for visualizing and analyzing temporal data. Sequences, sounds, and experimental trials can be generated using single lines of code. thebeat contains functions for calculating common rhythmic measures, such as interval ratios, and for producing plots, such as circular histograms. thebeat saves researchers time when creating experiments, and provides the first steps in collecting widely accepted methods for use in timing research. thebeat is an open-source, on-going, and collaborative project, and can be extended for use in specialized subfields. thebeat integrates easily with the existing Python ecosystem, allowing one to combine our tested code with custom-made scripts. The package was specifically designed to be useful for both skilled and novice programmers. thebeat provides a foundation for working with temporal sequences onto which additional functionality can be built. This combination of specificity and plasticity should facilitate research in multiple research contexts and fields of study.
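thebeat's own API is not reproduced here; as a package-independent sketch of one rhythmic measure the abstract mentions, interval ratios, the commonly used definition r_k = I_k / (I_k + I_{k+1}) over successive inter-onset intervals can be computed as:

```python
def interval_ratios(iois):
    """Ratio of each inter-onset interval (IOI) to the sum of itself and
    its successor: r_k = I_k / (I_k + I_{k+1}).
    An isochronous sequence yields ratios of 0.5 throughout."""
    if len(iois) < 2:
        raise ValueError("Need at least two IOIs to form a ratio.")
    return [iois[k] / (iois[k] + iois[k + 1]) for k in range(len(iois) - 1)]

# Isochronous sequence of 500-ms IOIs: every ratio is 0.5.
print(interval_ratios([500, 500, 500, 500]))  # [0.5, 0.5, 0.5]
```

The function name and signature are illustrative only; in practice thebeat bundles such measures together with stimulus generation and plotting.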
  • Verga, L., D’Este, G., Cassani, S., Leitner, C., Kotz, S. A., Ferini-Strambi, L., & Galbiati, A. (2023). Sleeping with time in mind? A literature review and a proposal for a screening questionnaire on self-awakening. PLoS One, 18(3): e0283221. doi:10.1371/journal.pone.0283221.

    Abstract

    Some people report being able to spontaneously “time” the end of their sleep. This ability to self-awaken challenges the idea of sleep as a passive cognitive state. Yet, current evidence on this phenomenon is limited, partly because of the varied definitions of self-awakening and experimental approaches used to study it. Here, we provide a review of the literature on self-awakening. Our aim is to i) contextualise the phenomenon, ii) propose an operating definition, and iii) summarise the scientific approaches used so far. The literature review identified 17 studies on self-awakening. Most of them adopted an objective sleep evaluation (76%), targeted nocturnal sleep (76%), and used a single criterion to define the success of awakening (82%); for most studies, this corresponded to awakening occurring in a time window of 30 minutes around the expected awakening time. Out of 715 total participants, 125 (17%) reported being self-awakeners, with an average age of 23.24 years and a slight predominance of males compared to females. These results reveal self-awakening as a relatively rare phenomenon. To facilitate the study of self-awakening, and based on the results of the literature review, we propose a quick paper-and-pencil screening questionnaire for self-awakeners and provide an initial validation for it. Taken together, the combined results of the literature review and the proposed questionnaire help in characterising a theoretical framework for self-awakenings, while providing a useful tool and empirical suggestions for future experimental studies, which should ideally employ objective measurements.
  • Verga, L., Kotz, S. A., & Ravignani, A. (2023). The evolution of social timing. Physics of Life Reviews, 46, 131-151. doi:10.1016/j.plrev.2023.06.006.

    Abstract

    Sociality and timing are tightly interrelated in human interaction as seen in turn-taking or synchronised dance movements. Sociality and timing also show in communicative acts of other species that might be pleasurable, but also necessary for survival. Sociality and timing often co-occur, but their shared phylogenetic trajectory is unknown: How, when, and why did they become so tightly linked? Answering these questions is complicated by several constraints; these include the use of divergent operational definitions across fields and species, the focus on diverse mechanistic explanations (e.g., physiological, neural, or cognitive), and the frequent adoption of anthropocentric theories and methodologies in comparative research. These limitations hinder the development of an integrative framework on the evolutionary trajectory of social timing and make comparative studies not as fruitful as they could be. Here, we outline a theoretical and empirical framework to test contrasting hypotheses on the evolution of social timing with species-appropriate paradigms and consistent definitions. To facilitate future research, we introduce an initial set of representative species and empirical hypotheses. The proposed framework aims at building and contrasting evolutionary trees of social timing toward and beyond the crucial branch represented by our own lineage. Given the integration of cross-species and quantitative approaches, this research line might lead to an integrated empirical-theoretical paradigm and, as a long-term goal, explain why humans are such socially coordinated animals.
  • Verhoef, E., Allegrini, A. G., Jansen, P. R., Lange, K., Wang, C. A., Morgan, A. T., Ahluwalia, T. S., Symeonides, C., EAGLE-Working Group, Eising, E., Franken, M.-C., Hypponen, E., Mansell, T., Olislagers, M., Omerovic, E., Rimfeld, K., Schlag, F., Selzam, S., Shapland, C. Y., Tiemeier, H., Whitehouse, A. J. O., Saffery, R., Bønnelykke, K., Reilly, S., Pennell, C. E., Wake, M., Cecil, C. A., Plomin, R., Fisher, S. E., & St Pourcain, B. (2024). Genome-wide analyses of vocabulary size in infancy and toddlerhood: Associations with Attention-Deficit/Hyperactivity Disorder and cognition-related traits. Biological Psychiatry, 95(1), 859-869. doi:10.1016/j.biopsych.2023.11.025.

    Abstract

    Background

    The number of words children produce (expressive vocabulary) and understand (receptive vocabulary) changes rapidly during early development, partially due to genetic factors. Here, we performed a meta–genome-wide association study of vocabulary acquisition and investigated polygenic overlap with literacy, cognition, developmental phenotypes, and neurodevelopmental conditions, including attention-deficit/hyperactivity disorder (ADHD).

    Methods

    We studied 37,913 parent-reported vocabulary size measures (English, Dutch, Danish) for 17,298 children of European descent. Meta-analyses were performed for early-phase expressive (infancy, 15–18 months), late-phase expressive (toddlerhood, 24–38 months), and late-phase receptive (toddlerhood, 24–38 months) vocabulary. Subsequently, we estimated single nucleotide polymorphism–based heritability (SNP-h2) and genetic correlations (rg) and modeled underlying factor structures with multivariate models.

    Results

    Early-life vocabulary size was modestly heritable (SNP-h2 = 0.08–0.24). Genetic overlap between infant expressive and toddler receptive vocabulary was negligible (rg = 0.07), although each measure was moderately related to toddler expressive vocabulary (rg = 0.69 and rg = 0.67, respectively), suggesting a multifactorial genetic architecture. Both infant and toddler expressive vocabulary were genetically linked to literacy (e.g., spelling: rg = 0.58 and rg = 0.79, respectively), underlining genetic similarity. However, a genetic association of early-life vocabulary with educational attainment and intelligence emerged only during toddlerhood (e.g., receptive vocabulary and intelligence: rg = 0.36). Increased ADHD risk was genetically associated with larger infant expressive vocabulary (rg = 0.23). Multivariate genetic models in the ALSPAC (Avon Longitudinal Study of Parents and Children) cohort confirmed this finding for ADHD symptoms (e.g., at age 13; rg = 0.54) but showed that the association effect reversed for toddler receptive vocabulary (rg = −0.74), highlighting developmental heterogeneity.

    Conclusions

    The genetic architecture of early-life vocabulary changes during development, shaping polygenic association patterns with later-life ADHD, literacy, and cognition-related traits.
  • Vernes, S. C., Spiteri, E., Nicod, J., Groszer, M., Taylor, J. M., Davies, K. E., Geschwind, D., & Fisher, S. E. (2007). High-throughput analysis of promoter occupancy reveals direct neural targets of FOXP2, a gene mutated in speech and language disorders. American Journal of Human Genetics, 81(6), 1232-1250. doi:10.1086/522238.

    Abstract

    We previously discovered that mutations of the human FOXP2 gene cause a monogenic communication disorder, primarily characterized by difficulties in learning to make coordinated sequences of articulatory gestures that underlie speech. Affected people have deficits in expressive and receptive linguistic processing and display structural and/or functional abnormalities in cortical and subcortical brain regions. FOXP2 provides a unique window into neural processes involved in speech and language. In particular, its role as a transcription factor gene offers powerful functional genomic routes for dissecting critical neurogenetic mechanisms. Here, we employ chromatin immunoprecipitation coupled with promoter microarrays (ChIP-chip) to successfully identify genomic sites that are directly bound by FOXP2 protein in native chromatin of human neuron-like cells. We focus on a subset of downstream targets identified by this approach, showing that altered FOXP2 levels yield significant changes in expression in our cell-based models and that FOXP2 binds in a specific manner to consensus sites within the relevant promoters. Moreover, we demonstrate significant quantitative differences in target expression in embryonic brains of mutant mice, mediated by specific in vivo Foxp2-chromatin interactions. This work represents the first identification and in vivo verification of neural targets regulated by FOXP2. Our data indicate that FOXP2 has dual functionality, acting to either repress or activate gene expression at occupied promoters. The identified targets suggest roles in modulating synaptic plasticity, neurodevelopment, neurotransmission, and axon guidance and represent novel entry points into in vivo pathways that may be disturbed in speech and language disorders.
  • Vessel, E. A., Pasqualette, L., Uran, C., Koldehoff, S., Bignardi, G., & Vinck, M. (2023). Self-relevance predicts the aesthetic appeal of real and synthetic artworks generated via neural style transfer. Psychological Science, 34(9), 1007-1023. doi:10.1177/09567976231188107.

    Abstract

    What determines the aesthetic appeal of artworks? Recent work suggests that aesthetic appeal can, to some extent, be predicted from a visual artwork’s image features. Yet a large fraction of variance in aesthetic ratings remains unexplained and may relate to individual preferences. We hypothesized that an artwork’s aesthetic appeal depends strongly on self-relevance. In a first study (N = 33 adults, online replication N = 208), rated aesthetic appeal for real artworks was positively predicted by rated self-relevance. In a second experiment (N = 45 online), we created synthetic, self-relevant artworks using deep neural networks that transferred the style of existing artworks to photographs. Style transfer was applied to self-relevant photographs selected to reflect participant-specific attributes such as autobiographical memories. Self-relevant, synthetic artworks were rated as more aesthetically appealing than matched control images, at a level similar to human-made artworks. Thus, self-relevance is a key determinant of aesthetic appeal, independent of artistic skill and image features.

    Additional information

    supplementary materials
  • Vingerhoets, G., Verhelst, H., Gerrits, R., Badcock, N., Bishop, D. V. M., Carey, D., Flindall, J., Grimshaw, G., Harris, L. J., Hausmann, M., Hirnstein, M., Jäncke, L., Joliot, M., Specht, K., Westerhausen, R., & LICI consortium (2023). Laterality indices consensus initiative (LICI): A Delphi expert survey report on recommendations to record, assess, and report asymmetry in human behavioural and brain research. Laterality, 28(2-3), 122-191. doi:10.1080/1357650X.2023.2199963.

    Abstract

    Laterality indices (LIs) quantify the left-right asymmetry of brain and behavioural variables and provide a measure that is statistically convenient and seemingly easy to interpret. Substantial variability in how structural and functional asymmetries are recorded, calculated, and reported, however, suggests little agreement on the conditions required for its valid assessment. The present study aimed for consensus on general aspects in this context of laterality research, and more specifically within a particular method or technique (i.e., dichotic listening, visual half-field technique, performance asymmetries, preference bias reports, electrophysiological recording, functional MRI, structural MRI, and functional transcranial Doppler sonography). Experts in laterality research were invited to participate in an online Delphi survey to evaluate consensus and stimulate discussion. In Round 0, 106 experts generated 453 statements on what they considered good practice in their field of expertise. Statements were organised into a 295-statement survey that the experts then were asked, in Round 1, to independently assess for importance and support, which further reduced the survey to 241 statements that were presented again to the experts in Round 2. Based on the Round 2 input, we present a set of critically reviewed key recommendations to record, assess, and report laterality research for various methods.

  • Vonk, W., & Cozijn, R. (2007). Psycholinguïstiek: Een kwantitatieve wetenschap. Tijdschrift voor Nederlandse Taal- en Letterkunde, 123, 55-69.
  • Vosse, T., & Kempen, G. (2000). Syntactic structure assembly in human parsing: A computational model based on competitive inhibition and a lexicalist grammar. Cognition, 75, 105-143.

    Abstract

    We present the design, implementation and simulation results of a psycholinguistic model of human syntactic processing that meets major empirical criteria. The parser operates in conjunction with a lexicalist grammar and is driven by syntactic information associated with heads of phrases. The dynamics of the model are based on competition by lateral inhibition ('competitive inhibition'). Input words activate lexical frames (i.e. elementary trees anchored to input words) in the mental lexicon, and a network of candidate 'unification links' is set up between frame nodes. These links represent tentative attachments that are graded rather than all-or-none. Candidate links that, due to grammatical or 'treehood' constraints, are incompatible, compete for inclusion in the final syntactic tree by sending each other inhibitory signals that reduce the competitor's attachment strength. The outcome of these local and simultaneous competitions is controlled by dynamic parameters, in particular by the Entry Activation and the Activation Decay rate of syntactic nodes, and by the Strength and Strength Build-up rate of Unification links. In case of a successful parse, a single syntactic tree is returned that covers the whole input string and consists of lexical frames connected by winning Unification links. 
Simulations are reported of a significant range of psycholinguistic parsing phenomena in both normal and aphasic speakers of English: (i) various effects of linguistic complexity (single versus double, center versus right-hand self-embeddings of relative clauses; the difference between relative clauses with subject and object extraction; the contrast between a complement clause embedded within a relative clause versus a relative clause embedded within a complement clause); (ii) effects of local and global ambiguity, and of word-class and syntactic ambiguity (including recency and length effects); (iii) certain difficulty-of-reanalysis effects (contrasts between local ambiguities that are easy to resolve versus ones that lead to serious garden-path effects); (iv) effects of agrammatism on parsing performance, in particular the performance of various groups of aphasic patients on several sentence types.
  • Wang, M., Shao, Z., Verdonschot, R. G., Chen, Y., & Schiller, N. O. (2023). Orthography influences spoken word production in blocked cyclic naming. Psychonomic Bulletin & Review, 30, 383-392. doi:10.3758/s13423-022-02123-y.

    Abstract

    Does the way a word is written influence its spoken production? Previous studies suggest that orthography is involved only when the orthographic representation is highly relevant during speaking (e.g., in reading-aloud tasks). To address this issue, we carried out two experiments using the blocked cyclic picture-naming paradigm. In both experiments, participants were asked to name pictures repeatedly in orthographically homogeneous or heterogeneous blocks. In the naming task, the written form was not shown; however, the radical of the first character overlapped between the four pictures in this block type. A facilitative orthographic effect was found when picture names shared part of their written forms, compared with the heterogeneous condition. This facilitative effect was independent of the position of orthographic overlap (i.e., the left, the lower, or the outer part of the character). These findings strongly suggest that orthography can influence speaking even when it is not highly relevant (i.e., during picture naming) and the orthographic effect is less likely to be attributed to strategic preparation.
  • Wang, M.-Y., Korbmacher, M., Eikeland, R., Craven, A. R., & Specht, K. (2024). The intra-individual reliability of 1H-MRS measurement in the anterior cingulate cortex across 1 year. Human Brain Mapping, 45(1): e26531. doi:10.1002/hbm.26531.

    Abstract

    Magnetic resonance spectroscopy (MRS) is the primary method that can measure the levels of metabolites in the brain in vivo. To achieve its potential in clinical usage, the reliability of the measurement requires further articulation. Although there are many studies that investigate the reliability of gamma-aminobutyric acid (GABA), comparatively few studies have investigated the reliability of other brain metabolites, such as glutamate (Glu), N-acetyl-aspartate (NAA), creatine (Cr), phosphocreatine (PCr), or myo-inositol (mI), which all play a significant role in brain development and functions. In addition, previous studies which predominately used only two measurements (two data points) failed to provide the details of the time effect (e.g., time-of-day) on MRS measurement within subjects. Therefore, in this study, MRS data located in the anterior cingulate cortex (ACC) were repeatedly recorded across 1 year, leading to at least 25 sessions for each subject, with the aim of exploring the variability of other metabolites by using the coefficient of variability (CV) as an index; the smaller the CV, the more reliable the measurements. We found that the metabolites of NAA, tNAA, and tCr showed the smallest CVs (between 1.43% and 4.90%), and the metabolites of Glu, Glx, mI, and tCho showed modest CVs (between 4.26% and 7.89%). Furthermore, we found that the concentration reference of the ratio to water results in smaller CVs compared to the ratio to tCr. In addition, we did not find any time-of-day effect on the MRS measurements. Collectively, the results of this study indicate that the MRS measurement is reasonably reliable in quantifying the levels of metabolites.
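As a minimal illustration (not the authors' analysis code), the coefficient of variability used here to index reliability is simply the standard deviation expressed as a percentage of the mean:

```python
import statistics

def coefficient_of_variability(values, sample=True):
    """CV (%) = standard deviation / mean * 100.
    Smaller values indicate more reliable (less variable) measurements."""
    mean = statistics.fmean(values)
    if mean == 0:
        raise ValueError("CV is undefined for a zero mean.")
    sd = statistics.stdev(values) if sample else statistics.pstdev(values)
    return sd / mean * 100

# Hypothetical repeated NAA estimates across sessions (arbitrary units):
print(round(coefficient_of_variability([10.2, 10.0, 9.9, 10.1, 10.3]), 2))
```

Whether the sample or population standard deviation is used, and whether metabolite levels are referenced to water or to tCr before computing the CV, are choices that affect the resulting percentages, as the abstract's water-versus-tCr comparison illustrates.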

    Additional information

    tables and figures data
  • Wang, X., Jahagirdar, S., Bakker, W., Lute, C., Kemp, B., Van Knegsel, A., & Saccenti, E. (2024). Discrimination of lipogenic or glucogenic diet effects in early-lactation dairy cows using plasma metabolite abundances and ratios in combination with machine learning. Metabolites, 14(4): 230. doi:10.3390/metabo14040230.

    Abstract

    During early lactation, dairy cows have a negative energy balance since their energy demands exceed their energy intake: in this study, we aimed to investigate the association between diet and plasma metabolomics profiles and how these relate to energy imbalance over the course of the early-lactation stage. Holstein-Friesian cows were randomly assigned to a glucogenic (n = 15) or lipogenic (n = 15) diet in early lactation. Blood was collected in week 2 and week 4 after calving. Plasma metabolite profiles were detected using liquid chromatography–mass spectrometry (LC-MS), and a total of 39 metabolites were identified. Two plasma metabolomic profiles were available every week for each cow. Metabolite abundances and metabolite ratios were used for the analysis with the XGBoost algorithm to discriminate between diet treatments and lactation weeks. Using metabolite ratios resulted in better discrimination performance than using metabolite abundances when assigning cows to a lipogenic diet or a glucogenic diet. The discrimination performance between lipogenic and glucogenic diet effects improved from 0.606 to 0.753 in week 2 and from 0.696 to 0.842 in week 4 (as measured by the area under the curve, AUC) when metabolite abundance ratios were used instead of abundances. The top discriminating ratios for diet were the ratio of arginine to tyrosine and the ratio of aspartic acid to valine in week 2 and week 4, respectively. For cows fed the lipogenic diet, choline and the ratio of creatinine to tryptophan were the top features discriminating cows in week 2 vs. week 4. For cows fed the glucogenic diet, methionine and the ratio of 4-hydroxyproline to choline were the top features discriminating dietary effects in week 2 or week 4. This study shows the added value of using metabolite abundance ratios to discriminate between lipogenic and glucogenic diets and lactation weeks in early-lactation cows when using metabolomics data.
The application of this research will help to accurately regulate the nutrition of lactating dairy cows and promote sustainable agricultural development.
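The ratio-based feature construction described above can be sketched as follows; the metabolite names and values are invented for illustration, and the study's actual pipeline used XGBoost on LC-MS abundances:

```python
from itertools import combinations

def ratio_features(abundances):
    """Expand a {metabolite: abundance} profile into pairwise ratio features,
    e.g. 'arginine/tyrosine', for use alongside the raw abundances."""
    features = {}
    for a, b in combinations(sorted(abundances), 2):
        if abundances[b] == 0:
            continue  # skip undefined ratios
        features[f"{a}/{b}"] = abundances[a] / abundances[b]
    return features

# Invented example profile (three metabolites -> three pairwise ratios):
profile = {"arginine": 4.0, "tyrosine": 2.0, "valine": 1.0}
print(ratio_features(profile))
```

With the study's 39 metabolites, this expansion yields 39 × 38 / 2 = 741 pairwise ratio features, which is why a feature-selecting learner such as XGBoost is a natural fit for this representation.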
  • Wassenaar, M., & Hagoort, P. (2007). Thematic role assignment in patients with Broca's aphasia: Sentence-picture matching electrified. Neuropsychologia, 45(4), 716-740. doi:10.1016/j.neuropsychologia.2006.08.016.

    Abstract

    An event-related brain potential experiment was carried out to investigate on-line thematic role assignment during sentence–picture matching in patients with Broca's aphasia. Subjects were presented with a picture that was followed by an auditory sentence. The sentence either matched the picture or mismatched the visual information depicted. Sentences differed in complexity, and ranged from simple active semantically irreversible sentences to passive semantically reversible sentences. ERPs were recorded while subjects were engaged in sentence–picture matching. In addition, reaction time and accuracy were measured. Three groups of subjects were tested: Broca patients (N = 10), non-aphasic patients with a right hemisphere (RH) lesion (N = 8), and healthy age-matched controls (N = 15). The results of this study showed that, in neurologically unimpaired individuals, thematic role assignment in the context of visual information was an immediate process. This is in contrast to patients with Broca's aphasia, who demonstrated no signs of on-line sensitivity to the picture–sentence mismatches. The syntactic contribution to the thematic role assignment process seemed to be diminished given the reduction and even absence of P600 effects. Nevertheless, Broca patients showed some off-line behavioral sensitivity to the sentence–picture mismatches. The long response latencies of Broca's aphasics make it likely that off-line response strategies were used.
  • Wesseldijk, L. W., Henechowicz, T. L., Baker, D. J., Bignardi, G., Karlsson, R., Gordon, R. L., Mosing, M. A., Ullén, F., & Fisher, S. E. (2024). Notes from Beethoven’s genome. Current Biology, 34(6), R233-R234. doi:10.1016/j.cub.2024.01.025.

    Abstract

    Rapid advances over the last decade in DNA sequencing and statistical genetics enable us to investigate the genomic makeup of individuals throughout history. In a recent notable study, Begg et al. used Ludwig van Beethoven’s hair strands for genome sequencing and explored genetic predispositions for some of his documented medical issues. Given that it was arguably Beethoven’s skills as a musician and composer that made him an iconic figure in Western culture, we here extend the approach and apply it to musicality. We use this as an example to illustrate the broader challenges of individual-level genetic predictions.

    Additional information

    supplemental information
  • Whelan, L., Dockery, A., Stephenson, K. A. J., Zhu, J., Kopčić, E., Post, I. J. M., Khan, M., Corradi, Z., Wynne, N., O’ Byrne, J. J., Duignan, E., Silvestri, G., Roosing, S., Cremers, F. P. M., Keegan, D. J., Kenna, P. F., & Farrar, G. J. (2023). Detailed analysis of an enriched deep intronic ABCA4 variant in Irish Stargardt disease patients. Scientific Reports, 13: 9380. doi:10.1038/s41598-023-35889-9.

    Abstract

    Over 15% of probands in a large cohort of more than 1500 inherited retinal degeneration patients present with a clinical diagnosis of Stargardt disease (STGD1), a recessive form of macular dystrophy caused by biallelic variants in the ABCA4 gene. Participants were clinically examined and underwent either target capture sequencing of the exons and some pathogenic intronic regions of ABCA4, sequencing of the entire ABCA4 gene or whole genome sequencing. ABCA4 c.4539+2028C>T, p.[=,Arg1514Leufs*36] is a pathogenic deep intronic variant that results in a retina-specific 345-nucleotide pseudoexon inclusion. Through analysis of the Irish STGD1 cohort, 25 individuals across 18 pedigrees harbour ABCA4 c.4539+2028C>T and another pathogenic variant. This includes, to the best of our knowledge, the only two homozygous patients identified to date. This provides important evidence of variant pathogenicity for this deep intronic variant, highlighting the value of homozygotes for variant interpretation. 15 other heterozygous occurrences of this variant in patients have been reported globally, indicating significant enrichment in the Irish population. We provide detailed genetic and clinical characterization of these patients, illustrating that ABCA4 c.4539+2028C>T is a variant of mild to intermediate severity. These results have important implications for unresolved STGD1 patients globally with approximately 10% of the population in some western countries claiming Irish heritage. This study exemplifies that detection and characterization of founder variants is a diagnostic imperative.

  • Willems, R. M., Ozyurek, A., & Hagoort, P. (2007). When language meets action: The neural integration of gesture and speech. Cerebral Cortex, 17(10), 2322-2333. doi:10.1093/cercor/bhl141.

    Abstract

    Although generally studied in isolation, language and action often co-occur in everyday life. Here we investigated one particular form of simultaneous language and action, namely speech and gestures that speakers use in everyday communication. In a functional magnetic resonance imaging study, we identified the neural networks involved in the integration of semantic information from speech and gestures. Verbal and/or gestural content could be integrated easily or less easily with the content of the preceding part of speech. Premotor areas involved in action observation (Brodmann area [BA] 6) were found to be specifically modulated by action information "mismatching" to a language context. Importantly, an increase in integration load of both verbal and gestural information into prior speech context activated Broca's area and adjacent cortex (BA 45/47). A classical language area, Broca's area, is not only recruited for language-internal processing but also when action observation is integrated with speech. These findings provide direct evidence that action and language processing share a high-level neural integration system.
  • Willems, R. M., & Hagoort, P. (2007). Neural evidence for the interplay between language, gesture, and action: A review. Brain and Language, 101(3), 278-289. doi:10.1016/j.bandl.2007.03.004.

    Abstract

    Co-speech gestures embody a form of manual action that is tightly coupled to the language system. As such, the co-occurrence of speech and co-speech gestures is an excellent example of the interplay between language and action. There are, however, other ways in which language and action can be thought of as closely related. In this paper we will give an overview of studies in cognitive neuroscience that examine the neural underpinnings of links between language and action. Topics include neurocognitive studies of motor representations of speech sounds, action-related language, sign language and co-speech gestures. It will be concluded that there is strong evidence on the interaction between speech and gestures in the brain. This interaction however shares general properties with other domains in which there is interplay between language and action.
  • Willems, R. M. (2007). The neural construction of a Tinkertoy [‘Journal club’ review]. The Journal of Neuroscience, 27, 1509-1510. doi:10.1523/JNEUROSCI.0005-07.2007.
  • Winter, B., Lupyan, G., Perry, L. K., Dingemanse, M., & Perlman, M. (2024). Iconicity ratings for 14,000+ English words. Behavior Research Methods, 56, 1640-1655. doi:10.3758/s13428-023-02112-6.

    Abstract

    Iconic words and signs are characterized by a perceived resemblance between aspects of their form and aspects of their meaning. For example, in English, iconic words include peep and crash, which mimic the sounds they denote, and wiggle and zigzag, which mimic motion. As a semiotic property of words and signs, iconicity has been demonstrated to play a role in word learning, language processing, and language evolution. This paper presents the results of a large-scale norming study for more than 14,000 English words conducted with over 1400 American English speakers. We demonstrate the utility of these ratings by replicating a number of existing findings showing that iconicity ratings are related to age of acquisition, sensory modality, semantic neighborhood density, structural markedness, and playfulness. We discuss possible use cases and limitations of the rating dataset, which is made publicly available.
  • Wolna, A., Szewczyk, J., Diaz, M., Domagalik, A., Szwed, M., & Wodniecka, Z. (2024). Domain-general and language-specific contributions to speech production in a second language: An fMRI study using functional localizers. Scientific Reports, 14: 57. doi:10.1038/s41598-023-49375-9.

    Abstract

For bilinguals, speaking in a second language (L2) compared to the native language (L1) is usually more difficult. In this study we asked whether the difficulty in L2 production reflects increased demands imposed on domain-general or core language mechanisms. We compared the brain response to speech production in L1 and L2 within two functionally-defined networks in the brain: the Multiple Demand (MD) network and the language network. We found that speech production in L2 was linked to a widespread increase of brain activity in the domain-general MD network. The language network did not show similarly robust differences in processing speech in the two languages; however, we found an increased response to L2 production in the language-specific portion of the left inferior frontal gyrus (IFG). To further explore our results, we looked at domain-general and language-specific responses within the brain structures postulated to form a Bilingual Language Control (BLC) network. Within this network, we found a robust increase in response to L2 in the domain-general, but also in some language-specific voxels, including in the left IFG. Our findings show that L2 production strongly engages domain-general mechanisms, but only affects language-sensitive portions of the left IFG. These results put constraints on the current model of bilingual language control by precisely disentangling the domain-general and language-specific contributions to the difficulty in speech production in L2.

  • Wolna, A., Szewczyk, J., Diaz, M., Domagalik, A., Szwed, M., & Wodniecka, Z. (2024). Tracking components of bilingual language control in speech production: An fMRI study using functional localizers. Neurobiology of Language, 5(2), 315-340. doi:10.1162/nol_a_00128.

    Abstract

    When bilingual speakers switch back to speaking in their native language (L1) after having used their second language (L2), they often experience difficulty in retrieving words in their L1. This phenomenon is referred to as the L2 after-effect. We used the L2 after-effect as a lens to explore the neural bases of bilingual language control mechanisms. Our goal was twofold: first, to explore whether bilingual language control draws on domain-general or language-specific mechanisms; second, to investigate the precise mechanism(s) that drive the L2 after-effect. We used a precision fMRI approach based on functional localizers to measure the extent to which the brain activity that reflects the L2 after-effect overlaps with the language network (Fedorenko et al., 2010) and the domain-general multiple demand network (Duncan, 2010), as well as three task-specific networks that tap into interference resolution, lexical retrieval, and articulation. Forty-two Polish–English bilinguals participated in the study. Our results show that the L2 after-effect reflects increased engagement of domain-general but not language-specific resources. Furthermore, contrary to previously proposed interpretations, we did not find evidence that the effect reflects increased difficulty related to lexical access, articulation, and the resolution of lexical interference. We propose that difficulty of speech production in the picture naming paradigm—manifested as the L2 after-effect—reflects interference at a nonlinguistic level of task schemas or a general increase of cognitive control engagement during speech production in L1 after L2.

  • Womelsdorf, T., Schoffelen, J.-M., Oostenveld, R., Singer, W., Desimone, R., Engel, A. K., & Fries, P. (2007). Modulation of neuronal interactions through neuronal synchronization. Science, 316, 1609-1612. doi:10.1126/science.1139597.

    Abstract

    Brain processing depends on the interactions between neuronal groups. Those interactions are governed by the pattern of anatomical connections and by yet unknown mechanisms that modulate the effective strength of a given connection. We found that the mutual influence among neuronal groups depends on the phase relation between rhythmic activities within the groups. Phase relations supporting interactions between the groups preceded those interactions by a few milliseconds, consistent with a mechanistic role. These effects were specific in time, frequency, and space, and we therefore propose that the pattern of synchronization flexibly determines the pattern of neuronal interactions.
  • Zettersten, M., Cox, C., Bergmann, C., Tsui, A. S. M., Soderstrom, M., Mayor, J., Lundwall, R. A., Lewis, M., Kosie, J. E., Kartushina, N., Fusaroli, R., Frank, M. C., Byers-Heinlein, K., Black, A. K., & Mathur, M. B. (2024). Evidence for infant-directed speech preference is consistent across large-scale, multi-site replication and meta-analysis. Open Mind, 8, 439-461. doi:10.1162/opmi_a_00134.

    Abstract

    There is substantial evidence that infants prefer infant-directed speech (IDS) to adult-directed speech (ADS). The strongest evidence for this claim has come from two large-scale investigations: i) a community-augmented meta-analysis of published behavioral studies and ii) a large-scale multi-lab replication study. In this paper, we aim to improve our understanding of the IDS preference and its boundary conditions by combining and comparing these two data sources across key population and design characteristics of the underlying studies. Our analyses reveal that both the meta-analysis and multi-lab replication show moderate effect sizes (d ≈ 0.35 for each estimate) and that both of these effects persist when relevant study-level moderators are added to the models (i.e., experimental methods, infant ages, and native languages). However, while the overall effect size estimates were similar, the two sources diverged in the effects of key moderators: both infant age and experimental method predicted IDS preference in the multi-lab replication study, but showed no effect in the meta-analysis. These results demonstrate that the IDS preference generalizes across a variety of experimental conditions and sampling characteristics, while simultaneously identifying key differences in the empirical picture offered by each source individually and pinpointing areas where substantial uncertainty remains about the influence of theoretically central moderators on IDS preference. Overall, our results show how meta-analyses and multi-lab replications can be used in tandem to understand the robustness and generalizability of developmental phenomena.

  • Zhang, Y., Ding, R., Frassinelli, D., Tuomainen, J., Klavinskis-Whiting, S., & Vigliocco, G. (2023). The role of multimodal cues in second language comprehension. Scientific Reports, 13: 20824. doi:10.1038/s41598-023-47643-2.

    Abstract

    In face-to-face communication, multimodal cues such as prosody, gestures, and mouth movements can play a crucial role in language processing. While several studies have addressed how these cues contribute to native (L1) language processing, their impact on non-native (L2) comprehension is largely unknown. Comprehension of naturalistic language by L2 comprehenders may be supported by the presence of (at least some) multimodal cues, as these provide correlated and convergent information that may aid linguistic processing. However, it is also the case that multimodal cues may be less used by L2 comprehenders because linguistic processing is more demanding than for L1 comprehenders, leaving more limited resources for the processing of multimodal cues. In this study, we investigated how L2 comprehenders use multimodal cues in naturalistic stimuli (while participants watched videos of a speaker), as measured by electrophysiological responses (N400) to words, and whether there are differences between L1 and L2 comprehenders. We found that prosody, gestures, and informative mouth movements each reduced the N400 in L2, indexing easier comprehension. Nevertheless, L2 participants showed weaker effects for each cue compared to L1 comprehenders, with the exception of meaningful gestures and informative mouth movements. These results show that L2 comprehenders focus on specific multimodal cues – meaningful gestures that support meaningful interpretation and mouth movements that enhance the acoustic signal – while using multimodal cues to a lesser extent than L1 comprehenders overall.

  • Wu, S., Zhao, J., de Villiers, J., Liu, X. L., Rolfhus, E., Sun, X. N., Li, X. Y., Pan, H., Wang, H. W., Zhu, Q., Dong, Y. Y., Zhang, Y. T., & Jiang, F. (2023). Prevalence, co-occurring difficulties, and risk factors of developmental language disorder: First evidence for Mandarin-speaking children in a population-based study. The Lancet Regional Health - Western Pacific, 34: 100713. doi:10.1016/j.lanwpc.2023.100713.

    Abstract

    Background: Developmental language disorder (DLD) is a condition that significantly affects children's achievement but has been understudied. We aim to estimate the prevalence of DLD in Shanghai, compare the co-occurrence of difficulties between children with DLD and those with typical development (TD), and investigate the early risk factors for DLD.

    Methods: We estimated DLD prevalence using data from a population-based survey with a cluster random sampling design in Shanghai, China. A subsample of children (aged 5-6 years) received an onsite evaluation, and each child was categorized as TD or DLD. The proportions of children with socio-emotional behavior (SEB) difficulties, low non-verbal IQ (NVIQ), and poor school readiness were calculated among children with TD and DLD. We used multiple imputation to address the missing values of risk factors. Univariate and multivariate regression models adjusted with sampling weights were used to estimate the correlation of each risk factor with DLD.

    Findings: Of 1082 children who were approached for the onsite evaluation, 974 (90.0%) completed the language ability assessments, of whom 74 met the criteria for DLD, resulting in a prevalence of 8.5% (95% CI 6.3-11.5) when adjusted with sampling weights. Compared with TD children, children with DLD had higher rates of concurrent difficulties, including SEB (total difficulties score at-risk: 156 [17.3%] of 900 TD vs. 28 [37.8%] of 74 DLD, p < 0.0001), low NVIQ (3 [0.3%] of 900 TD vs. 8 [10.8%] of 74 DLD, p < 0.0001), and poor school readiness (71 [7.9%] of 900 TD vs. 13 [17.6%] of 74 DLD, p = 0.0040). After accounting for all other risk factors, a higher risk of DLD was associated with a lack of parent-child interaction diversity (adjusted odds ratio [aOR] = 3.08, 95% CI = 1.29-7.37; p = 0.012) and lower kindergarten levels (compared to demonstration and first level: third level (aOR = 6.15, 95% CI = 1.92-19.63; p = 0.0020)).

    Interpretation: The prevalence of DLD and its co-occurrence with other difficulties suggest the need for further attention. Family and kindergarten factors were found to contribute to DLD, suggesting that multi-sector coordinated efforts are needed to better identify and serve DLD populations at home, in schools, and in clinical settings.

    Funding: The study was supported by Shanghai Municipal Education Commission (No. 2022you1-2, D1502), the Innovative Research Team of High-level Local Universities in Shanghai (No. SHSMU-ZDCX20211900), Shanghai Municipal Health Commission (No.GWV-10.1-XK07), and the National Key Research and Development Program of China (No. 2022YFC2705201).
  • Zhou, H., Van der Ham, S., De Boer, B., Bogaerts, L., & Raviv, L. (2024). Modality and stimulus effects on distributional statistical learning: Sound vs. sight, time vs. space. Journal of Memory and Language, 138: 104531. doi:10.1016/j.jml.2024.104531.

    Abstract

    Statistical learning (SL) is postulated to play an important role in the process of language acquisition as well as in other cognitive functions. It was found to enable learning of various types of statistical patterns across different sensory modalities. However, few studies have distinguished distributional SL (DSL) from sequential and spatial SL, or examined DSL across modalities using comparable tasks. Considering the relevance of such findings to the nature of SL, the current study investigated the modality- and stimulus-specificity of DSL. Using a within-subject design we compared DSL performance in auditory and visual modalities. For each sensory modality, two stimulus types were used: linguistic versus non-linguistic auditory stimuli and temporal versus spatial visual stimuli. In each condition, participants were exposed to stimuli that varied in their length as they were drawn from two categories (short versus long). DSL was assessed using a categorization task and a production task. Results showed that learners’ performance was only correlated for tasks in the same sensory modality. Moreover, participants were better at categorizing the temporal signals in the auditory conditions than in the visual condition, where in turn an advantage of the spatial condition was observed. In the production task participants exaggerated signal length more for linguistic signals than non-linguistic signals. Together, these findings suggest that DSL is modality- and stimulus-sensitive.

  • Ziegler, A., DeStefano, A. L., König, I. R., Bardel, C., Brinza, D., Bull, S., Cai, Z., Glaser, B., Jiang, W., Lee, K. E., Li, C. X., Li, J., Li, X., Majoram, P., Meng, Y., Nicodemus, K. K., Platt, A., Schwarz, D. F., Shi, W., Shugart, Y. Y. and 7 moreZiegler, A., DeStefano, A. L., König, I. R., Bardel, C., Brinza, D., Bull, S., Cai, Z., Glaser, B., Jiang, W., Lee, K. E., Li, C. X., Li, J., Li, X., Majoram, P., Meng, Y., Nicodemus, K. K., Platt, A., Schwarz, D. F., Shi, W., Shugart, Y. Y., Stassen, H. H., Sun, Y. V., Won, S., Wang, W., Wahba, G., Zagaar, U. A., & Zhao, Z. (2007). Data mining, neural nets, trees–problems 2 and 3 of Genetic Analysis Workshop 15. Genetic Epidemiology, 31(Suppl 1), S51-S60. doi:10.1002/gepi.20280.

    Abstract

    Genome-wide association studies using thousands to hundreds of thousands of single nucleotide polymorphism (SNP) markers and region-wide association studies using a dense panel of SNPs are already in use to identify disease susceptibility genes and to predict disease risk in individuals. Because these tasks become increasingly important, three different data sets were provided for the Genetic Analysis Workshop 15, thus allowing examination of various novel and existing data mining methods for both classification and identification of disease susceptibility genes, gene by gene or gene by environment interaction. The approach most often applied in this presentation group was random forests because of its simplicity, elegance, and robustness. It was used for prediction and for screening for interesting SNPs in a first step. The logistic tree with unbiased selection approach appeared to be an interesting alternative to efficiently select interesting SNPs. Machine learning, specifically ensemble methods, might be useful as pre-screening tools for large-scale association studies because they can be less prone to overfitting, can be less computer processor time intensive, can easily include pair-wise and higher-order interactions compared with standard statistical approaches and can also have a high capability for classification. However, improved implementations that are able to deal with hundreds of thousands of SNPs at a time are required.
  • Zioga, I., Zhou, Y. J., Weissbart, H., Martin, A. E., & Haegens, S. (2024). Alpha and beta oscillations differentially support word production in a rule-switching task. eNeuro, 11(4): ENEURO.0312-23.2024. doi:10.1523/ENEURO.0312-23.2024.

    Abstract

Research into the role of brain oscillations in basic perceptual and cognitive functions has suggested that the alpha rhythm reflects functional inhibition while the beta rhythm reflects neural ensemble (re)activation. However, little is known regarding the generalization of these proposed fundamental operations to linguistic processes, such as speech comprehension and production. Here, we recorded magnetoencephalography in participants performing a novel rule-switching paradigm. Specifically, Dutch native speakers had to produce an alternative exemplar from the same category or a feature of a given target word embedded in spoken sentences (e.g., for the word “tuna”, an exemplar from the same category—“seafood”—would be “shrimp”, and a feature would be “pink”). A cue indicated the task rule—exemplar or feature—either before (pre-cue) or after (retro-cue) listening to the sentence. Alpha power during the working memory delay was lower for retro-cue compared with that for pre-cue in the left hemispheric language-related regions. Critically, alpha power negatively correlated with reaction times, suggestive of alpha facilitating task performance by regulating inhibition in regions linked to lexical retrieval. Furthermore, we observed a different spatiotemporal pattern of beta activity for exemplars versus features in the right temporoparietal regions, in line with the proposed role of beta in recruiting neural networks for the encoding of distinct categories. Overall, our study provides evidence for the generalizability of the role of alpha and beta oscillations from perceptual to more complex linguistic processes and offers a novel task to investigate links between rule-switching, working memory, and word production.
  • Zioga, I., Weissbart, H., Lewis, A. G., Haegens, S., & Martin, A. E. (2023). Naturalistic spoken language comprehension is supported by alpha and beta oscillations. The Journal of Neuroscience, 43(20), 3718-3732. doi:10.1523/JNEUROSCI.1500-22.2023.

    Abstract

    Brain oscillations are prevalent in all species and are involved in numerous perceptual operations. α oscillations are thought to facilitate processing through the inhibition of task-irrelevant networks, while β oscillations are linked to the putative reactivation of content representations. Can the proposed functional role of α and β oscillations be generalized from low-level operations to higher-level cognitive processes? Here we address this question focusing on naturalistic spoken language comprehension. Twenty-two (18 female) Dutch native speakers listened to stories in Dutch and French while MEG was recorded. We used dependency parsing to identify three dependency states at each word: the number of (1) newly opened dependencies, (2) dependencies that remained open, and (3) resolved dependencies. We then constructed forward models to predict α and β power from the dependency features. Results showed that dependency features predict α and β power in language-related regions beyond low-level linguistic features. Left temporal, fundamental language regions are involved in language comprehension in α, while frontal and parietal, higher-order language regions, and motor regions are involved in β. Critically, α- and β-band dynamics seem to subserve language comprehension tapping into syntactic structure building and semantic composition by providing low-level mechanistic operations for inhibition and reactivation processes. Because of the temporal similarity of the α-β responses, their potential functional dissociation remains to be elucidated. Overall, this study sheds light on the role of α and β oscillations during naturalistic spoken language comprehension, providing evidence for the generalizability of these dynamics from perceptual to complex linguistic processes.
  • Zora, H., Wester, J. M., & Csépe, V. (2023). Predictions about prosody facilitate lexical access: Evidence from P50/N100 and MMN components. International Journal of Psychophysiology, 194: 112262. doi:10.1016/j.ijpsycho.2023.112262.

    Abstract

    Research into the neural foundation of perception asserts a model where top-down predictions modulate the bottom-up processing of sensory input. Despite becoming increasingly influential in cognitive neuroscience, the precise account of this predictive coding framework remains debated. In this study, we aim to contribute to this debate by investigating how predictions about prosody facilitate speech perception, and to shed light especially on lexical access influenced by simultaneous predictions in different domains, inter alia, prosodic and semantic. Using a passive auditory oddball paradigm, we examined neural responses to prosodic changes, leading to a semantic change as in Dutch nouns canon [ˈkaːnɔn] ‘cannon’ vs kanon [kaːˈnɔn] ‘canon’, and used acoustically identical pseudowords as controls. Results from twenty-eight native speakers of Dutch (age range 18–32 years) indicated an enhanced P50/N100 complex to prosodic change in pseudowords as well as an MMN response to both words and pseudowords. The enhanced P50/N100 response to pseudowords is claimed to indicate that all relevant auditory information is still processed by the brain, whereas the reduced response to words might reflect the suppression of information that has already been encoded. The MMN response to pseudowords and words, on the other hand, is best justified by the unification of previously established prosodic representations with sensory and semantic input respectively. This pattern of results is in line with the predictive coding framework acting on multiple levels and is of crucial importance to indicate that predictions about linguistic prosodic information are utilized by the brain as early as 50 ms.