Huettig, F., & Hulstijn, J. (2025). The Enhanced Literate Mind Hypothesis. Topics in Cognitive Science, 17(4), 909-918. doi:10.1111/tops.12731.
Abstract
In the present paper we describe the Enhanced Literate Mind (ELM) hypothesis. As individuals learn to read and write, they are, from then on, exposed to extensive written-language input and become literate. We propose that the acquisition and proficient processing of written language (‘literacy’) lead both to increased language knowledge and to enhanced language and non-language (perceptual and cognitive) skills. We also suggest that all neurotypical native language users, including illiterate, low literate, and high literate individuals, share a Basic Language Cognition (BLC) in the domain of oral informal language. Finally, we discuss the possibility that the acquisition of ELM leads to some degree of ‘knowledge parallelism’ between BLC and ELM in literate language users, which has implications for empirical research on individual and situational differences in spoken language processing. -
Hustá, C., Meyer, A. S., & Drijvers, L. (2025). Using Rapid Invisible Frequency Tagging (RIFT) to probe the neural interaction between representations of speech planning and comprehension. Neurobiology of Language, 6: nol_a_00171. doi:10.1162/nol_a_00171.
Abstract
Interlocutors often use the semantics of comprehended speech to inform the semantics of planned speech. Do representations of the comprehension and planning stimuli interact? In this EEG study, we used rapid invisible frequency tagging (RIFT) to better understand the attentional distribution to representations of comprehension and speech planning stimuli, and how they interact in the neural signal. To do this, we leveraged the picture-word interference (PWI) paradigm with delayed naming, where participants simultaneously comprehend auditory distractors (auditory [f1]; tagged at 54 Hz) while preparing to name related or unrelated target pictures (visual [f2]; tagged at 68 Hz). RIFT elicits steady-state evoked potentials, which reflect allocation of attention to the tagged stimuli. When representations of the tagged stimuli interact, increased power has been observed at the intermodulation frequency resulting from an interaction of the base frequencies (f2 ± f1; Drijvers et al., 2021). Our results showed clear power increases at 54 Hz and 68 Hz during the tagging window, but no power difference between the related and unrelated condition. Interestingly, we observed a larger power difference in the intermodulation frequency (compared to baseline) in the unrelated compared to the related condition (68 Hz − 54 Hz: 14 Hz), indicating stronger interaction between unrelated auditory and visual representations. Our results go beyond standard PWI results by showing that participants’ difficulties in the related condition do not arise from allocating attention to the pictures or distractors. Instead, processing difficulties arise during interaction of the concepts or lemmas invoked by the two stimuli; thus, we conclude that interaction might be downregulated in the related condition.
Additional information
data and analysis scripts -
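For readers unfamiliar with frequency tagging, the intermodulation logic in the Hustá, Meyer, and Drijvers (2025) entry above can be illustrated with a minimal synthetic sketch: when two tagged signals combine only additively, the spectrum contains power at the tagging frequencies alone, whereas a multiplicative (interacting) combination also produces power at the intermodulation frequency f2 − f1 = 68 − 54 = 14 Hz. The sinusoids, sampling rate, and FFT below are invented for illustration; this is not the authors' EEG analysis pipeline.

```python
# Minimal sketch (synthetic signals, not the study's EEG pipeline): a purely
# additive combination of two tagged signals has no power at f2 - f1, whereas a
# multiplicative (interacting) combination does.
import numpy as np

fs = 1000.0                      # assumed sampling rate (Hz)
t = np.arange(0, 2.0, 1 / fs)    # 2 s of signal
f1, f2 = 54.0, 68.0              # auditory and visual tagging frequencies (Hz)

auditory = np.sin(2 * np.pi * f1 * t)
visual = np.sin(2 * np.pi * f2 * t)

def power_at(signal, freq):
    """Spectral power at the FFT bin closest to `freq`."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    return spectrum[np.argmin(np.abs(freqs - freq))]

for label, mix in [("additive", auditory + visual),
                   ("multiplicative", auditory * visual)]:
    print(f"{label:>14}: power at 14 Hz = {power_at(mix, f2 - f1):8.1f}, "
          f"at 54 Hz = {power_at(mix, f1):8.1f}")
```

Only the multiplicative mix shows power at 14 Hz, which is the signature of interacting representations that the study tests for in the neural signal.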
Hustá, C., & Meyer, A. S. (2025). Capturing the attentional trade-off between speech planning and comprehension. Journal of Cognitive Neuroscience. Advance online publication. doi:10.1162/JOCN.a.97.
Abstract
In conversation, future speakers often plan speech simultaneously with comprehension, which means that they must divide attentional resources between these processes. In this EEG study, we used responses to linguistic attention probes (i.e., syllable “BA” presented during spoken sentences) to track temporal variations in attention to comprehension. Participants were asked to listen to prerecorded sentences with expected or unexpected sentence-final words. Each sentence was presented twice, once with and once without the attention probe starting 100 msec after the target word onset. Participants saw a picture 50 msec before the target word. Depending on the test block (picture naming or button press), participants either named the picture or pressed the space bar, both after an 850-msec delay. The probes elicited a negative potential approximately 100 msec after probe onset (i.e., an attention probe effect) in all probe conditions. Unexpectedly, neither word expectancy nor speech planning influenced the timing or strength of the attention probe effect. This indicates that expectancy of words in Dutch does not affect the allocation of attention toward these words 100 msec after their onset (i.e., the time of the probe presentation). Interestingly, engaging in speech planning does not seem to divert attentional resources away from comprehension at the moment of probe presentation. These findings imply that listeners are able to effectively distribute their attentional resources between comprehension and speech planning and carry out these processes at the same time. Considering these unexpected findings, using attention probes might not be the best approach to capture variations in temporal attention in dual-task paradigms. -
Jadoul, Y., Hersh, T. A., Fernández Domingos, E., Gamba, M., Favaro, L., & Ravignani, A. (2025). An evolutionary model of rhythmic accelerando in animal vocal signalling. PLOS Computational Biology, 21(4): e1013011. doi:10.1371/journal.pcbi.1013011.
Abstract
Animal acoustic communication contains many structural features. Among these, temporal structure, or rhythmicity, is increasingly tested empirically and modelled quantitatively. Accelerando is a rhythmic structure which consists of temporal intervals increasing in rate over a sequence. Why this particular vocal behaviour is widespread in many different animal lineages, and how it evolved, is so far unknown. Here, we use evolutionary game theory and computer simulations to link two rhythmic aspects of animal communication, acceleration and overlap: We test whether rhythmic accelerando could evolve under a pressure for acoustic overlap in time. Our models show that higher acceleration values result in a higher payoff, driven by the higher relative overlap between sequences. The addition of a cost to the payoff matrix models a physiological disadvantage to high acceleration rates and introduces a divergence between an individual’s incentive and the overall payoff of the population. Analysis of the invasion dynamics of acceleration strategies shows a stable, non-invadable range of strategies for moderate acceleration levels. Our computational simulations confirm these results: A simple selective pressure to maximise the expected overlap, while minimising the associated physiological cost, causes an initially isochronous population to evolve towards producing increasingly accelerating sequences until a population-wide equilibrium of rhythmic accelerando is reached. These results are robust to a broad range of parameter values. Overall, our analyses show that if overlap is beneficial, emergent evolutionary dynamics allow a population to gradually start producing accelerating sequences and reach a stable state of moderate acceleration. Finally, our modelling results closely match empirical data recorded from an avian species showing rhythmic accelerando, the African penguin. This shows the productive interplay between theoretical and empirical biology. -
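To make the selection logic in the Jadoul et al. (2025) entry above concrete, the toy simulation below evolves a population of acceleration strategies under a payoff with a saturating benefit (standing in for expected overlap) and a quadratic physiological cost. The benefit and cost functions, population size, and selection rule are invented for illustration and are not the published game-theoretic model, which computes overlap between actual interval sequences and analyses invasion dynamics.

```python
# Toy sketch only (invented payoff, not the published model): starting from an
# isochronous population, selection favours acceleration until its marginal
# benefit no longer outweighs its marginal physiological cost.
import random

def payoff(acceleration, half_saturation=0.3, cost_weight=0.6):
    benefit = acceleration / (acceleration + half_saturation)  # saturating "overlap" benefit
    cost = cost_weight * acceleration ** 2                     # physiological cost
    return benefit - cost

random.seed(1)
population = [0.0] * 200                       # everyone starts isochronous

for generation in range(200):
    ranked = sorted(population, key=payoff)    # rank strategies by payoff
    parents = ranked[len(ranked) // 2:]        # keep the better-scoring half
    population = [max(0.0, random.choice(parents) + random.gauss(0.0, 0.02))
                  for _ in population]         # reproduce with small mutations

print(f"mean acceleration after selection: {sum(population) / len(population):.2f}")
```

The population climbs away from isochrony and settles at a moderate acceleration level, mirroring the qualitative equilibrium described in the abstract.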
Karaca, F., Brouwer, S., Unsworth, S., & Huettig, F. (2025). Child heritage speakers’ reading skills in the majority language and exposure to the heritage language support morphosyntactic prediction in speech. Bilingualism: Language and Cognition. Advance online publication. doi:10.1017/S1366728925000331.
Abstract
We examined the morphosyntactic prediction ability of child heritage speakers and the role of reading skills and language experience in predictive processing. Using visual world eye-tracking, we focused on predictive use of case-marking cues in Turkish with monolingual (N=49, Mage=83 months) and heritage children, who were early bilinguals of Turkish and Dutch (N=30, Mage=90 months). We found quantitative differences in magnitude of the prediction ability of monolingual and heritage children; however, their overall prediction ability was on par. The heritage speakers’ prediction ability was facilitated by their reading skills in Dutch, but not in Turkish, as well as by their heritage language exposure, but not by engagement in literacy activities. These findings emphasize the facilitatory role of reading skills and spoken language experience in predictive processing. This study is the first to show that in a developing bilingual mind, effects of reading-on-prediction can take place across modalities and across languages.
Additional information
data and analysis scripts -
Karaca, F. (2025). On knowing what lies ahead: The interplay of prediction, experience, and proficiency. PhD Thesis, Radboud University Nijmegen, Nijmegen.
Additional information
link to Radboud Repository -
Karadöller*, D. Z., Sümer*, B., & Özyürek, A. (2025). Advancing the multimodal language acquisition framework through collaborative dialogue. First Language. Advance online publication. doi:10.1177/01427237251379276.
Abstract
*Joint first authorship.
Language acquisition unfolds within inherently multimodal contexts, where communication is expressed and perceived through diverse channels embedded in social interactions. For hearing children, this involves integrating speech with gesture; for deaf children, language develops through fully visual modalities. Such observations necessitate a paradigm shift from speech-centric models to a holistic framework that equally values all modalities, whether in spoken or signed languages. This framework must account not only for the multimodal scaffolding of input and interaction but also for individual and contextual diversity, including the cultural and cognitive variabilities children bring to language learning contexts. Responding to commentaries on our target article, this paper refines and expands the multimodal language framework, emphasizing its capacity to integrate the interactive richness of input and the heterogeneous contexts and individual variations shaping language acquisition. -
Karadöller, D. Z., Demir-Lira, Ö. E., & Göksun, T. (2025). Full-term children with lower vocabulary scores receive more multimodal math input than preterm children. Journal of Cognition and Development, 26(4), 630-650. doi:10.1080/15248372.2025.2470245.
Abstract
One of the earliest sources of mathematical input arises in dyadic parent–child interactions. However, the emphasis has been on parental input only in speech and how input varies across different environmental and child-specific factors remains largely unexplored. Here, we investigated the relationship among parental math input modality and type, children’s gestational status (being preterm vs. full-term born), and vocabulary development. Using book-reading as a medium for parental math input in dyadic interaction, we coded specific math input elicited by Turkish-speaking parents and their 26-month-old children (N = 58, 24 preterms) for speech-only and multimodal (speech and gestures combined) input. Results showed that multimodal math input, as opposed to speech-only math input, was uniquely associated with gestational status, expressive vocabulary, and the interaction between the two. Full-term children with lower expressive vocabulary scores received more multimodal input compared to their preterm peers. However, there was no association between expressive vocabulary and multimodal math input for preterm children. Moreover, cardinality was the most frequent type for both speech-only and multimodal input. These findings suggest that the specific type of multimodal math input can be produced as a function of children’s gestational status and vocabulary development. -
Kekes-Szabo, S., Clough, S., Brown-Schmidt, S., & Duff, M. C. (2025). Multiparty communication: A new direction in characterizing the impact of traumatic brain injury on social communication. American Journal of Speech-Language Pathology, 34(S3), 1896-1909. doi:10.1044/2025_AJSLP-24-00151.
Abstract
Purpose: The purpose of this viewpoint is to advocate for increased study of common ground and audience design processes in multiparty communication in traumatic brain injury (TBI).
Method: Building on discussions at the 2024 International Cognitive-Communication Disorders Conference, we review common ground and audience design processes in dyadic and multiparty communication. We discuss how the diffuse profiles of neural and cognitive deficits place individuals with TBI at increased risk of difficulty keeping track of who knows what in group settings and using that knowledge to flexibly adapt their communication behaviors.
Results: We routinely engage in social communication in groups of three or more people at work, school, and social functions. While academic, vocational, and interpersonal domains are all areas where individuals with TBI are at risk for negative outcomes, we know very little about the impact of TBI on group, or multiparty, communication.
Conclusions: The empirical study of common ground and audience design in multiparty communication in TBI presents a promising new direction in characterizing the impact of TBI on social communication, uncovering the underlying mechanisms of cognitive-communication disorders, and may lead to new interventions aimed at improving success in navigating group communication at work and school, and in interpersonal relationships. -
Kempe, V., & Raviv, L. (2025). No evidence for generational differences in the conventionalisation of face emojis. Computers in Human Behavior Reports, 19: 100750. doi:10.1016/j.chbr.2025.100750.
Abstract
Despite strong popular beliefs that older users misunderstand emojis, empirical evidence is equivocal. Here we propose that different generations of users may vary in the degree of intra-generational agreement on emoji meanings (i.e., how much people from the same generation agree on what an emoji means). Inspired by research in cultural evolution demonstrating a positive association between social network size and the conventionalisation of signs, we hypothesised that younger users would show stronger agreement on emoji meanings because they tend to be embedded in larger online social networks than older users. We examined generational differences in intra-generational agreement on emoji interpretations, taking into account variability arising from different emoji renderings across platforms. In a pre-registered online study, 394 respondents from the culturally defined generations of GenZ (n = 152, age 13–24 years), Millennials (n = 149, age 25–40 years), and GenX/BabyBoomers (n = 93, age 41–76 years) produced three words to describe the meanings of 24 target face emojis and 10 popular filler emojis. Frequentist and Bayesian analyses showed no generational differences in intra-generational response entropy and in the probability of selecting the most frequent meaning within one's generation. Exploratory analysis further showed that the most commonly provided emoji interpretations did not differ across generations, despite generational differences in social media usage patterns. Together, these findings suggest that different generations not only interpret face emojis in similar ways, but also show similar intra-generational agreement on emoji meanings, consistent with the idea that, after a decade of use, face emojis have become a widely conventionalised semiotic system accessible to digital media users regardless of age. -
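The two agreement measures named in the Kempe and Raviv (2025) entry above, response entropy and the probability of the most frequent meaning, can be computed as in the short sketch below. The emoji interpretations are made up for illustration; the published study analysed these quantities with frequentist and Bayesian models rather than this direct calculation.

```python
# Minimal sketch (hypothetical responses, not the study's data): two measures of
# intra-generational agreement on an emoji's meaning. Lower entropy and a higher
# modal-response probability both indicate stronger conventionalisation.
from collections import Counter
from math import log2

def agreement_metrics(responses):
    counts = Counter(responses)
    probs = [n / len(responses) for n in counts.values()]
    entropy = -sum(p * log2(p) for p in probs)    # 0 bits when everyone agrees
    return entropy, max(probs)                    # (entropy, modal-response probability)

gen_z = ["awkward", "awkward", "cringe", "awkward", "nervous", "awkward"]
gen_x = ["awkward", "nervous", "awkward", "grimace", "awkward", "cringe"]

for label, responses in [("GenZ", gen_z), ("GenX/BabyBoomers", gen_x)]:
    entropy, p_modal = agreement_metrics(responses)
    print(f"{label}: entropy = {entropy:.2f} bits, modal-response probability = {p_modal:.2f}")
```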
Kidd, E., Garrido Rodriguez, G., Wilmoth, S., Garrido Guillén, J. E., & Nordlinger, R. (2025). How does speaking a free word order language influence sentence planning and production? Evidence from Pitjantjatjara (Pama‐Nyungan, Australia). Cognitive Science, 49(7): e70087. doi:10.1111/cogs.70087.
Abstract
Sentence production is a stage-like process of mapping a conceptual representation to the linear speech signal via grammatical rules. While the typological diversity of languages is vast and thus must necessarily influence sentence production, psycholinguistic studies of diverse languages are comparatively rare. Here, we present data from a sentence planning and production study in Pitjantjatjara, an Australian Indigenous language that has highly flexible word order. Forty-nine (N = 49) native speakers described pictures of two-participant scenes while their eye-movements were recorded. Participants produced all possible orders of agent, patient, and verb. There was a general preference to produce agent-initial orders, but word order was influenced by the semantic properties of agent and patient referents (± human). Analyses of participants’ eye-movements revealed early relational encoding of the entire event, whereby speakers distributed their attention between agent and patient referents in a manner that is different from that typically observed in languages that have more restricted word order options. Relational encoding was influenced by the word order that participants eventually produced. The results provide evidence to suggest that sentence planning in Pitjantjatjara is a hierarchical process, in which early relational encoding creates a holistic conceptualization of an event, possibly driven by pressure to decide upon one of many possible word orders.
Additional information
data and materials -
Knudsen, L., Guo, F., Sharoh, D., Huang, J., Blicher, J. U., Lund, T. E., Zhou, Y., Zhang, P., & Yang, Y. (2025). The laminar pattern of proprioceptive activation in human primary motor cortex. Cerebral Cortex, 35(4): bhaf076. doi:10.1093/cercor/bhaf076.
Abstract
The primary motor cortex (M1) is increasingly being recognized for its vital role in proprioceptive somatosensation. However, our current understanding of proprioceptive processing at the laminar scale is limited. Empirical findings in primates and rodents suggest a pronounced role of superficial cortical layers, but the involvement of deep layers has yet to be examined in humans. Submillimeter resolution functional magnetic resonance imaging (fMRI) has emerged in recent years, paving the way for studying layer-dependent activity in humans (laminar fMRI). In the present study, laminar fMRI was employed to investigate the influence of proprioceptive somatosensation on M1 deep layer activation using passive finger movements. Significant M1 deep layer activation was observed in response to proprioceptive stimulation across 10 healthy subjects using a vascular space occupancy (VASO)-sequence at 7 T. For further validation, two additional datasets were included which were obtained using a balanced steady-state free precession sequence with ultrahigh (0.3 mm) in-plane resolution, yielding converging results. These results were interpreted in the light of previous laminar fMRI studies and the active inference account of motor control. We propose that a considerable proportion of M1 deep layer activation is due to proprioceptive influence and that deep layers of M1 constitute a key component in proprioceptive circuits.
Additional information
supplementary materials -
Korbmacher, M., Vidal‐Pineiro, D., Wang, M.-Y., Van der Meer, D., Wolfers, T., Nakua, H., Eikefjord, E., Andreassen, O. A., Westlye, L. T., & Maximov, I. I. (2025). Cross‐sectional brain age assessments are limited in predicting future brain change. Human Brain Mapping, 46(6): e70203. doi:10.1002/hbm.70203.
Abstract
The concept of brain age (BA) describes an integrative imaging marker of brain health, often suggested to reflect aging processes. However, the degree to which cross-sectional MRI features, including BA, reflect past, ongoing, and future brain changes across different tissue types from macro- to microstructure remains controversial. Here, we use multimodal imaging data of 39,325 UK Biobank participants, aged 44–82 years at baseline and 2,520 follow-ups within 1.12–6.90 years to examine BA changes and their relationship to anatomical brain changes. We find insufficient evidence to conclude that BA reflects the rate of brain aging. However, modality-specific differences in brain ages reflect the state of the brain, highlighting diffusion and multimodal MRI brain age as potentially useful cross-sectional markers. -
Korbmacher, M., Tranfa, M., Pontillo, G., Van der Meer, D., Wang, M.-Y., Andreassen, O. A., Westlye, L. T., & Maximov, I. I. (2025). White matter microstructure links with brain, bodily and genetic attributes in adolescence, mid- and late life. NeuroImage, 310: 121132. doi:10.1016/j.neuroimage.2025.121132.
Abstract
Advanced diffusion magnetic resonance imaging (dMRI) allows one to probe and assess brain white matter (WM) organisation and microstructure in vivo. Various dMRI models with different theoretical and practical assumptions have been developed, representing partly overlapping characteristics of the underlying brain biology with potentially complementary value in the cognitive and clinical neurosciences. To which degree the different dMRI metrics relate to clinically relevant geno- and phenotypes is still debated. Hence, we investigate how tract-based and whole WM skeleton parameters from different dMRI approaches associate with clinically relevant and white matter-related phenotypes (sex, age, pulse pressure (PP), body-mass-index (BMI), brain asymmetry) and genetic markers in the UK Biobank (UKB, n=52,140) and the Adolescent Brain Cognitive Development (ABCD) Study (n=5,844). In general, none of the imaging approaches could explain all examined phenotypes, though the approaches were overall similar in explaining variability of the examined phenotypes. Nevertheless, particular diffusion parameters of the used dMRI approaches stood out in explaining some important phenotypes known to correlate with general human health outcomes. A multi-compartment Bayesian dMRI approach provided the strongest WM associations with age, and together with diffusion tensor imaging, the largest accuracy for sex-classifications. We find a similar pattern of metric and tract-dependent asymmetries across datasets, with stronger asymmetries in ABCD data. The magnitude of WM associations with polygenic scores as well as PP depended more on the sample, and likely age, than dMRI metrics. However, kurtosis was most indicative of BMI and potentially of bipolar disorder polygenic scores. We conclude that WM microstructure is differentially associated with clinically relevant pheno- and genotypes at different points in life. -
Kram, L., Neu, B., Ohlerth, A.-K., Schroeder, A., Meyer, B., Krieg, S. M., & Ille, S. (2025). The impact of linguistic complexity on feasibility and reliability of language mapping in aphasic glioma patients. Brain and Language, 262: 105534. doi:10.1016/j.bandl.2025.105534.
Abstract
Background
Reliable language mappings require sufficient language skills. This study evaluated whether linguistic task properties impact feasibility and reliability of navigated transcranial magnetic stimulation (nTMS)-based language mappings in aphasic glioma patients.
Methods
The effect of linguistic complexity on naming accuracy during baseline testing without stimulation and on the number of errors during nTMS was evaluated for 16 moderately and 4 severely expressive aphasic patients.
Results
During baseline testing, items acquired later in life or used less frequently, as well as multisyllabic, compound, and inanimate items, were more often named inaccurately. Even after removing these more complex items, less frequent and multisyllabic items were more error-prone during stimulation.
Conclusion
Higher linguistic item complexity was associated with decreased naming accuracy during baseline and resulted in a potentially higher false positive rate during nTMS in aphasic glioma patients. Thus, tailoring task complexity to individual performance capabilities may considerably support the preservation of residual functionality. -
Kumarage, S., Malko, A., & Kidd, E. (2025). Indexing prediction error during syntactic priming via pupillometry. Language, Cognition and Neuroscience, 40(7), 930-950. doi:10.1080/23273798.2025.2506634.
Abstract
Prediction is argued to be a key feature of human cognition, including in syntactic processing. Prediction error has been linked to dynamic changes in syntactic representations in theoretical models of language processing. This mechanism is termed error-based learning. Evidence from syntactic priming research supports error-based learning accounts; however, measuring prediction error itself has not been a research focus. Here we present a study exploring the use of pupillometry as a measure of prediction error during syntactic priming. We found a larger pupil response to the more complex and less expected passive structure. In addition, the pupil response predicted priming while being weakly dependent on changes in expectations over the experiment. We conclude that the pupil response is not only sensitive to syntactic complexity in comprehension, but there is some evidence that its magnitude is related to the adjustment of dynamic mental representations for syntax that lead to syntactic priming. -
Levinson, S. C. (2025). The interaction engine: Language in social life and human evolution. Cambridge: Cambridge University Press.
-
Levinson, S. C. (2025). The interaction engine. In M. C. Frank, & A. Majid (Eds.), Open Encyclopedia of Cognitive Science. Cambridge: MIT Press. doi:10.21428/e2759450.e3df24b2.
Abstract
The interaction engine is a label for the specialized capacities involved in human social interaction. The interaction engine comprises a suite of specific abilities that enable communication, for example, the capacity for rapid conversational turn-taking using multimodal signals, with systematic contingency between initiating signal and response—in which that contingency is often inference based rather than conventional in character. The properties of informal interaction subsumed within the broader “interaction engine” label appear to be largely universal, in contrast to the diversity of languages and many other aspects of social behavior. They seem to provide a platform that makes it possible for children to acquire their particular language and culture.
Additional information
link to Open Encyclopedia of Cognitive Science -
Lokhesh, N. N., Swaminathan, K., Shravan, G., Menon, D., Mishra, S., Nandanwar, A., & Mishra, C. (2025). Welcome to the library: Integrating social robots in Indian libraries. In O. Palinko, L. Bodenhagen, J.-J. Cabibihan, K. Fischer, S. Šabanović, K. Winkle, L. Behera, S. S. Ge, D. Chrysostomou, W. Jiang, & H. He (Eds.), Social Robotics: 16th International Conference, ICSR + AI 2024, Odense, Denmark, October 23–26, 2024, Proceedings (pp. 239-246). Singapore: Springer. doi:10.1007/978-981-96-3525-2_20.
Abstract
Libraries are very often considered the hallway to developing knowledge. However, the lack of adequate staff within Indian libraries makes catering to the visitors’ needs difficult. Previous systems that have sought to address libraries’ needs through automation have mostly been limited to storage and fetching aspects while lacking in their interaction aspect. We propose to address this issue by incorporating social robots within Indian libraries that can communicate and address the visitors’ queries in a multi-modal fashion attempting to make the experience more natural and appealing while helping reduce the burden on the librarians. In this paper, we propose and deploy a Furhat robot as a robot librarian by programming it on certain core librarian functionalities. We evaluate our system with a physical robot librarian (N = 26). The results show that the robot librarian was found to be very informative and overall left participants with a positive impression and preference. -
Mai, A., & Martin, A. E. (2025). Linguistic structure as a guiding principle for human neuroscience. Neuroscience & Biobehavioral Reviews, 177: 106322. doi:10.1016/j.neubiorev.2025.106322.
-
Mamus, E., Speed, L. J., Ortega, G., Majid, A., & Özyürek, A. (2025). Gestural and verbal evidence of conceptual representation differences in blind and sighted individuals. Cognitive Science, 49: 10. doi:10.1111/cogs.70125.
Abstract
This preregistered study examined whether visual experience influences conceptual representations by examining both gestural expression and feature listing. Gestures—mostly driven by analog mappings of visuospatial and motoric experiences onto the body—offer a unique window into conceptual representations and provide complementary information not offered by language-based features, which have been the focus of previous work. Thirty congenitally or early blind and 30 sighted Turkish speakers produced silent gestures and features for concepts from semantic categories that differentially rely on experience in visual (non-manipulable objects and animals) and motor (manipulable objects) information. Blind individuals were less likely than sighted individuals to produce gestures for non-manipulable objects and animals, but not for manipulable objects. Overall, the tendency to use a particular gesture strategy for specific semantic categories was similar across groups. However, blind participants relied less on drawing and personification strategies depicting visuospatial aspects of concepts than sighted participants. Feature-listing revealed that blind participants share considerable conceptual knowledge with sighted participants, but their understanding differs in fine-grained details, particularly for animals. Thus, while concepts appear broadly similar in blind and sighted individuals, this study reveals nuanced differences, too, highlighting the intricate role of visual experience in conceptual representations. -
Mangnus, M., Koch, S. B. J., Cai, K., Greidanus Romaneli, M., Hagoort, P., Bašnáková, J., & Stolk, A. (2025). Preserved spontaneous mentalizing amid reduced intersubject variability in autism during a movie narrative. Biological Psychiatry: Cognitive Neuroscience and Neuroimaging, 10, 1057-1066. doi:10.1016/j.bpsc.2024.10.007.
Abstract
Background
While individuals with autism often face challenges in everyday social interactions, they may demonstrate proficiency in structured theory of mind (ToM) tasks that assess their ability to infer others’ mental states. Using functional magnetic resonance imaging and pupillometry, we investigated whether these discrepancies stem from diminished spontaneous mentalizing or broader difficulties in unstructured contexts.
Methods
Fifty-two adults diagnosed with autism and 52 neurotypical control participants viewed the animated short Partly Cloudy, a nonverbal animated film with a dynamic social narrative known to engage the ToM brain network during specific scenes. Analysis focused on comparing brain and pupil responses to these ToM events. Additionally, dynamic intersubject correlations were used to explore the variability of these responses throughout the film.
Results
Both groups showed similar brain and pupil responses to ToM events and provided comparable descriptions of the characters’ mental states. However, participants with autism exhibited significantly stronger correlations in their responses across the film’s social narrative, indicating reduced interindividual variability. This distinct pattern emerged well before any ToM events and involved brain regions beyond the ToM network.
Conclusions
Our findings provide functional evidence of spontaneous mentalizing in autism, demonstrating this capacity in a context that affords but does not require mentalizing. Rather than responses to ToM events, a novel neurocognitive signature—interindividual variability in brain and pupil responses to evolving social narratives—differentiated neurotypical individuals from individuals with autism. These results suggest that idiosyncratic narrative processing in unstructured settings, a common element of everyday social interactions, may offer a more sensitive scenario for understanding the autistic mind. -
Matetovici, M., Spruit, A., Colonnesi, C., Garnier‐Villarreal, M., & Noom, M. (2025). Parent and child gender effects in the relationship between attachment and both internalizing and externalizing problems of children between 2 and 5 years old: A dyadic perspective. Infant Mental Health Journal: Infancy and Early Childhood, 46(4), 424-444. doi:10.1002/imhj.70002.
Abstract
Acknowledging that the parent–child attachment is a dyadic relationship, we investigated differences between pairs of parents and preschool children based on gender configurations in the association between attachment and problem behavior. We looked at mother–daughter, mother–son, father–daughter, and father–son dyads, but also compared mothers and fathers, daughters and sons, and same versus different gender pairs. We employed multigroup structural equation modeling to explore moderation effects of gender in a sample of 446 independent pairs of parents and preschool children (2–5 years old) from the Netherlands. A stronger association between both secure and avoidant attachment and internalizing problems was found for father–son dyads compared to father–daughter dyads. A stronger association between both secure and avoidant attachment and externalizing problems was found for mother–son dyads compared to mother–daughter and father–daughter dyads. Sons showed a stronger negative association between secure attachment and externalizing problems, a stronger positive association between avoidant attachment and externalizing problems, and a stronger negative association between secure attachment and internalizing problems compared to daughters. These results provide evidence for gender moderation and demonstrate that a dyadic approach can reveal patterns of associations that would not be recognized if parent and child gender effects were assessed separately.
Additional information
analysis code -
Mazzi, G., Ferrari, A., Mencaroni, M. L., Valzolgher, C., Tommasini, M., Pavani, F., & Benetti, S. (2025). Prior expectations guide multisensory integration during face-to-face communication. PLOS Computational Biology, 21: e1013468. doi:10.1371/journal.pcbi.1013468.
Abstract
Face-to-face communication relies on the seamless integration of multisensory signals, including voice, gaze, and head movements, to convey meaning effectively. This poses a fundamental computational challenge: optimally binding signals sharing the same communicative intention (e.g., looking at the addressee while speaking) and segregating unrelated signals (e.g., looking away while coughing), all within the rapid turn-taking dynamics of conversation. Critically, the computational mechanisms underlying this extraordinary feat remain largely unknown. Here, we cast face-to-face communication as a Bayesian Causal Inference problem to formally test whether prior expectations arbitrate between the integration and segregation of vocal and bodily signals. Specifically, we asked whether there is a stronger prior tendency to integrate audiovisual signals that convey the same communicative intention, thus establishing a crossmodal pragmatic correspondence. Additionally, we evaluated whether observers solve causal inference by adopting optimal Bayesian decision strategies or non-optimal approximate heuristics. In a spatial localization task, participants watched audiovisual clips of a speaker where the audio (voice) and the video (bodily cues) were sampled either from congruent positions or at increasing spatial disparities. Crucially, we manipulated the pragmatic correspondence of the signals: in a communicative condition, the speaker addressed the participant with their head, gaze and speech; in a non-communicative condition, the speaker kept the head down and produced a meaningless vocalization. We measured audiovisual integration through the ventriloquist effect, which quantifies how much the perceived audio position is misplaced towards the video position. Combining psychophysics with computational modelling, we show that observers solved audiovisual causal inference using non-optimal heuristics that nevertheless approximate optimal Bayesian inference with high accuracy. Remarkably, participants showed a stronger tendency to integrate vocal and bodily information when signals conveyed congruent communicative intent, suggesting that pragmatic correspondences enhance multisensory integration. Collectively, our findings provide novel and compelling evidence that face-to-face communication is shaped by deeply ingrained expectations about how multisensory signals should be structured and interpreted.
Additional information
supporting information -
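The Mazzi et al. (2025) entry above frames audiovisual binding as Bayesian Causal Inference. The sketch below implements the textbook model-averaging version of that idea (Gaussian likelihoods, a zero-centred spatial prior, and a prior probability of a common cause), in the style of Koerding et al. (2007); all parameter values are invented, and the decision strategies and parameters actually fitted in the paper may differ.

```python
# Sketch of textbook Bayesian Causal Inference for spatial ventriloquism
# (model averaging); invented parameters, not necessarily the authors' model.
import numpy as np

def perceived_audio(x_a, x_v, sigma_a=8.0, sigma_v=2.0, sigma_p=15.0, p_common=0.7):
    """Model-averaged auditory location estimate (degrees)."""
    var_a, var_v, var_p = sigma_a**2, sigma_v**2, sigma_p**2

    # Likelihood of the samples under a common cause (C=1) vs. two causes (C=2).
    var_c1 = var_a * var_v + var_a * var_p + var_v * var_p
    like_c1 = np.exp(-0.5 * ((x_a - x_v)**2 * var_p + x_a**2 * var_v + x_v**2 * var_a)
                     / var_c1) / (2 * np.pi * np.sqrt(var_c1))
    like_c2 = (np.exp(-0.5 * x_a**2 / (var_a + var_p)) / np.sqrt(2 * np.pi * (var_a + var_p))
               * np.exp(-0.5 * x_v**2 / (var_v + var_p)) / np.sqrt(2 * np.pi * (var_v + var_p)))
    post_c1 = like_c1 * p_common / (like_c1 * p_common + like_c2 * (1 - p_common))

    # Reliability-weighted auditory estimates under each causal structure.
    s_c1 = (x_a / var_a + x_v / var_v) / (1 / var_a + 1 / var_v + 1 / var_p)
    s_c2 = (x_a / var_a) / (1 / var_a + 1 / var_p)
    return post_c1 * s_c1 + (1 - post_c1) * s_c2   # average over causal structures

# A stronger common-cause prior pulls the perceived voice toward the visual cue:
for p in (0.3, 0.7):
    shift = perceived_audio(x_a=0.0, x_v=10.0, p_common=p)
    print(f"p_common = {p}: audio at 0 deg, video at 10 deg -> perceived audio at {shift:.1f} deg")
```

A stronger prior tendency to bind the signals produces a larger ventriloquist shift, which is how the study quantifies the effect of pragmatic correspondence on integration.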
Mazzini*, S., Seijdel*, N., & Drijvers*, L. (2025). Autistic individuals benefit from gestures during degraded speech comprehension. Autism, 29(2), 544-548. doi:10.1177/13623613241286570.
Abstract
*All authors contributed equally to this work
Meaningful gestures enhance degraded speech comprehension in neurotypical adults, but it is unknown whether this is the case for neurodivergent populations, such as autistic individuals. Previous research demonstrated atypical multisensory and speech-gesture integration in autistic individuals, suggesting that integrating speech and gestures may be more challenging and less beneficial for speech comprehension in adverse listening conditions in comparison to neurotypicals. Conversely, autistic individuals could also benefit from additional cues to comprehend speech in noise, as they encounter difficulties in filtering relevant information from noise. We here investigated whether gestural enhancement of degraded speech comprehension differs for neurotypical (n = 40, mean age = 24.1) compared to autistic (n = 40, mean age = 26.8) adults. Participants watched videos of an actress uttering a Dutch action verb in clear or degraded speech, with or without an accompanying gesture, and completed a free-recall task. Gestural enhancement was observed for both autistic and neurotypical individuals, and did not differ between groups. In contrast to previous literature, our results demonstrate that autistic individuals do benefit from gestures during degraded speech comprehension, similar to neurotypicals. These findings provide relevant insights to improve communication practices with autistic individuals and to develop new interventions for speech comprehension. -
Mazzini, S. (2025). Intra- and inter-brain synchrony dynamics during task-oriented face-to-face dialogue. PhD Thesis, Radboud University Nijmegen, Nijmegen.
-
McConnell, K., Hintz, F., & Meyer, A. S. (2025). Individual differences in online research: Comparing lab-based and online administration of a psycholinguistic battery of linguistic and domain-general skills. Behavior Research Methods, 57: 22. doi:10.3758/s13428-024-02533-x.
Abstract
Experimental psychologists and psycholinguists increasingly turn to online research for data collection due to the ease of sampling many diverse participants in parallel. Online research has shown promising validity and consistency, but is it suitable for all paradigms? Specifically, is it reliable enough for individual differences research? The current paper reports performance on 15 tasks from a psycholinguistic individual differences battery, including timed and untimed assessments of linguistic abilities, as well as domain-general skills. From a demographically homogenous sample of young Dutch people, 149 participants participated in the lab study, and 515 participated online. Our results indicate that there is no reason to assume that participants tested online will underperform compared to lab-based testing, though they highlight the importance of motivation and the potential for external help (e.g., through looking up answers) online. Overall, we conclude that there is reason for optimism in the future of online research into individual differences. -
McLean, B., & Dingemanse, M. (2025). A multi-methods toolkit for documentary research on ideophones. In J. P. Williams (Ed.), Capturing Expressivity: Contexts, Methods, and Techniques for Linguistic Research (pp. 74-107). Oxford: Oxford University Press. doi:10.1093/oso/9780192858931.003.0005.
Abstract
As lexicalized depictions, ideophones (also known as expressives or mimetics) differ fundamentally from other words both in the kinds of meanings they represent and the ways in which they represent them. This can make them difficult to capture using traditional methods for language description and documentation. We review some of the new and experimental techniques that have been used to elicit, describe, and analyse ideophones, and discuss how these can be used to address some of the unique challenges ideophones pose. They include stimulus-based elicitation; multimodal folk definitions; hybrid modes of analysis (combining images and text); and new ways of compiling and presenting multimodal ideophone corpora. We also review psycholinguistic methods for exploring the sensory properties of words and the organisation of the lexicon, such as sensory ratings and similarity judgment tasks, and discuss how these can contribute to our understanding of ideophone lexicons. Crucial to our approach is the combination of insights from multiple sources, the exploitation of polysemiotic resources (combining multiple modes of meaning making), and the integration of etic and emic perspectives. The discussion is structured around three key challenges: collecting ideophones, unravelling their slippery semantics, and representing them in ways that do justice to their special semiotic properties. The days when ideophones were just footnotes in grammars are long past. With more and more researchers working to document ideophones in languages around the world, and increasing interest in iconicity from across the language sciences, now is an excellent time to rethink the toolkit of documentary linguistics to make sure it can optimally deal with language in all its semiotic diversity. -
Ye, C., McQueen, J. M., & Bosker, H. R. (2025). A gradient effect of hand beat timing on spoken word recognition. In Proceedings of Interspeech 2025 (pp. 3793-3797). doi:10.21437/Interspeech.2025-116.
Abstract
Visual cues play a key role in speech perception. Beat gestures (i.e., simple up-and-down hand movements) usually co-occur with prominence in speech. Previous studies found that hand beat timing can indicate word stress. The present study further examines whether hand beat timing influences spoken word recognition in a gradient fashion. On watching videos of a native speaker of Dutch uttering a disyllabic word voornaam while making a hand beat, 40 participants needed to decide if they heard the word with initial (VOORnaam, "first name") or final stress (voorNAAM, "respectable"). Crucially, nine beat apex timings were equally distributed between the pitch peaks of the two syllables. Results exhibited a gradient effect of hand beat timing on stress perception, which appeared not to be susceptible to brief pretest feedback implying that visual cues should be ignored. Our findings provide novel evidence for audiovisual interaction and can inform gesture generation in conversational agents. -
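The gradient effect reported in the Ye, McQueen, and Bosker (2025) Interspeech entry above is the kind of pattern typically summarised with a psychometric function: the proportion of final-stress responses changes smoothly, rather than in a step, as the beat apex moves from the first to the second pitch peak. The response proportions, parameters, and logistic fit below are illustrative assumptions, not the study's data or analysis.

```python
# Illustrative sketch only (made-up proportions, not the study's data): fitting a
# logistic psychometric function to stress judgements across nine beat timings.
import numpy as np
from scipy.optimize import curve_fit

def psychometric(timing, slope, midpoint):
    """Probability of reporting final stress (voorNAAM) given beat-apex timing."""
    return 1.0 / (1.0 + np.exp(-slope * (timing - midpoint)))

# Beat-apex timing coded 0 (first syllable's pitch peak) to 1 (second syllable's).
timings = np.linspace(0.0, 1.0, 9)
p_final_stress = np.array([0.18, 0.22, 0.30, 0.40, 0.50, 0.61, 0.70, 0.78, 0.83])

(slope, midpoint), _ = curve_fit(psychometric, timings, p_final_stress, p0=[5.0, 0.5])
print(f"fitted slope = {slope:.2f}, category boundary at timing = {midpoint:.2f}")
# A monotonic, graded curve (rather than a step at the boundary) is what the
# entry above describes as a gradient effect of hand beat timing.
```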
Ye, C., McQueen, J. M., & Bosker, H. R. (2025). Effect of auditory cues to lexical stress on the visual perception of gestural timing. Attention, Perception & Psychophysics, 87, 2207-2222. doi:10.3758/s13414-025-03072-z.
Abstract
Speech is often accompanied by gestures. Since beat gestures—simple nonreferential up-and-down hand movements—frequently co-occur with prosodic prominence, they can indicate stress in a word and hence influence spoken-word recognition. However, little is known about the reverse influence of auditory speech on visual perception. The current study investigated whether lexical stress has an effect on the perceived timing of hand beats. We used videos in which a disyllabic word, embedded in a carrier sentence (Experiment 1) or in isolation (Experiment 2), was coupled with an up-and-down hand beat, while varying their degrees of asynchrony. Results from Experiment 1, a novel beat timing estimation task, revealed that gestures were estimated to occur closer in time to the pitch peak in a stressed syllable than their actual timing, hence reducing the perceived temporal distance between gestures and stress by around 60%. Using a forced-choice task, Experiment 2 further demonstrated that listeners tended to perceive a gesture, falling midway between two syllables, on the syllable receiving stronger cues to stress than the other, and this auditory effect was greater when gestural timing was most ambiguous. Our findings suggest that f0 and intensity are the driving force behind the temporal attraction effect of stress on perceived gestural timing. This study provides new evidence for auditory influences on visual perception, supporting bidirectionality in audiovisual interaction between speech-related signals that occur in everyday face-to-face communication. -
Mishra, C., Skantze, G., Hagoort, P., & Verdonschot, R. G. (2025). Perception of emotions in human and robot faces: Is the eye region enough? In O. Palinko, L. Bodenhagen, J.-J. Cabibihan, K. Fischer, S. Šabanović, K. Winkle, L. Behera, S. S. Ge, D. Chrysostomou, W. Jiang, & H. He (Eds.), Social Robotics: 16th International Conference, ICSR + AI 2024, Odense, Denmark, October 23–26, 2024, Proceedings (pp. 290-303). Singapore: Springer.
Abstract
The increased interest in developing next-gen social robots has raised questions about the factors affecting the perception of robot emotions. This study investigates the impact of robot appearances (human-like, mechanical) and face regions (full-face, eye-region) on human perception of robot emotions. A between-subjects user study (N = 305) was conducted where participants were asked to identify the emotions being displayed in videos of robot faces, as well as a human baseline. Our findings reveal three important insights for effective social robot face design in Human-Robot Interaction (HRI): Firstly, robots equipped with a back-projected, fully animated face – regardless of whether they are more human-like or more mechanical-looking – demonstrate a capacity for emotional expression comparable to that of humans. Secondly, the recognition accuracy of emotional expressions in both humans and robots declines when only the eye region is visible. Lastly, within the constraint of only the eye region being visible, robots with more human-like features significantly enhance emotion recognition. -
Mitchell, Z. H., Den Hoed, J., Claassen, W., Demurtas, M., Deelen, L., Campeau, P. M., Liu, K., Fisher, S. E., & Trizzino, M. (2025). The NuRD component CHD3 promotes BMP signalling during cranial neural crest cell specification. EMBO Reports, 26(19), 4723-4741. doi:10.1038/s44319-025-00555-w.
Abstract
Pathogenic genetic variants in the NuRD component CHD3 cause Snijders Blok–Campeau Syndrome, a neurodevelopmental disorder manifesting with intellectual disability and craniofacial anomalies. To investigate the role of CHD3 in craniofacial development, we differentiated control and CHD3-depleted human-induced pluripotent stem cells into cranial neural crest cells (CNCCs). In control lines, CHD3 is upregulated in early stages of CNCC specification, where it enhances the BMP signalling response by opening chromatin at BMP-responsive cis-regulatory elements and by increasing expression of BMP-responsive transcription factors, including DLX paralogs. CHD3 loss leads to repression of BMP target genes and loss of chromatin accessibility at cis-regulatory elements usually bound by BMP-responsive factors, causing an imbalance between BMP and Wnt signalling. Consequently, the CNCC specification fails, replaced by aberrant early-mesoderm identity, which can be partially rescued by titrating Wnt levels. Our findings highlight a novel role for CHD3 as a pivotal regulator of BMP signalling, essential for proper neural crest specification and craniofacial development. Moreover, these results suggest a molecular mechanism for the craniofacial anomalies of Snijders Blok–Campeau Syndrome. -
Monen, J., Shkaravska, O., Withers, P., Weustink, J., Van den Heuvel, M., Trilsbeek, P., Dirksmeyer, R., Meyer, A. S., & Hintz, F. (2025). Timing precision of the Individual Differences in Dutch Language Skills (IDLaS-NL) test battery. Frontiers in Human Neuroscience, 19: 1625756. doi:10.3389/fnhum.2025.1625756.
Abstract
Online experimentation has become an essential tool in cognitive psychology, offering access to diverse participant samples. However, remote testing introduces variability in stimulus presentation and response timing due to differences in participant hardware, browsers, and internet conditions. To ensure the validity of online studies, it is crucial to assess the timing precision of experimental software. The present study evaluates the Individual Differences in Dutch Language Skills (IDLaS-NL) test battery, a collection of online tests designed to measure linguistic experience, domain-general cognitive skills, and linguistic processing. Implemented using Frinex, a programming environment developed at the Max Planck Institute for Psycholinguistics, IDLaS-NL allows researchers to customize test selections via a web platform. We conducted two studies to assess the timing precision of five chronometric tests within the battery. In Study 1, we evaluated the initial implementation of the tests, analyzing differences between expected and recorded stimulus presentation times, response latencies, and recording delays using the custom-made Web Experiment Analyzer (WEA). The results indicated imprecisions in some measures, particularly for reaction time and audio recording onset. Visual stimulus presentation, on the other hand, was fairly accurate. Study 2 introduced refined timing mechanisms in Frinex, incorporating specialized triggers for stimulus presentation and response registration. These adjustments improved timing precision, especially for speech production tasks. Overall, our findings confirm that Frinex achieves timing precision comparable to other widely used experimental platforms. While some variability in stimulus presentation and response timing is inherent to online testing, the results provide researchers with useful estimates of expected precision levels when using Frinex. This study contributes to the growing body of research on online testing methodologies by offering empirical insights into timing accuracy in web-based experiments.
Additional information
supplementary materials 1 supplementary materials 2 supplementary materials 3 -
Mooijman, S., Schoonen, R., Goral, M., Roelofs, A., & Ruiter, M. B. (2025). Why do bilingual speakers with aphasia alternate between languages? A study into their experiences and mixing patterns. Aphasiology. Advance online publication. doi:10.1080/02687038.2025.2452928.
Abstract
Background
The factors that contribute to language alternation by bilingual speakers with aphasia have been debated. Some studies suggest that atypical language mixing results from impairments in language control, while others posit that mixing is a way to enhance communicative effectiveness. To address this question, most prior research examined the appropriateness of language mixing in connected speech tasks.
Aims
The goal of this study was to provide new insight into the question whether language mixing in aphasia reflects a strategy to enhance verbal effectiveness or involuntary behaviour resulting from impaired language control.
Methods & procedures
Semi-structured web-based interviews with bilingual speakers with aphasia (N = 19) with varying language backgrounds were conducted. The interviews were transcribed and coded for: (1) Self-reports regarding language control and compensation, (2) instances of language mixing, and (3) in two cases, instances of repair initiation.
Outcomes & results
The results showed that several participants reported language control difficulties but that the knowledge of additional languages could also be recruited to compensate for lexical retrieval problems. Most participants showed no or very few instances of mixing and the observed mixes appeared to adhere to the pragmatic context and known functions of switching. Three participants exhibited more marked switching behaviour and reported corresponding difficulties with language control. Instances of atypical mixing did not coincide with clear problems initiating conversational repair.
Conclusions
Our study highlights the variability in language mixing patterns of bilingual speakers with aphasia. Furthermore, most of the individuals in the study appeared to be able to effectively control their languages, and to alternate between their languages for compensatory purposes. Control deficits resulting in atypical language mixing were observed in a small number of participants. -
Morales, A. E., Dong, Y., Brown, T., Baid, K., Kontopoulos, D.-G., Gonzalez, V., Huang, Z., Ahmed, A.-W., Bhuinya, A., Hilgers, L., Winkler, S., Hughes, G., Li, X., Lu, P., Yang, Y., Kirilenko, B. M., Devanna, P., Lama, T. M., Nissan, Y., Pippel, M., Dávalos, L. M., Vernes, S. C., Puechmaille, S. J., Rossiter, S. J., Yovel, Y., Prescott, J. B., Kurth, A., Ray, D. A., Lim, B. K., Myers, E., Teeling, E. C., Banerjee, A., Irving, A. T., & Hiller, M. (2025). Bat genomes illuminate adaptations to viral tolerance and disease resistance. Nature, 638, 449-458. doi:10.1038/s41586-024-08471-0.
Abstract
Zoonoses are infectious diseases transmitted from animals to humans. Bats have been suggested to harbour more zoonotic viruses than any other mammalian order [1]. Infections in bats are largely asymptomatic [2,3], indicating limited tissue-damaging inflammation and immunopathology. To investigate the genomic basis of disease resistance, the Bat1K project generated reference-quality genomes of ten bat species, including potential viral reservoirs. Here we describe a systematic analysis covering 115 mammalian genomes that revealed that signatures of selection in immune genes are more prevalent in bats than in other mammalian orders. We found an excess of immune gene adaptations in the ancestral chiropteran branch and in many descending bat lineages, highlighting viral entry and detection factors, and regulators of antiviral and inflammatory responses. ISG15, which is an antiviral gene contributing to hyperinflammation during COVID-19 (refs. 4,5), exhibits key residue changes in rhinolophid and hipposiderid bats. Cellular infection experiments show species-specific antiviral differences and an essential role of protein conjugation in antiviral function of bat ISG15, separate from its role in secretion and inflammation in humans. Furthermore, in contrast to humans, ISG15 in most rhinolophid and hipposiderid bats has strong anti-SARS-CoV-2 activity. Our work reveals molecular mechanisms that contribute to viral tolerance and disease resistance in bats.
Additional information
supplementary information -
Morano, L. (2025). The learning of reduced forms in a second language. PhD Thesis, Radboud University Nijmegen, Nijmegen.
Additional information
full text via Radboud Repository -
Mountford, H. S., Eising, E., Fontanillas, P., Auton, A., 23andMe Research Team, Irving-Pease, E. K., Doust, C., Bates, T. C., Martin, N. G., Fisher, S. E., & Luciano, M. (2025). Multivariate genome-wide association analysis of dyslexia and quantitative reading skill improves gene discovery. Translational Psychiatry, 15: 289. doi:10.1038/s41398-025-03514-0.
Abstract
The ability to read is an important life skill and a major route to education. Dyslexia, characterized by difficulties with accurate/fluent word reading and poor spelling, is influenced by genetic variation, with a twin study heritability estimate of 0.4–0.6. Until recently, genomic investigations were limited by modest sample size. We used a multivariate genome-wide association study (GWAS) method, MTAG, to leverage summary statistics from two independent GWAS efforts, boosting power for analyses of dyslexia: the GenLang meta-analysis of word reading (N = 27,180) and the 23andMe, Inc., study of dyslexia (Ncases = 51,800, Ncontrols = 1,087,070). We increased the effective sample size to 1,228,832 participants, representing the largest genetic study of reading-related phenotypes to date. Our analyses identified 80 independent genome-wide significant loci, including 36 regions which were not previously reported as significant. Of these 36 loci, 13 were novel regions with no prior association with dyslexia. We observed clear genetic correlations with cognitive and educational measures. Gene-set analyses revealed significant enrichment of dyslexia-associated genes in four neuronal biological process pathways, and findings were further supported by enrichment of neuronally expressed genes in the developing embryonic brain. Polygenic index analysis of our multivariate results predicted 2.34–4.73% of variance in reading traits in an independent sample, the National Child Development Study cohort (N = 6410). Polygenic adaptation was examined using a large panel of ancient genomes spanning the last ~15 k years. We did not find evidence of selection, suggesting that dyslexia has not been subject to recent selection pressure in Europeans. By combining existing datasets to improve statistical power, these results provide novel insights into the biology of dyslexia. -
Muhinyi, A., Stewart, A. J., & Rowland, C. F. (2025). Encouraging use of complex language in preschoolers: A classroom-based storybook intervention study. Language Learning and Development, 21(4), 399-417. doi:10.1080/15475441.2024.2443447.
Abstract
Preschoolers’ exposure to abstract language (i.e. talk beyond the here and now) during shared reading is associated with language development. This randomized intervention study tested whether preschoolers’ repeated exposure to simple and complex stories (as defined by the inferential demands of the story), and the extratextual talk associated with such stories, would lead to differences in language production during shared reading and to differential gains in vocabulary and narrative skills post intervention. An experimenter read scripted stories to 34 children (3;07–4;11) assigned to one of two story conditions (simple or complex) in small-groups, twice weekly over six weeks. Results showed that children in the complex story condition produced more complex language (as indexed by their mean length of utterance, use of mental and communication verbs, and use of subordinate clauses). However, post-intervention, children’s vocabulary and narrative skills did not differ between conditions. Specific kinds of stories and corresponding extratextual talk by adults may not only increase children’s exposure to rich and challenging input from the extratextual talk, but can also provide valuable opportunities for children to produce complex language. Theoretical and methodological implications are also discussed. -
Muir, M. T., Noll, K., Prinsloo, S., Michener, H., Traylor, J. I., Kumar, V. A., Ene, C. I., Ferguson, S., Liu, H.-L., Weinberg, J. S., Lang, F., Taylor, B. A., Forkel, S. J., & Prabhu, S. S. (2025). Preoperative brain mapping predicts language outcomes after eloquent tumor resection. Human Brain Mapping, 46(15): e70340. doi:10.1002/hbm.70340.
Abstract
When operating on gliomas near critical language regions, surgeons risk either leaving residual tumor or inducing permanent postoperative language deficits (PLDs). Despite the advent of intraoperative mapping techniques, subjective judgments frequently determine important surgical decisions. We aim to inform data-driven surgery by constructing a non-invasive mapping approach that quantitatively predicts the impact of individual surgical decisions on long-term language function. This study included 79 consecutive patients undergoing resection of language-eloquent gliomas. Patients underwent preoperative navigated transcranial magnetic stimulation (TMS) language mapping to identify language-positive sites (“TMS points”) and their associated white matter tracts (“TMS tracts”) as well as formal language evaluations pre- and postoperatively. The resection of regions identified by preoperative mapping was correlated with permanent postoperative language deficits (PLDs). Resected tract segments (RTS) were normalized to MNI space for comparison with normative data. The resection of TMS points did not predict PLDs. However, a TMS point subgroup defined by white matter connectivity significantly predicted PLDs (OR = 8.74, p < 0.01) and demonstrated a canonical distribution of cortical language sites at a group level. TMS tracts recapitulated normative patterns of white matter connectivity defined by the Human Connectome Project. Subcortical resection of TMS tracts predicted PLDs independently of cortical resection (OR = 60, p < 0.001). In patients with PLDs, RTS showed significantly stronger co-localization with normative language-associated tracts compared to RTS in patients without PLDs (p < 0.05). Resecting patient-specific co-localizations between TMS tracts and normative tracts in native space predicted PLDs with an accuracy of 94% (OR = 134, p < 0.001). Prospective application of this data in a patient with glioblastoma precisely predicted the results of intraoperative language mapping with direct subcortical stimulation. Long-term postoperative language deficits result from resecting patient-specific white matter segments. We integrate these findings into a personalized tool that uses TMS language mappings, diffusion tractography, and population-level connectivity to preoperatively predict the long-term linguistic impact of individual surgical decisions.
Additional information
link to preprint -
Müller, T. F., & Raviv, L. (2025). Communication experiments: Social interaction in the formation of novel communication systems. In L. Raviv, & C. Boeckx (Eds.), The Oxford handbook of approaches to language evolution (pp. 41-62). Oxford: Oxford University Press.
Abstract
By studying communicative interactions between humans, we can investigate the basic processes underlying the evolution of language, including how humans manage to communicate in the first place, how they form novel conventions, how they create grammatical structure, and subsequent changes to their conventions and grammar. Communication experiments, which involve interactions between two or more human participants in artificial settings, are a useful method for addressing these questions within a controlled environment. These experiments can help researchers with teasing apart the effects of different variables on the emergence of language, which are typically confounded in naturalistic settings. In this chapter, we first briefly review the history of communication paradigms. We then summarize the procedures, designs, and typical measures that characterize communication experiments. Finally, we discuss the theoretical limitations and methodological challenges of using such paradigms and propose some ways forward. -
Nayak, S., Ladanyi, E., Eising, E., Mekki, Y., Nitin, R., Bush, C. T., Gustavson, D. E., Anglada-Tort, M., Lancaster, H. S., Mosing, M. A., Ullén, F., Magne, C. L., Fisher, S. E., Jacoby, N., & Gordon, R. L. (2025). Musical rhythm abilities and risk for developmental speech-language problems and disorders: Epidemiological and polygenic associations. Nature Communications, 16: 8355. doi:10.1038/s41467-025-60867-2.
Abstract
Impaired musical rhythm abilities and developmental speech-language related disorders are biologically and clinically intertwined. Prior work examining their relationship has primarily used small samples; here, we studied associations at population-scale by conducting the largest systematic epidemiological investigation to date (total N = 39,358). Based on existing theoretical frameworks, we predicted that rhythm impairment would be a significant risk factor for speech-language disorders in the general adult population. Findings were consistent across multiple independent datasets and rhythm subskills (including beat synchronization and rhythm discrimination), and aggregate meta-analyzed data showed that non-linguistic rhythm impairment is a modest but consistent risk factor for developmental speech, language, and reading disorders (OR = 1.33 [1.18 – 1.49]; p < .0001). Further, cross-trait polygenic score analyses (total N = 7180) indicated shared genetic architecture between musical rhythm and reading abilities, suggesting genetic pleiotropy between musicality and language-related phenotypes. -
Norris, D., & McQueen, J. M. (2025). Why might there be lexical-prelexical feedback in speech recognition? Cognition, 255: 106025. doi:10.1016/j.cognition.2024.106025.
Abstract
In reply to Magnuson, Crinnion, Luthra, Gaston, and Grubb (2023), we challenge their conclusion that on-line activation feedback improves word recognition. This type of feedback is instantiated in the TRACE model (McClelland & Elman, 1986) as top-down spread of activation from lexical to phoneme nodes. We give two main reasons why Magnuson et al.'s demonstration that activation feedback speeds up word recognition in TRACE is not informative about whether activation feedback helps humans recognize words. First, the same speed-up could be achieved by changing other parameters in TRACE. Second, more fundamentally, there is room for improvement in TRACE's performance only because the model, unlike Bayesian models, is suboptimal. We also challenge Magnuson et al.'s claim that the available empirical data support activation feedback. The data they base this claim on are open to alternative explanations and there are data against activation feedback that they do not discuss. We argue, therefore, that there are no computational or empirical grounds to conclude that activation feedback benefits human spoken-word recognition and indeed no theoretical grounds why activation feedback would exist. Other types of feedback, for example feedback to support perceptual learning, likely do exist, precisely because they can help listeners recognize words. -
Ohlerth, A.-K., Lavrador, J. P., Vergani, F., & Forkel, S. J. (2025). Combining nTMS and tractography for language mapping: An integrated paradigm for neurosurgical planning. In S. M. Krieg, & T. Picht (Eds.), Navigated Transcranial Magnetic Stimulation in Neurosurgery (pp. 185-213). Berlin: Springer. doi:10.1007/978-3-031-97155-6_10.
Abstract
Recent decades have substantiated the understanding that the function of language relies on an elaborate network of not only cortical but also subcortical structures of the human brain. Therefore, efforts have been made in the neurosurgical field to delineate and preserve the synergy of both cortical language hubs and subcortical white matter tracts on a patient-tailored basis. Preferably, this mapping of function is achieved during the preoperative phase, thereby aiding meticulous presurgical planning. In this chapter, we present the techniques enabling this preoperative functional delineation of language: the combination of cortical stimulation mapping with navigated transcranial magnetic stimulation (nTMS) and tractography of white matter bundles for language. We commence in the first part with a stepwise description of nTMS language mapping and an overview of tractography approaches through region of interest placement and the merging of the two methods. In the second part, we depict the applicability of this combined approach by outlining the theory of language and the anatomy behind the intricate language system that these methods aim to preserve. Lastly, we present an illustrative case in order to depict the implementation in the individual. Through discussing the foundational principles of tractography and the cutting-edge applications of nTMS in language mapping, this chapter elucidates how these technologies can maximize surgical outcomes while preserving human language capacity. -
Özer, D., Özyürek, A., & Göksun, T. (2025). Spatial working memory is critical for gesture processing: Evidence from gestures with varying semantic links to speech. Psychonomic Bulletin & Review, 32, 1639-1653. doi:10.3758/s13423-025-02642-4.
Abstract
Gestures express redundant or complementary information to the speech they accompany by depicting visual and spatial features of referents. In doing so, they recruit both spatial and verbal cognitive resources that underpin the processing of visual semantic information and its integration with speech. The relation between spatial and verbal skills and gesture comprehension, where gestures may serve different roles in relation to speech, is yet to be explored. This study examined the role of spatial and verbal skills in processing gestures that expressed redundant or complementary information to speech during the comprehension of spatial relations between objects. Turkish-speaking adults (N = 74) watched videos describing the spatial location of objects that involved perspective-taking (left-right) or not (on-under) with speech and gesture. Gestures either conveyed redundant information to speech (e.g., saying and gesturing “left”) or complemented the accompanying demonstrative in speech (e.g., saying “here,” gesturing “left”). We also measured participants’ spatial (the Corsi block span and the mental rotation tasks) and verbal skills (the digit span task). Our results revealed nuanced interactions between these skills and spatial language comprehension, depending on the modality in which the information was expressed. One insight emerged prominently. Spatial skills, particularly spatial working memory capacity, were related to enhanced comprehension of visual semantic information conveyed through gestures, especially when this information was not present in the accompanying speech. This study highlights the critical role of spatial working memory in gesture processing and underscores the importance of examining the interplay among cognitive and contextual factors to understand the complex dynamics of multimodal language. -
Ozker, M., & Hagoort, P. (2025). Susceptibility to auditory feedback manipulations and individual variability. PLoS One, 20(5): e0323201. doi:10.1371/journal.pone.0323201.
Additional information
data -
Özyürek, A. (2025). Multimodal language, diversity and neuro-cognition. In D. Bradley, K. Dziubalska-Kołaczyk, C. Hamans, I.-H. Lee, & F. Steurs (Eds.), Contemporary Linguistics Integrating Languages, Communities, and Technologies (pp. 275-284). Leiden: Brill Press. doi:10.1163/9789004715608_023. -
Wu, S.-S., Pan, H., Sheldrick, R. C., Shao, J., Liu, X.-M., Zheng, S.-S., Pereira Soares, S. M., Zhang, L., Sun, J., Xu, P., Chen, S.-H., Sun, T., Pang, J.-W., Wu, N., Feng, Y.-C., Chen, N.-R., Zhang, Y.-T., & Jiang, F. (2025). Development and validation of the Parent-Reported Indicator of Developmental Evaluation for Chinese Children (PRIDE) tool. World Journal of Pediatrics, 21, 183-191. doi:10.1007/s12519-025-00878-7.
Abstract
Background
Developmental delay (DD) poses challenges to children's overall development, necessitating early detection and intervention. Existing screening tools in China focus mainly on children with developmental issues in two or more domains, diagnosed as global developmental delay (GDD). However, the recent rise of early childhood development (ECD) concepts has expanded the focus to include not only those with severe brain development impairments but also children who lag in specific domains due to various social-environmental factors, with the aim of promoting positive development through active intervention. To support this approach, corresponding screening tools need to be developed.
Methods
The current study used a two-phase design to develop and validate the Parent-Reported Indicator of Developmental Evaluation for Chinese Children (PRIDE) tool. In Phase 1, age-specific milestone forms for PRIDE were created through a survey conducted in urban and rural primary care clinics across four economic regions in China. In Phase 2, PRIDE was validated in a community-based sample. Sensitivity and specificity of both PRIDE and Ages and Stages Questionnaires (ASQ)-3 were estimated using inverse probability weights (IPW) and multiple imputation (MI) to address planned and unplanned missing data.
Results
In Phase 1 involving a total of 1160 participants aged 1 to 48 months, 63 items were selected from the initial item pool to create 10 age-specific PRIDE forms. Our Phase 2 study included 777 children within the same age range. PRIDE demonstrated an estimated sensitivity and specificity of 83.3% [95% confidence interval (CI): 56.8%–100.0%] and 84.9% (95% CI: 82.8%–86.9%) in the identification of DD.
Conclusion
The findings suggest that PRIDE holds promise as a sensitive tool for detecting DD in community settings.
Additional information
supplementary information -
Papoutsi, C., Tourtouri, E. N., Piai, V., Lampe, L. F., & Meyer, A. S. (2025). Fast and slow errors: What naming latencies of errors reveal about the interplay of attentional control and word planning in speeded picture naming. Journal of Experimental Psychology: Learning, Memory, and Cognition. Advance online publication. doi:10.1037/xlm0001472.
Abstract
Speakers sometimes produce lexical errors, such as saying “salt” instead of “pepper.” This study aimed to better understand the origin of lexical errors by assessing whether they arise from a hasty selection and premature decision to speak (premature selection hypothesis) or from momentary attentional disengagement from the task (attentional lapse hypothesis). We analyzed data from a speeded picture naming task (Lampe et al., 2023) and investigated whether lexical errors are produced as fast as target (i.e., correct) responses, thus arising from premature selection, or whether they are produced more slowly than target responses, thus arising from lapses of attention. Using ex-Gaussian analyses, we found that lexical errors were slower than targets in the tail, but not in the normal part of the response time distribution, with the tail effect primarily resulting from errors that were not coordinates, that is, members of the target’s semantic category. Moreover, we compared the coordinate errors and target responses in terms of their word-intrinsic properties and found that they were overall more frequent, shorter, and acquired earlier than targets. Given the present findings, we conclude that coordinate errors occur due to a premature selection but in the context of intact attentional control, following the same lexical constraints as targets, while other errors, given the variability in their nature, may vary in their origin, with one potential source being lapses of attention. -
Poletiek, F. H., Hagoort, P., & Bocanegra, B. R. (2025). Recalling sequences from memory can explain the distribution of recursive structures in natural languages. Cognition, 264: 106244. doi:10.1016/j.cognition.2025.106244.
Abstract
Language operates within the cognitive machinery of its users. Hence language structure is likely to evolve under the pressure of cognitive constraints (Christiansen & Chater, 2008). The challenge remains, however, in determining precisely how this may have occurred. Hierarchical recursive structures are especially difficult to relate to finite cognitive features. Here, we propose a new cognitive account explaining why Center Embedded recursive structures of relative clauses (as in The boy A1 the dog A2 chases B2 falls B1) (A1A2B2B1) are ubiquitous among thousands of languages, whereas Crossed-Dependent (CD) structures (A1A2B1B2) hardly ever occur. The preponderance of CE grammars is surprising considering they can produce dependent elements at longer distances than CD. We propose that this can be explained by memory retrieval mechanisms combined with linguistic word binding operations (role assignment). Processing CE requires the sequential retrieval of referent words in a backward direction, and CD in a forward direction. We first specify two Retrieval-and-Binding (R&B) functions, from which we derive mathematically that R&B performance under backwards recall (CE) exceeds performance under forward recall (CD). Next, we reanalyze an existing dataset that investigated strategies of recall and review the literature on sequential recall strategies under conditions that mimic sentence processing. The reanalysis verified the predictions of our account and showed that a backwards recall (CE) strategy is superior under conditions relevant to language processing. We suggest that the productive power of recursive embeddings is best conserved in a CE instantiation because memory mechanisms optimally support the processing of this structure, which might explain why CE has prevailed during language evolution. -
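To make the retrieval asymmetry described in the abstract above concrete, here is a minimal illustrative sketch, not the authors' Retrieval-and-Binding model: closing a centre-embedded dependency amounts to last-in-first-out (backward) retrieval, whereas closing a crossed dependency amounts to first-in-first-out (forward) retrieval. The function name and item labels are invented for illustration.

    # Illustrative sketch only (not the authors' model). "A" items open a
    # dependency, "B" items close one. Centre embedding (CE) resolves
    # dependencies in backward (last-in-first-out) order; crossed
    # dependencies (CD) resolve in forward (first-in-first-out) order.
    from collections import deque

    def resolve(sequence, order):
        """Pair each closing B item with an opening A item."""
        store = [] if order == "backward" else deque()
        pairs = []
        for item in sequence:
            if item.startswith("A"):
                store.append(item)  # open a dependency
            else:
                opener = store.pop() if order == "backward" else store.popleft()
                pairs.append((opener, item))  # bind B to its A
        return pairs

    # CE: The boy (A1) the dog (A2) chases (B2) falls (B1)
    print(resolve(["A1", "A2", "B2", "B1"], order="backward"))  # [('A2', 'B2'), ('A1', 'B1')]
    # CD: A1 A2 B1 B2
    print(resolve(["A1", "A2", "B1", "B2"], order="forward"))   # [('A1', 'B1'), ('A2', 'B2')]

The sketch only shows that the two word orders demand opposite retrieval directions; the paper's argument concerns which direction human memory supports better during sentence processing.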
Postema, A., Van Mierlo, H., Bakker, A. B., & Barendse, M. T. (2025). Study-to-sports spillover among competitive athletes: A field study. International Journal of Sport and Exercise Psychology, 23(3), lxviii-xci. doi:10.1080/1612197X.2022.2058054.
Abstract
Combining academics and athletics is challenging but important for the psychological and psychosocial development of those involved. However, little is known about how experiences in academics spill over and relate to athletics. Drawing on the enrichment mechanisms proposed by the Work-Home Resources model, we posit that study crafting behaviours are positively related to volatile personal resources, which, in turn, are related to higher athletic achievement. Via structural equation modelling, we examine a path model among 243 student-athletes, incorporating study crafting behaviours and personal resources (i.e., positive affect and study engagement), and self- and coach-rated athletic achievement measured two weeks later. Results show that optimising the academic environment by crafting challenging study demands relates positively to positive affect and study engagement. In turn, positive affect related positively to self-rated athletic achievement, whereas – unexpectedly – study engagement related negatively to coach-rated athletic achievement. Optimising the academic environment through cognitive crafting and crafting social study resources did not relate to athletic outcomes. We discuss how these findings offer new insights into the interplay between academics and athletics. -
Quaresima, A., Fitz, H., Hagoort, P., & Duarte, R. (2025). Nonlinear dendritic integration supports Up-Down states in single neurons. The Journal of Neuroscience, 45(26): e1701242025. doi:10.1523/JNEUROSCI.1701-24.2025.
Abstract
Changes in the activity profile of cortical neurons are due to effects at the scale of local and long-range networks. Accordingly, abrupt transitions in the state of cortical neurons—a phenomenon known as Up-Down states—have been attributed to variation in the activity of afferent neurons. However, cellular physiology and morphology may also play a role in causing Up-Down states. This study examines the impact of dendritic nonlinearities, particularly those mediated by voltage-dependent NMDA receptors, on the response of cortical neurons to balanced excitatory/inhibitory synaptic inputs. Using a neuron model with two segregated dendritic compartments, we compared cells with and without dendritic nonlinearities. NMDA receptors boosted somatic firing in the balanced condition and increased the correlation between membrane potentials across the compartments of the neuron model. Dendritic nonlinearities elicited strong bimodality in the distribution of the somatic potential when the cell was driven with cortical-like input. Moreover, dendritic nonlinearities could detect small input fluctuations and lead to Up-Down states whose statistics and dynamics closely resemble electrophysiological data. Up-Down states also occurred in recurrent networks with oscillatory firing activity, as in anaesthetized animal models, when dendritic NMDA receptors were partially disabled. These findings suggest that there is a dissociation between cellular and network-level features that could both contribute to the emergence of Up-Down states. Our study highlights the complex interplay between dendritic integration and activity-driven dynamics in the origin of cortical bistability. -
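For readers unfamiliar with the nonlinearity invoked in the abstract above, the voltage dependence of the NMDA receptor conductance is commonly modelled with the magnesium-block term of Jahr and Stevens (1990); the parameters used in the paper may differ, so this is only a standard textbook form:

    g_{\mathrm{NMDA}}(V, t) \;=\; \bar{g}\, s(t) \,\Big/\, \Big(1 + \tfrac{[\mathrm{Mg}^{2+}]}{3.57\ \mathrm{mM}}\, e^{-0.062\, V/\mathrm{mV}}\Big)

where s(t) is the fraction of open channels driven by synaptic input. At hyperpolarized potentials the magnesium term suppresses the current; depolarization relieves the block, which is what makes dendritic NMDA integration supralinear.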
Rapado-Tamarit, B., Méndez-Aróstegui, M., de Reus, K., Sarraude, T., Pen, I., & Groothuis, T. G. G. (2025). Age estimation and growth patterns in young harbor seals (Phoca vitulina vitulina) during rehabilitation. Journal of Mammalogy, 106(2), 491-504. doi:10.1093/jmammal/gyae128.
Abstract
To study patterns in behavior, fitness, and population dynamics, estimating the age of the individuals is often a necessity. Specifically, age estimation of young animals is very important for animal rehabilitation centers because it may determine if the animal should be taken in and, if so, what care is optimal for its rehabilitation. Accurate age estimation is also important to determine the growth pattern of an individual, and it is needed to correctly interpret the influence of early body condition on its growth trajectories. The purpose of our study was to find body measurements that function as good age estimators in young (up to 3 months old) harbor seals (Phoca vitulina vitulina), placing emphasis on noninvasive techniques that can be used in the field. To meet this goal, body mass (BM), dorsal standard length (DSL), upper canine length (CL), body condition (BC), and sex were determined from 45 harbor seal pups of known age. Generalized additive mixed models were fitted to find how well these morphometric measures predicted age, and the results from the selected model were used to compute growth curves and to create a practical table to determine the age of young animals in the field. We found that both DSL and CL—and to some extent sex—were useful predictors for estimating age in young harbor seals and that the growth rate of pups raised in captivity is significantly lower than for those raised in the wild. In addition, we found no evidence for compensatory growth, given that animals that arrived at the center with a poor BM or BC continued to show lower BM or BC throughout almost the entire rehabilitation period.
Additional information
Data availability -
Raviv, L., & Boeckx, C. (Eds.). (2025). The Oxford handbook of approaches to language evolution. Oxford: Oxford University Press. -
Raviv, L., Blasi, D., & Kempe, V. (2025). Children are not the main agents of language change. Psychological Review. Advance online publication. doi:10.1037/rev0000580.
Abstract
The long-standing claim that young children are the main agents of language change is often presented as an established fact, and has tacitly guided research in developmental science and evolutionary linguistics. It rests on the assumption that language change arises from language acquisition errors predominantly committed by children. Here, we review whether arguments in support of this idea stand up to logical and empirical scrutiny. We conclude that while children’s imperfect learning indeed leads them to produce input-divergent linguistic variants, there is no convincing evidence that it is these child-generated innovations that eventually spread through the language community, nor that language change is mainly driven by constraints and biases operating uniquely in children. By exposing the conceptual and empirical shortcomings of overemphasizing children as the agents of language change, we hope to rebalance the field toward a more nuanced understanding of how individual- and population-level processes shape language change. -
Raykov, P. P., Daly, J., Fisher, S. E., Eising, E., Geerligs, L., & Bird, C. M. (2025). No effect of apolipoprotein E polymorphism on MRI brain activity during movie watching. Brain and Neuroscience Advances, 9. Advance online publication. doi:10.1177/23982128251314577.
Abstract
Apolipoprotein E ε4 is a major genetic risk factor for Alzheimer’s disease, and some apolipoprotein E ε4 carriers show Alzheimer’s disease–related neuropathology many years before cognitive changes are apparent. Therefore, studying healthy apolipoprotein E genotyped individuals offers an opportunity to investigate the earliest changes in brain measures that may signal the presence of disease-related processes. For example, subtle changes in functional magnetic resonance imaging functional connectivity, particularly within the default mode network, have been described when comparing healthy ε4 carriers to ε3 carriers. Similarly, very mild impairments of episodic memory have also been documented in healthy apolipoprotein E ε4 carriers. Here, we use a naturalistic activity (movie watching), and a marker of episodic memory encoding (transient changes in functional magnetic resonance imaging activity and functional connectivity around so-called ‘event boundaries’), to investigate potential phenotype differences associated with the apolipoprotein E ε4 genotype in a large sample of healthy adults. Using Bayes factor analyses, we found strong evidence against existence of differences associated with apolipoprotein E allelic status. Similarly, we did not find apolipoprotein E-associated differences when we ran exploratory analyses examining: functional system segregation across the whole brain, and connectivity within the default mode network. We conclude that apolipoprotein E genotype has little or no effect on how ongoing experiences are processed in healthy adults. The mild phenotype differences observed in some studies may reflect early effects of Alzheimer’s disease–related pathology in apolipoprotein E ε4 carriers. -
Rheault, F., Mayberg, H., Thiebaut de Schotten, M., Roebroeck, A., & Forkel, S. J. (2025). The scientific value of tractography: Accuracy vs usefulness. Brain Structure & Function, 230: 59. doi:10.1007/s00429-025-02921-9.
Abstract
Tractography has emerged as a central tool for mapping the cerebral white matter architecture. However, its scientific value continues to be a subject of debate, given its inherent limitations in anatomical accuracy. This concise communication showcases key points of a debate held at the 2024 Tract-Anat Retreat, addressing the trade-offs between the accuracy and utility of tractography. While tractography remains constrained by limitations related to resolution, sensitivity, and validation, its usefulness and utility in areas such as surgical planning, disorder prediction, and the elucidation of brain development are emphasized. These perspectives highlight the necessity of context-specific interpretation, anatomically informed algorithms, and the continuous refinement of tractography workflows to achieve an optimal balance between accuracy and utility. -
Rivera-Olvera, A., Houwing, D. J., Ellegood, J., Masifi, S., Martina, S., Silberfeld, A., Pourquie, O., Lerch, J. P., Francks, C., Homberg, J. R., Van Heukelum, S., & Grandjean, J. (2025). The universe is asymmetric, the mouse brain too. Molecular Psychiatry, 30, 489-496. doi:10.1038/s41380-024-02687-2.
Abstract
Hemispheric brain asymmetry is a basic organizational principle of the human brain and has been implicated in various psychiatric conditions, including autism spectrum disorder. Brain asymmetry is not a uniquely human feature and is observed in other species such as the mouse. Yet, asymmetry patterns are generally nuanced, and substantial sample sizes are required to detect these patterns. In this pre-registered study, we use a mouse dataset from the Province of Ontario Neurodevelopmental Network, which comprises structural MRI data from over 2000 mice, including genetic models for autism spectrum disorder, to reveal the scope and magnitude of hemispheric asymmetry in the mouse. Our findings demonstrate the presence of robust hemispheric asymmetry in the mouse brain, such as larger right hemispheric volumes towards the anterior pole and larger left hemispheric volumes toward the posterior pole, opposite to what has been shown in humans. This suggests the existence of species-specific traits. Further clustering analysis identified distinct asymmetry patterns in autism spectrum disorder models, a phenomenon that is also seen in atypically developing participants. Our study shows potential for the use of mouse models in studying the biological bases of typical and atypical brain asymmetry but also warrants caution as asymmetry patterns seem to differ between humans and mice. -
Roebroeck, A., Haber, S., Borra, E., Schiavi, S., Forkel, S. J., Rockland, K., Dyrby, T. B., & Schilling, K. (2025). Animal models are useful in studying human neuroanatomy with tractography. Brain Structure & Function, 230: 79. doi:10.1007/s00429-025-02945-1.
Abstract
Despite the impact of tractography on human brain mapping, direct validation and biological interpretation remain challenging. This short communication summarizes the key points of a debate held at the 2024 Tract-Anat Retreat on whether animal models are useful for studying human neuroanatomy with diffusion MRI tractography. While recognizing limitations, such as anatomical and biological differences between species, hardware and acquisition considerations and direct translation and interpretation, we identified immense value and utility of animal models for tractography including validation with histology, acquiring high-resolution datasets, exploring disease mechanisms, and advancing comparative neuroanatomy. These perspectives highlight the translational potential of preclinical models to inform tractography methodologies and underscore the need for careful species selection, methodological rigor, and ethical oversight in cross-species neuroimaging research. -
Rohrer, P. L., Bujok, R., Van Maastricht, L., & Bosker, H. R. (2025). From “I dance” to “she danced” with a flick of the hands: Audiovisual stress perception in Spanish. Psychonomic Bulletin & Review, 32, 2136-2145. doi:10.3758/s13423-025-02683-9.
Abstract
When talking, speakers naturally produce hand movements (co-speech gestures) that contribute to communication. Evidence in Dutch suggests that the timing of simple up-and-down, non-referential “beat” gestures influences spoken word recognition: the same auditory stimulus was perceived as CONtent (noun, capitalized letters indicate stressed syllables) when a beat gesture occurred on the first syllable, but as conTENT (adjective) when the gesture occurred on the second syllable. However, these findings were based on a small number of minimal pairs in Dutch, limiting the generalizability of the findings. We therefore tested this effect in Spanish, where lexical stress is highly relevant in the verb conjugation system, distinguishing bailo, “I dance” with word-initial stress from bailó, “she danced” with word-final stress. Testing a larger sample (N = 100), we also assessed whether individual differences in working memory capacity modulated how much individuals relied on the gestures in spoken word recognition. The results showed that, similar to Dutch, Spanish participants were biased to perceive lexical stress on the syllable that visually co-occurred with a beat gesture, with the effect being strongest when the acoustic stress cues were most ambiguous. No evidence was found for by-participant effect sizes to be influenced by individual differences in phonological or visuospatial working memory. These findings reveal gestural-speech coordination impacts lexical stress perception in a language where listeners are regularly confronted with such lexical stress contrasts, highlighting the impact of gestures’ timing on prominence perception and spoken word recognition. -
Roos, N. M. (2025). Naming a picture in context: Paving the way to investigate language recovery after stroke. PhD Thesis, Radboud University Nijmegen, Nijmegen.
Additional information
link to Radboud Repository -
Rossi, G. (2025). Systems of social action: The case of requesting in Italian. New York: Oxford University Press. doi:10.1093/oso/9780190690731.001.0001.
Abstract
This book is about social action as it is carried out in everyday life. To some readers, the phrase social action may evoke the idea of people taking the initiative for change at the political and economic level of society; these are social actions that take days, months, or years to accomplish. The kinds of actions this book is concerned with are, instead, much more rapid and minute. They are actions performed on the fly, in the back and forth of ordinary interaction; they are actions like questions, answers, complaints, compliments, and requests. As a species of social action, requests serve a basic function: getting help from others. Every day in a community, people make hundreds of requests of one another, most of which are small requests for mundane things such as passing an item or performing a service around the house. Another way of thinking about requests is as the exercise of social influence in our everyday encounters, the kind of influence with tangible effects on the subsequent conduct of those around us. The book reports on an extensive study of requests “in the wild,” through the methodical observation and analysis of naturally occurring social interactions captured on video. Using the case of everyday requests among speakers of Italian, the book shows that our resources for social action are organized in systems—that is, in coherent sets of interdependent practices. It argues that such systems are part of the social order, as they shape, constrain, and enable interaction between people. -
Rowland, C. F., Bidgood, A., Jones, G., Jessop, A., Stinson, P., Pine, J. M., Durrant, S., & Peter, M. S. (2025). Simulating the relationship between nonword repetition performance and vocabulary growth in 2-Year-olds: Evidence from the language 0–5 project. Language Learning, 75(2), 379-423. doi:10.1111/lang.12671.
Abstract
A strong predictor of children's language is performance on non-word repetition (NWR) tasks. However, the basis of this relationship remains unknown. Some suggest that NWR tasks measure phonological working memory, which then affects language growth. Others argue that children's knowledge of language/language experience affects NWR performance. A complicating factor is that most studies focus on school-aged children, who have already mastered key language skills. Here, we present a new NWR task for English-learning 2-year-olds, use it to assess the effect of NWR performance on concurrent and later vocabulary development, and compare the children's performance with that of an experience-based computational model (CLASSIC). The new NWR task produced reliable results; replicating wordlikeness effects, word-length effects, and the relationship with concurrent and later language ability we see in older children. The model also simulated all effects, suggesting that the relationship between vocabulary and NWR performance can be explained by language experience-/knowledge-based theories. -
Rowland, C. F., Westermann, G., Theakston, A. L., Pine, J. M., Monaghan, P., & Lieven, E. V. (2025). Constructing language: A framework for explaining acquisition. Trends in Cognitive Sciences. Advance online publication. doi:10.1016/j.tics.2025.05.015.
Abstract
Explaining how children build a language system is a central goal of research in language acquisition, with broad implications for language evolution, adult language processing, and artificial intelligence (AI). Here, we propose a constructivist framework for future theory-building in language acquisition. We describe four components of constructivism, drawing on wide-ranging evidence to argue that theories based on these components will be well suited to explaining developmental change. We show how adopting a constructivist framework both provides plausible answers to old questions (e.g., how children build linguistic representations from their input) and generates new questions (e.g., how children adapt to the affordances provided by different cultures and languages). -
Rubianes, M., Jiménez-Ortega, L., Muñoz, F., Drijvers, L., Almeida-Rivera, T., Sánchez-García, J., Fondevila, S., Casado, P., & Martín-Loeches, M. (2025). Effects of subliminal emotional facial expressions on language comprehension as revealed by event-related brain potentials. Scientific Reports, 15: 20449. doi:10.1038/s41598-025-06037-2.
Abstract
Emotional facial expressions often take place during communicative face-to-face interactions. Yet little is known as to whether natural spoken processing can be modulated by emotional expressions during online processing. Furthermore, the functional independence of syntactic processing from other cognitive and affective processes remains a long-standing debate in the literature. To address these issues, this study investigated the influence of masked emotional facial expressions on syntactic speech processing. Participants listened to sentences that could contain morphosyntactic anomalies while a masked emotional expression was presented for 16 ms (i.e., subliminally) just preceding the critical word. A larger Left Anterior Negativity (LAN) amplitude was observed for both emotional faces (i.e., happy and angry) compared to neutral ones. Moreover, a larger LAN amplitude was found for angry faces than for happy faces. Finally, a reduced P600 amplitude was observed only for angry faces when compared to neutral faces. Collectively, the results presented here indicate that first-pass syntactic parsing is influenced by emotional visual stimuli even under masked conditions and that this effect extends also to later linguistic processes. These findings constitute evidence in favor of an interactive view of language processing as integrated within a complex and integrated system for human communication.
Additional information
supplementary information -
Rubianes, M., Muñoz, F., Drijvers, L., & Martín-Loeches, M. (2025). Brain signal variability is reduced during self-face processing irrespective of emotional facial expressions: Evidence from multiscale entropy analysis. Cortex, 192, 1-17. doi:10.1016/j.cortex.2025.08.007.
Abstract
Prior research shows that self-referential information (e.g., seeing one's own face) is prioritized in human cognition. However, the brain signal variability underlying self-processing remains scarcely treated in the literature. Additionally, less is known about whether the processing of self-referential visual content can be modulated by facial expressions of emotion, as these resemble more natural situations than neutral expressions. This study therefore investigated the brain signal variability underlying self-referential visual processing and its possible interaction with emotional facial expressions, as indexed by multiscale entropy analysis (MSE). This metric captures the temporal complexity or variability contained in neural patterns at varying timescales. Thirty-two participants were presented with distinctive facial identities (self, friend, and unknown) displaying different facial expressions (happy, neutral, and angry) and performed an identity recognition task. Our results showed that brain signal variability decreases in response to self-faces compared to other identities. Similarly, brain signal variability also decreases for friend faces relative to unknown faces. This reduction in complexity could be indicative of greater efficiency during the preferential processing of personally relevant stimuli. Furthermore, the data observed here show that self-processing is unaffected by facial expressions of emotion, suggesting an independent processing of identity from more dynamic facial information, particularly when the task demands are focused on identity recognition. These results provide novel evidence of the moment-to-moment brain signal variability involved in the identity of the self and others. The evidence presented here adds to a growing literature highlighting the relevance of neural variability for understanding brain-behavior relationships. -
Rubio-Fernandez, P. (2025). First acquiring articles in a second language: A new approach to the study of language and social cognition. Lingua, 313: 103851. doi:10.1016/j.lingua.2024.103851.
Abstract
Pragmatic phenomena are characterized by extreme variability, which makes it difficult to draw sound generalizations about the role of social cognition in pragmatic language by and large. I introduce cultural evolutionary pragmatics as a new framework for the study of the interdependence between language and social cognition, and point at the study of common-ground management across languages and ages as a way to test the reliance of pragmatic language on social cognition. I illustrate this new research line with three experiments on article use by second language speakers, whose mother tongue lacks articles. These L2 speakers are known to find article use challenging and it is often argued that their difficulties stem from articles being pragmatically redundant. Contrary to this view, the results of this exploratory study support the view that proficient article use requires automatizing basic socio-cognitive processes, offering a window into the interdependence between language and social cognition. -
Rubio-Fernandez, P., Berke, M. D., & Jara-Ettinger, J. (2025). Tracking minds in communication. Trends in Cognitive Sciences, 29(3), 269-281. doi:10.1016/j.tics.2024.11.005.
Abstract
How might social cognition help us communicate through language? At what levels does this interaction occur? In classical views, social cognition is independent of language, and integrating the two can be slow, effortful, and error-prone. But new research into word level processes reveals that communication is brimming with social micro-processes that happen in real time, guiding even the simplest choices like how we use adjectives, articles, and demonstratives. We interpret these findings in the context of advances in theoretical models of social cognition and propose a Communicative Mind-Tracking framework, where social micro-processes aren’t a secondary process in how we use language—they are fundamental to how communication works. -
Sametoğlu, S., Pelt, D. H. M., & Bartels, M. (2025). The association between frequency of social media use, wellbeing, and depressive symptoms: Disentangling genetic and environmental factors. Behavior Genetics, 55, 255-269. doi:10.1007/s10519-025-10224-2.
Abstract
Meta-analyses report small to moderate effect sizes or inconsistent associations (usually around r = -0.10) between wellbeing (WB) and social media use (SMU) and between anxious-depressive symptoms (ADS) and SMU (also around r = 0.10). This study employs the classical twin design, utilizing data from 6492 individuals from the Netherlands Twin Register, including 3369 MZ twins (893 complete twin pairs, 1583 incomplete twin pairs) and 3123 DZ twins (445 complete, 2233 incomplete) to provide insights into the sources of overlap between WB/ADS and SMU. Both hedonic and eudaimonic WB scales were used. SMU was measured by (1) the time spent on different social media platforms (SMUt), (2) the frequency of posting on social media (SMUf), and (3) the number of social media accounts individuals have (SMUn). Our results confirmed the low phenotypic correlations between WB and SMU (between r = -0.09 and 0.04) as well as between ADS and SMU (between r = 0.07 and 0.10). For SMU, heritability estimates between 32 and 72% were obtained. The small but significant phenotypic correlations between WB/ADS and the SMU phenotypes were mainly determined by genetic factors (in the range of 80-90%). For WB and SMU, genetic correlations were between -0.10 and -0.0, and for ADS and SMU genetic correlations were between 0.10 and 0.23. Genetic correlations implied limited but statistically significant sets of genes that affect WB/ADS and SMU levels. Overall, the results indicate that there is evidence that the small associations between WB/ADS and SMU are partly driven by overlapping genetic influences. We encourage researchers and experts to consider more personalized approaches when considering the association between WB and SMU, as well as understanding the reasons for individuals’ observed SMU levels.
Additional information
supplementary material -
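As background to the classical twin design mentioned in the abstract above (a textbook sketch, not necessarily the exact model fitted in the paper): phenotypic variance is decomposed into additive genetic (A), shared environmental (C), and non-shared environmental (E) components, and the monozygotic and dizygotic twin correlations constrain these components as

    r_{MZ} = a^2 + c^2, \qquad r_{DZ} = \tfrac{1}{2}\,a^2 + c^2

so that, in the simplest (Falconer) approximation, heritability is a^2 \approx 2\,(r_{MZ} - r_{DZ}). In practice, studies of this kind estimate the components by structural equation modelling rather than from these point formulas.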
Sander, J., Zhang, Y., & Rowland, C. F. (2025). Language acquisition occurs in multimodal social interaction: A commentary on Karadöller, Sümer and Özyürek [invited commentary]. First Language. Advance online publication. doi:10.1177/01427237251326984.
Abstract
We argue that language learning occurs in triadic interactions, where caregivers and children engage not only with each other but also with objects, actions and non-verbal cues that shape language acquisition. We illustrate this using two studies on real-time interactions in spoken and signed language. The first examines shared book reading, showing how caregivers use speech, gestures and gaze coordination to establish joint attention, facilitating word-object associations. The second study explores joint attention in spoken and signed interactions, demonstrating that signing dyads rely on a wider range of multimodal behaviours – such as touch, vibrations and peripheral gaze – compared to speaking dyads. Our data highlight how different language modalities shape attentional strategies. We advocate for research that fully incorporates the dynamic interplay between language, attention and environment. -
Sander, J., Meister, N.-K., Finkbeiner, T. A., Rowland, C. F., Steinbach, M., Friederici, A. D., Zaccarella, E., & Trettenbrein, P. C. (2025). Deaf signers adapt their eye gaze behaviour when processing an unknown sign language. In D. Barner, N. R. Bramley, A. Ruggeri, & C. M. Walker (Eds.), Proceedings of the 47th Annual Meeting of the Cognitive Science Society (CogSci 2025) (pp. 1998-2005).
Abstract
Sign languages are perceived visually and externalized using a signer's hands, face, and upper body. During sign language comprehension, deaf signers primarily focus their gaze on the face, while hearing non-signers attend more to the hands of a signer. Little is known about whether deaf signers adapt their gaze behaviour when processing unknown signs. Here, we report eye-tracking data from 15 deaf native signers of German Sign Language (DGS) and 15 hearing non-signers who were presented with videos in either DGS or an unknown sign language, all containing no linguistic mouth actions. Our data confirm that deaf signers generally fixate more on the face of a signer than hearing non-signers who attend to the hands in sign space. Moreover, only deaf signers increase their attention to the hands when processing video stimuli consisting of unknown signs compared to familiar signs, suggesting similar adjustment behaviours as observed in spoken languages.
Additional information
Link to escholarship -
Sander, J., Rowland, C. F., & Lieberman, A. M. (2025). Caregivers use joint attention to support sign language acquisition in deaf children. Developmental Science, 28: e70034. doi:10.1111/desc.70034.
Abstract
Children's ability to share attention with another social partner (joint attention) plays an important role in language development. However, our understanding of the role of joint attention comes mainly from children learning spoken languages, which gives a very narrow, speech-centric impression of the role of joint attention. This study broadens the scope by examining how deaf children learning a sign language achieve joint attention with their caregivers during natural social interaction, and how caregivers provide word learning opportunities. We analyzed naturalistic play sessions of 54 caregiver-child dyads using American Sign Language (ASL), and identified joint attention that surrounded caregivers’ labeling of either familiar or novel objects using a comprehensive multimodal coding scheme. We observed that dyads using ASL establish joint attention using linguistic, visual, and tactile cues, and that most naming events took place in the context of a successful joint attention episode. Key characteristics of these joint attention episodes were significantly correlated with the children's expressive vocabulary size, mirroring the patterns observed for spoken language acquisition. We also found that sign familiarity as well as the order of mention of object labels affected the timing of naming events within joint attention. Our results suggest that caregivers using ASL are highly sensitive to their child's visual attention in interactions and modulate joint attention differently when providing familiar versus novel object labels. These joint attentional episodes facilitate word learning in sign language, just as they do in spoken language interactions. -
Satoer, D., Dulyan, L., & Forkel, S. J. (2025). Oncology: Brain asymmetries in language-relevant brain tumors. In C. Papagno, & P. Corballis (Eds.), Cerebral Asymmetries: Handbook of Clinical Neurology (pp. 65-87). Amsterdam: Elsevier.
Abstract
Brain tumors are classified as rare diseases, with an annual occurrence of 300,000 cases, and they account for an annual loss of 241,000 lives, highlighting their devastating nature. Recent advancements in diagnosis and treatment have significantly improved the management and care of brain tumors. This chapter provides an overview of the common types of primary brain tumors affecting language functions—gliomas and meningiomas. Techniques for identifying and mapping critical language areas, including the white matter language system, such as awake brain tumor surgery and diffusion-weighted tractography, are pivotal for understanding language localization and informing personalized treatment approaches. Numerous studies have demonstrated that gliomas in the dominant hemisphere can lead to (often subtle) impairments across various cognitive domains, with a particular emphasis on language. Recently, increased attention has been directed toward (nonverbal) cognitive deficits in patients with gliomas in the nondominant hemisphere, as well as cognitive outcomes in patients with meningiomas, a group historically overlooked. A patient-tailored approach to language and cognitive functions across the pre-, intra-, and postoperative phases is mandatory for brain tumor patients to preserve quality of life. Continued follow-up studies, in conjunction with advanced imaging techniques, are crucial for understanding the brain's potential for neuroplasticity and optimizing patient outcomes. -
Severijnen, G. G. A. (2025). A blessing in disguise: How prosodic variability challenges but also aids successful speech perception. PhD Thesis, Radboud University Nijmegen, Nijmegen.
Additional information
full text via Radboud Repository -
Severijnen, G. G. A., Bosker, H. R., & McQueen, J. M. (2025). Is rate-dependent perception affected by linguistic information about the intended syllable rate? Psychonomic Bulletin & Review. Advance online publication. doi:10.3758/s13423-025-02746-x.
Abstract
Speech is highly variable in rate, challenging the perception of sound contrasts that are dependent on duration. Listeners deal with such variability by perceiving incoming speech relative to the rate in the surrounding context. For instance, the same ambiguous vowel is more likely to be perceived as being long when embedded in a fast sentence, but as short when embedded in a slow sentence. However, it is still debated to what extent domain-general and domain-specific mechanisms (i.e., language- or speech-specific mechanisms) contribute to rate-dependent perception. Here we examined the role of domain-specific mechanisms in an implicit rate-normalization task in which we manipulated linguistic knowledge about how many syllables words have. Dutch participants were presented with lists of Dutch words that were acoustically ambiguous with regard to having one or two syllables (e.g., /kᵊˈlɔm/ can be monosyllabic klom, /klɔm/, or bisyllabic kolom, /ko.ˈlɔm/). While being presented with these ambiguous word lists, they saw monosyllabic or bisyllabic transcriptions of the lists on the screen. We predicted that the same acoustic stimulus would be perceived as faster (more syllables per second) when combined with bisyllabic orthography compared to monosyllabic orthography. In turn, this would lead to downstream influences on vowel length perception in target words embedded within the word lists (rate-dependent perception of Dutch /ɑ/ vs. /aː/). Despite evidence of successful orthographic disambiguation of the ambiguous word lists, we did not find evidence that linguistic knowledge influenced participants’ rate-dependent perception. Our results are best accounted for by a domain-general account of rate-dependent perception. -
Sha, Z., & Francks, C. (2025). Large-scale genetic mapping for human brain asymmetry. In C. Papagno, & P. Corballis (Eds.), Handbook of Clinical Neurology: Cerebral Asymmetries (pp. 241-254). Amsterdam: Elsevier.
Abstract
Left-right asymmetry is an important aspect of human brain organization for functions including language and hand motor control, which can be altered in some psychiatric traits. The last five years have seen rapid advances in the identification of specific genes linked to variation in asymmetry of the human brain and/or handedness. These advances have been driven by a new generation of large-scale genome-wide association studies, carried out in samples ranging from roughly 16,000 to over 1.5 million participants. The implicated genes tend to be most active in the embryonic and fetal brain, consistent with early developmental patterning of brain asymmetry. Several of the genes encode components of microtubules, or other microtubule-associated proteins. Microtubules are key elements of the internal cellular skeleton (cytoskeleton). A major challenge remains to understand how these genes affect, or even induce, the brain’s left-right axis. Several of the implicated genes have also been associated with psychiatric or neurological disorders, and polygenic dispositions to autism and schizophrenia have been associated with structural brain asymmetry. Knowledge of developmental mechanisms that lead to hemispheric specialization may ultimately help to define etiologic subtypes of brain disorders. -
Singh, L., Basnight-Brown, D., Cheon, B. K., Garcia, R., Killen, M., & Mazuka, R. (2025). Ethical and epistemic costs of a lack of geographical and cultural diversity in developmental science. Developmental Psychology, 61(1), 1-18. doi:10.1037/dev0001841.
Abstract
Increasing geographical and cultural diversity in research participation has been a key priority for psychological researchers. In this article, we track changes in participant diversity in developmental science over the past decade. These analyses reveal surprisingly modest shifts in global diversity of research participants over time, calling into question the generalizability of our empirical foundation. We provide examples from the study of early child development of the significant epistemic and ethical costs of a lack of geographical and cultural diversity to demonstrate why greater diversification is essential to a generalizable science of human development. We also discuss strategies for diversification that could be implemented throughout the research ecosystem in the service of a culturally anchored, generalizable, and replicable science. -
Slaats, S., & Martin, A. E. (2025). What’s surprising about surprisal. Computational Brain & Behavior, 8, 233-248. doi:10.1007/s42113-025-00237-9.
Abstract
In the computational and experimental psycholinguistic literature, the mechanisms behind syntactic structure building (e.g., combining words into phrases and sentences) are the subject of considerable debate. Much experimental work has shown that surprisal is a good predictor of human behavioral and neural data. These findings have led some authors to model language comprehension in a purely probabilistic way. In this paper, we use simulation to exemplify why surprisal works so well to model human data and to illustrate why exclusive reliance on it can be problematic for the development of mechanistic theories of language comprehension, particularly those with emphasis on meaning composition. Rather than arguing for the importance of structural or probabilistic information to the exclusion or exhaustion of the other, we argue more emphasis should be placed on understanding how the brain leverages both types of information (viz., statistical and structured). We propose that probabilistic information is an important cue to the structure in the message, but is not a substitute for the structure itself—neither computationally, formally, nor conceptually. Surprisal and other probabilistic metrics must play a key role as theoretical objects in any explanatory mechanistic theory of language processing, but that role remains in the service of the brain’s goal of constructing structured meaning from sensory input.Additional information
supplementary materials -
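For readers unfamiliar with the metrics discussed in the Slaats and Martin entry above, surprisal (and the related notion of lexical entropy) is standardly defined as follows. This is a notational sketch only; the particular language models used to estimate the probabilities vary across studies.

    % Notational sketch of the standard information-theoretic definitions; estimators vary by study.
    % Surprisal of word w_t given its preceding context:
    \[
      \mathrm{surprisal}(w_t) \;=\; -\log_2 P\left(w_t \mid w_1, \ldots, w_{t-1}\right)
    \]
    % Lexical entropy: the expected surprisal over the vocabulary V at position t:
    \[
      H_t \;=\; -\sum_{w \in V} P\left(w \mid w_1, \ldots, w_{t-1}\right)\,
                \log_2 P\left(w \mid w_1, \ldots, w_{t-1}\right)
    \]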
Slim, M. S., Lauwers, P., & Hartsuiker, R. J. (2025). Revisiting the logic in language: The scope of each and every universal quantifier is alike after all. Journal of Memory and Language, 144: 104661. doi:10.1016/j.jml.2025.104661.
Abstract
A doubly-quantified sentence like Every bear approached a tent is ambiguous: Did every bear approach a different tent, or did they approach the same tent? These two interpretations are assumed to be mentally represented as logical representations, which specify how the different quantifiers are assigned scope with respect to each other. Based on a structural priming study, Feiman and Snedeker (2016) argued that logical representations capture quantifier-specific combinatorial properties (e.g., the specification of every differs from the specification of each in logical representations). We re-examined this conclusion by testing logical representation priming in Dutch. Across four experiments, we observed that priming of logical representations emerged if the same quantifiers are repeated in prime and target, but also if the prime and target contained different quantifiers. However, logical representation priming between quantifiers emerged less consistently than priming within the same quantifier. More specifically, our results suggest that priming between quantifiers emerges more robustly if the participant is presented with quantifier variation in the prime trials. When priming between quantifiers emerged, however, its strength was comparable to priming within the same quantifier. Therefore, we conclude that logical representations do not specify quantifier-specific biases in the assignment of scope.Additional information
data and analyses scripts -
Slivac, K., Hagoort, P., & Flecken, M. (2025). Cognitive and neural mechanisms of linguistic influence on perception. Psychological Review, 132(2), 364-379. doi:10.1037/rev0000546.
Abstract
To date, research has reliably shown that language can engage and modify perceptual processes in a top-down manner. However, our understanding of the cognitive and neural mechanisms underlying such top-down influences is still under debate. In this review, we provide an overview of findings from literature investigating the organization of semantic networks in the brain (spontaneous engagement of the visual system while processing linguistic information), and linguistic cueing studies (looking at the immediate effects of language on the perception of a visual target), in an effort to isolate such mechanisms. Additionally, we connect the findings from linguistic cueing studies to those reported in (nonlinguistic) literature on priors in perception, in order to find commonalities in neural processes allowing for top-down influences on perception. In doing so, we discuss the effects of language on perception in the context of broader, general cognitive and neural principles. Finally, we propose a way forward in the study of linguistic influences on perception. -
Slonimska, A., & Özyürek, A. (2025). Methods to study evolution of iconicity in sign languages. In L. Raviv, & C. Boeckx (Eds.), The Oxford handbook of approaches to language evolution (pp. 177-194). Oxford: Oxford University Press.
Abstract
Sign languages—the conventional languages of deaf communities—have been considered to provide a window into answering some questions regarding language emergence and evolution. In particular, iconicity, defined as the ‘existence of a structure-preserving mapping between mental models of linguistic form and meaning’, is generally regarded as a precursor to the arbitrary and segmental categorical structures found in spoken languages. However, iconic structures are omnipresent in sign languages at all levels of linguistic organization. Thus, there is a necessity for a more nuanced understanding of iconicity and its trajectory in language evolution. In this chapter, we outline different quantitative and qualitative methods to study iconicity and how one can operationalize them at lexical and discourse levels to investigate the role of iconicity in the evolution of sign languages. -
Soberanes, M., Pérez-Ramírez, C. A., & Assaneo, M. F. (2025). Insights into the effect of general attentional state, coarticulation, and primed speech rate in phoneme production time. Journal of Speech, Language, and Hearing Research, 68(4), 1773-1783. doi:10.1044/2025_JSLHR-24-00595.
Abstract
Purpose:
This study aimed to identify how a set of predefined factors modulates phoneme articulation time within a speaker.
Method:
We used a custom in-lab system that records lip muscle activity through electromyography signals, aligned with the produced speech, to measure phoneme articulation time. Twenty Spanish-speaking participants (12 females) were evaluated while producing sequences of a consonant–vowel syllable, with each sequence consisting of repeated articulations of either /pa/ or /pu/. Before starting the sequences, participants underwent a priming step with either a fast or slow speech rate. Additionally, the general attentional state level was assessed at the beginning, middle, and end of the protocol. To analyze the variability in the duration of /p/ and vowel articulation, we fitted individual linear mixed-models considering three factors: general attentional state level, priming rate, and coarticulation effects (for /p/, i.e., followed by /a/ or /u/) or phoneme identity (for vowels, i.e., being /a/ or /u/).
Results:
We found that the level of general attentional state positively correlated with production time for both the consonant /p/ and the vowels. Additionally, /p/ production was influenced by the nature of the following vowel (i.e., coarticulation effects), while vowel production time was affected by the primed speech rate.
Conclusions:
Phoneme duration appears to be influenced by both stable, speaker-specific characteristics (idiosyncratic traits) and internal, state-dependent factors related to the speaker's condition at the time of speech production. While some factors affect both consonants and vowels, others specifically modify only one of these types.Additional information
supplemental material -
Soderstrom, M., Rocha-Hidalgo, J., Munoz, L. E., Bochynska, A., Werker, J. F., Skarabela, B., Seidl, A., Ryjova, Y., Rennels, J. L., Potter, C. E., Paulus, M., Ota, M., Olesen, N. M., Nave, K. M., Mayor, J., Martin, A., Machon, L. C., Lew-Williams, C., Ko, E.-S., Kim, H., Kartushina, N., Kammermeier, M., Jessop, A., Hay, J. F., Hannon, E. E., Hamlin, J. K., Havron, N., Gonzalez-Gomez, N., Gampe, A., Fritzsche, T., Frank, M. C., Durrant, S., Davies, C., Cashon, C., Byers-Heinlein, K., Black, A. K., Bergmann, C., Anderson, L., Alshakhori, M. K., Al-Hoorie, A. H., & Tsui, A. S. M. (2025). Testing the relationship between preferences for infant-directed speech and vocabulary development: A multi-lab study. Journal of Child Language, 52(5), 984-1009. doi:10.1017/S0305000924000254.
Abstract
From early on, infants show a preference for infant-directed speech (IDS) over adult-directed speech (ADS), and exposure to IDS has been correlated with language outcome measures such as vocabulary. The present multi-laboratory study explores this issue by investigating whether there is a link between early preference for IDS and later vocabulary size. Infants’ preference for IDS was tested as part of the ManyBabies 1 project, and follow-up CDI data were collected from a subsample of this dataset at 18 and 24 months. A total of 341 (18 months) and 327 (24 months) infants were tested across 21 laboratories. In neither preregistered analyses with North American and UK English, nor exploratory analyses with a larger sample did we find evidence for a relation between IDS preference and later vocabulary. We discuss implications of this finding in light of recent work suggesting that IDS preference measured in the laboratory has low test-retest reliability.Additional information
supplementary material -
Sóskuthy, M., Dingemanse, M., Winter, B., & Perlman, M. (2025). Reply to: Not just the alveolar trill, but all “r-like” sounds are associated with roughness across languages, pointing to a more general link between sound and touch. Scientific Reports, 15: 13001. doi:10.1038/s41598-025-94854-w.
-
Sotiropoulos, S. N., Thiebaut de Schotten, M., Haber, S. N., & Forkel, S. J. (2025). Cross-species neuroanatomy in primates using tractography. Brain Structure & Function, 230: 75. doi:10.1007/s00429-025-02914-8.
Abstract
Due to their integrative role in brain function, long-range white matter connections exhibit high individual variability, giving rise to personalised brain circuits. This neurovariability is more evident in the connection patterns of brain areas that have evolved more recently. Diffusion MRI tractography allows unique opportunities for comparative neuroanatomy across species to study evolution and provide unique insights into the phylogeny of brain networks, which we overview in this note, inspired by discussions at the International Society for Tractography (IST) retreat. -
Spychalska, M., Haase, V., & Werning, M. (2025). To predict or not to predict: The role of context constraint and truth-value in negation processing. Neuropsychologia, 216: 109167. doi:10.1016/j.neuropsychologia.2025.109167.
Abstract
Studies on negation processing frequently report a polarity-by-truth interaction: False affirmative sentences usually show longer response times and larger N400 amplitudes compared to true affirmative sentences, whereas for negative sentences the effect of truth-value is typically reversed. This interaction has repeatedly been linked to factors such as lexical associations, predictability, or to the need of constructing two subsequent mental representations during the comprehension of negative sentences. In a series of ERP experiments using a picture-sentence verification paradigm, we investigated how sentence polarity, truth-value and predictability interact during sentence processing. Predictability was manipulated by varying the number of alternative sentence continuations provided by the context, similarly for both sentence polarities. For both affirmative and negative sentences, true sentences elicited reduced N400 amplitudes in strongly constraining contexts—where a specific continuation was highly predictable—compared to weakly constraining contexts, where no clear prediction could be made. For false sentences, the effect of context was reversed for both sentence polarities. Crucially, the effect of Truth was dependent on predictability rather than sentence polarity: Both affirmative and negative sentences showed the same direction of the effect of Truth, namely, larger N400s for false rather than true sentences in the strongly constraining context, and the opposite pattern in the weakly constraining context, although the size of these effects differed across the two polarities. In addition, we observe a long-lasting positivity effect for negation, in both context conditions, for both truth-values and across all five experiments. We interpret this effect as reflecting inhibitory mechanisms recruited during negation processing. -
Stivers, T., & Rossi, G. (2025). Finding codability: Ways to code and quantify interaction for Conversation Analysts. Research on Language and Social Interaction, 58(3), 240-257. doi:10.1080/08351813.2025.2528491.
Abstract
Coding social interaction has become increasingly attractive for conversation analysts interested in mixed-methods research as a way to demonstrate the robustness of qualitative findings, test relationships between interactional and exogenous variables, and reach a wider audience. However, coding is valuable to conversation analysts only when it is done in a way that attends to participants’ orientations to the phenomenon of study. The puzzle then is how to turn the messy richness of conversational data into codes that are interactionally meaningful and valid. In this article, we draw on the existing literature and our own past projects to discuss opportunities and challenges involved in coding social interaction, with an emphasis on three main aspects of the process: constraining a phenomenon by sequential and formal criteria; transforming behavior into variables; and identifying social actions. Data are in English and Italian. -
Sümer, B., & Özyürek, A. (2025). Action bias in describing object locations by signing children. Sign Language and Linguistics. Advance online publication. doi:10.1075/sll.24008.sum.
Abstract
This study investigates the role of action bias in the acquisition of classifier constructions by deaf children acquiring Turkish Sign Language (TİD). While classifier handshapes are morphologically complex and iconic, deaf children (aged 7–9) were found to prefer handling classifiers (reflecting the actions performed by agents) more than signing adults, even in contexts requiring entity classifiers (reflecting the visual properties of their referents). The findings reveal that children’s frequent use of action-based lexical signs for nouns influenced their classifier preferences, suggesting a cognitive bias toward motoric representations. Furthermore, our results suggest the use of handling classifiers in intransitive contexts — even by adult signers — thus indicating a new type of variability in classifier use, which has not been reported for other sign languages before. These results provide new insights into how iconicity and lexical context shape the developmental trajectory of classifier constructions in sign language acquisition. -
Tatsumi, T., & Pine, J. (2025). Shifting toward progressive and balanced interaction: A longitudinal corpus study of children’s responses to Who-questions in Japanese. Journal of Child Language. Advance online publication. doi:10.1017/S0305000925000029.
Abstract
Children’s speech becomes longer and more complex as they develop, but the reasons for this have been insufficiently studied. This study examines how changing linguistic choices in children are linked to interactive factors by analysing Who-question sequences in Japanese child–caregiver conversations. The interactive factors in focus are progressivity and balanced joint activity, which are core aspects of conversational interaction. Our analysis reveals that as children respond to Who-questions, their responses grow in length and multifunctionality. This growth is positively associated with progressivity, namely a quicker completion of the question sequence, and reduced functional load in the interlocutor’s contributions, resulting in more balanced joint activity. These findings suggest that children adapt their linguistic choices by observing and aligning them with their interactive goals in conversational sequences. -
Temiz, G., Bağçeci, İ., Günhan Şenol, N. E., & Bulut, T. (2025). No evidence for dissociation of Turkish nouns and verbs in Broca's and Wernicke's areas: A transcranial magnetic stimulation study. Journal of Neurolinguistics, 75: 101260. doi:10.1016/j.jneuroling.2025.101260.
Abstract
It is not clear whether the grammatical distinction between nouns and verbs serves as an organizational principle for representation of the lexicon in the brain, or whether semantic differences between the two categories such as imageability account for any cortical segregation between them. In this study, we used repetitive transcranial magnetic stimulation (rTMS) and lexical decision tasks to test whether Broca's area would be associated with verbs and Wernicke's area with nouns, and whether imageability and lexical status (real words versus pseudowords) would modulate representation of nouns and verbs in Broca's area and Wernicke's area. We assumed that if nouns and verbs are dissociated in these regions then their suppression would lead to a selective slowdown in lexical decision times for one or the other word category, which may be modulated by imageability and lexical status. On two different days, Broca's area and Wernicke's area were suppressed using low-frequency rTMS, and lexical decision times on Turkish nouns and verbs were collected before and immediately after the stimulation sessions. Using linear mixed-effects models with item- and trial-level predictors and covariates (imageability, lemma frequency, length in letters and presentation order), we failed to find any evidence for dissociation of nouns and verbs in Broca's area and Wernicke's area, or for an effect of imageability and lexical status on such purported dissociation. The analyses revealed a significant interaction between stimulation session and lexical status (real words versus pseudowords) in Broca's area, but not in Wernicke's area, implicating Broca's area with real words more than pseudowords. In addition, several behavioral effects were observed including the word frequency effect (faster RTs for frequent than infrequent words), word superiority effect (faster RTs for real words than pseudowords) and word category effect (faster RTs for nouns than verbs). In conclusion, our findings on Turkish nouns and verbs do not provide any evidence that grammatical category is a lexical organizational principle in Broca's or Wernicke's areas.Additional information
supplementary material -
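As an illustration of the kind of analysis described in the Temiz et al. entry above, a linear mixed-effects model of lexical decision times with trial- and item-level predictors can be specified as below. This is a hypothetical sketch using synthetic data and invented column names, not the authors' analysis code.

    # Hypothetical sketch: linear mixed-effects model of (log) lexical decision RTs
    # with random intercepts per participant. Synthetic data; not the authors' code.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    n_participants, n_trials = 20, 80
    n = n_participants * n_trials
    df = pd.DataFrame({
        "participant": np.repeat(np.arange(n_participants), n_trials),
        "session": np.tile(np.repeat(["pre", "post"], n_trials // 2), n_participants),
        "word_category": rng.choice(["noun", "verb"], n),
        "imageability": rng.normal(size=n),
        "lemma_frequency": rng.normal(size=n),
    })
    # Simulated outcome: small verb slowdown plus noise (purely illustrative).
    df["log_rt"] = 6.5 + 0.05 * (df["word_category"] == "verb") + rng.normal(0, 0.2, n)

    # Fixed effects: session x word category plus item-level covariates;
    # random intercepts grouped by participant.
    model = smf.mixedlm(
        "log_rt ~ session * word_category + imageability + lemma_frequency",
        data=df,
        groups=df["participant"],
    )
    print(model.fit().summary())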
Ter Bekke, M., Drijvers, L., & Holler, J. (2025). Co-speech hand gestures are used to predict upcoming meaning. Psychological Science, 36(4), 237-248. doi:10.1177/09567976251331041.
Abstract
In face-to-face conversation, people use speech and gesture to convey meaning. Seeing gestures alongside speech facilitates comprehenders’ language processing, but crucially, the mechanisms underlying this facilitation remain unclear. We investigated whether comprehenders use the semantic information in gestures, typically preceding related speech, to predict upcoming meaning. Dutch adults listened to questions asked by a virtual avatar. Questions were accompanied by an iconic gesture (e.g., typing) or meaningless control movement (e.g., arm scratch) followed by a short pause and target word (e.g., “type”). A Cloze experiment showed that gestures improved explicit predictions of upcoming target words. Moreover, an EEG experiment showed that gestures reduced alpha and beta power during the pause, indicating anticipation, and reduced N400 amplitudes, demonstrating facilitated semantic processing. Thus, comprehenders use iconic gestures to predict upcoming meaning. Theories of linguistic prediction should incorporate communicative bodily signals as predictive cues to capture how language is processed in face-to-face interaction.Additional information
supplementary material -
Ter Bekke, M. (2025). On how gestures facilitate prediction and fast responding during conversation. PhD Thesis, Radboud University Nijmegen, Nijmegen.
Additional information
full text via Radboud Repository -
Thothathiri, M., Kidd, E., & Rowland, C. F. (2025). The role of executive function in the processing and acquisition of syntax. Royal Society Open Science, 12: 201497. doi:10.1098/rsos.201497.
Abstract
Language acquisition is multifaceted, relying on cognitive and social abilities in addition to language-specific skills. We hypothesized that executive function (EF) may assist language development by enabling children to revise misinterpretations during online processing, encode language input more accurately and/or learn non-canonical sentence structures like the passive better over time. One hundred and twenty Dutch preschoolers each completed three sessions of testing (pre-test, exposure and post-test). During pre-test and post-test, we measured their comprehension of passive sentences and performance in three EF tasks. In the exposure session, we tracked children’s eye movements as they listened to passive (and other) sentences. Each child was also assessed for short-term memory and receptive language. Multiple regression evaluated the relationship between EF and online processing and longer-term learning. EF predicted online revision accuracy, while controlling for receptive language, prior passive knowledge and short-term memory, consistent with theories linking EF to the revision of misinterpretations. EF was also associated with longer-term learning, but the results could not disentangle EF from receptive language. These findings broadly support a role for EF in language acquisition, including a specific role in revision during sentence processing and potentially other roles that depend on reciprocal interaction between EF and receptive language. -
Tilston, O., Holler, J., & Bangerter, A. (2025). Opening social interactions: The coordination of approach, gaze, speech and handshakes during greetings. Cognitive Science, 49(2): e70049. doi:10.1111/cogs.70049.
Abstract
Despite the importance of greetings for opening social interactions, their multimodal coordination processes remain poorly understood. We used a naturalistic, lab-based setup where pairs of unacquainted participants approached and greeted each other while unaware their greeting behavior was studied. We measured the prevalence and time course of multimodal behaviors potentially culminating in a handshake, including motor behaviors (e.g., walking, standing up, hand movements like raise, grasp, and retraction), gaze patterns (using eye tracking glasses), and speech (close and distant verbal salutations). We further manipulated the visibility of partners’ eyes to test its effect on gaze. Our findings reveal that gaze to a partner's face increases over the course of a greeting, but is partly averted during approach and is influenced by the visibility of partners’ eyes. Gaze helps coordinate handshakes, by signaling intent and guiding the grasp. The timing of adjacency pairs in verbal salutations is comparable to the precision of floor transitions in the main body of conversations, and varies according to greeting phase, with distant salutation pair parts featuring more gaps and close salutation pair parts featuring more overlap. Gender composition and a range of multimodal behaviors affect whether pairs choose to shake hands or not. These findings fill several gaps in our understanding of greetings and provide avenues for future research, including advancements in social robotics and human-robot interaction. -
Tkalcec, A., Baldassarri, A., Junghans, A., Somasundaram, V., Menks, W. M., Fehlbaum, L. V., Borbàs, R., Raschle, N., Seeger‐Schneider, G., Jenny, B., Walitza, S., Cole, D. M., Sterzer, P., Santini, F., Herbrecht, E., Cubillo, A., & Stadler, C. (2025). Gaze behavior, facial emotion processing, and neural underpinnings: A comparison of adolescents with autism spectrum disorder and conduct disorder. The Journal of Child Psychology and Psychiatry. Advance online publication. doi:10.1111/jcpp.14172.
Abstract
Background
Facial emotion processing deficits and atypical eye gaze are often described in individuals with autism spectrum disorder (ASD) and those with conduct disorder (CD) and high callous unemotional (CU) traits. Yet, the underlying neural mechanisms of these deficits are still unclear. The aim of this study was to investigate if eye gaze can partially account for the differences in brain activation in youth with ASD, with CD, and typically developing youth (TD).
Methods
In total, 105 adolescent participants (CD: n = 39, ASD: n = 27, TD: n = 39; mean age = 15.59 years) underwent a brain functional imaging session including eye tracking during an implicit emotion processing task while parents/caregivers completed questionnaires. Group differences in gaze behavior (number of fixations to the eye and mouth regions) for different facial expressions (neutral, fearful, angry) presented in the task were investigated using Bayesian analyses. Full-factorial models were used to investigate group differences in brain activation with and without including gaze behavior parameters and focusing on brain regions underlying facial emotion processing (insula, amygdala, and medial prefrontal cortex).
Results
Youth with ASD showed increased fixations on the mouth compared to TD and CD groups. CD participants with high CU traits tended to show fewer fixations to the eye region compared to TD for all emotions. Brain imaging results show higher right anterior insula activation in the ASD compared with the CD group when angry faces were presented. The inclusion of gaze behavior parameters in the model reduced the size of that cluster.
Conclusions
Differences in insula activation may be partially explained by gaze behavior. This implies an important role of gaze behavior in facial emotion processing, which should be considered for future brain imaging studies. In addition, our results suggest that targeting gaze behavior in interventions might be potentially beneficial for disorders showing impairments associated with the processing of emotional faces. The relation between eye gaze, CU traits, and neural function in different diagnoses needs further clarification in larger samples.
Additional information
supporting information -
Trujillo, J. P., & Holler, J. (2025). Multimodal information density is highest in question beginnings, and early entropy is associated with fewer but longer visual signals. Discourse Processes, 62(2), 69-88. doi:10.1080/0163853X.2024.2413314.
Abstract
When engaged in spoken conversation, speakers convey meaning using both speech and visual signals, such as facial expressions and manual gestures. An important question is how information is distributed in utterances during face-to-face interaction when information from visual signals is also present. In a corpus of casual Dutch face-to-face conversations, we focus on spoken questions in particular because they occur frequently, thus constituting core building blocks of conversation. We quantified information density (i.e. lexical entropy and surprisal) and the number and relative duration of facial and manual signals. We tested whether lexical information density or the number of visual signals differed between the first and last halves of questions, as well as whether the number of visual signals occurring in the less-predictable portion of a question was associated with the lexical information density of the same portion of the question in a systematic manner. We found that information density, as well as number of visual signals, were higher in the first half of questions, and specifically lexical entropy was associated with fewer, but longer visual signals. The multimodal front-loading of questions and the complementary distribution of visual signals and high entropy words in Dutch casual face-to-face conversations may have implications for the parallel processes of utterance comprehension and response planning during turn-taking.Additional information
supplemental material -
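To make the information-density measures mentioned in the Trujillo and Holler entry above concrete, the sketch below computes word-by-word surprisal from a tiny add-one-smoothed bigram model and compares the first and second halves of an utterance. The corpus, utterance, and smoothing choice are invented for illustration and are not taken from the authors' pipeline.

    # Illustrative sketch: word-level surprisal from an add-alpha smoothed bigram model,
    # averaged over the first vs. second half of an utterance. Toy data; not the authors' code.
    import math
    from collections import Counter

    corpus = [["where", "did", "you", "buy", "that", "coat"],
              ["where", "do", "you", "live"],
              ["did", "you", "see", "that"]]

    unigrams = Counter(w for utt in corpus for w in utt)
    bigrams = Counter((a, b) for utt in corpus for a, b in zip(utt, utt[1:]))
    vocab_size = len(unigrams)

    def surprisal(prev, word, alpha=1.0):
        # Smoothed bigram probability, converted to bits.
        p = (bigrams[(prev, word)] + alpha) / (unigrams[prev] + alpha * vocab_size)
        return -math.log2(p)

    utt = ["where", "did", "you", "see", "that", "coat"]
    values = [surprisal(a, b) for a, b in zip(utt, utt[1:])]
    half = len(values) // 2
    print("mean surprisal, first half: ", sum(values[:half]) / half)
    print("mean surprisal, second half:", sum(values[half:]) / (len(values) - half))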
Trujillo, J. P., Dyer, R. M. K., & Holler, J. (2025). Dyadic differences in empathy scores are associated with kinematic similarity during conversational question-answer pairs. Discourse Processes, 62(3), 195-213. doi:10.1080/0163853X.2025.2467605.
Abstract
During conversation, speakers coordinate and synergize their behaviors at multiple levels, and in different ways. The extent to which individuals converge or diverge in their behaviors during interaction may relate to interpersonal differences relevant to social interaction, such as empathy as measured by the empathy quotient (EQ). An association between interpersonal difference in empathy and interpersonal entrainment could help to throw light on how interlocutor characteristics influence interpersonal entrainment. We investigated this possibility in a corpus of unconstrained conversation between dyads. We used dynamic time warping to quantify entrainment between interlocutors of head motion, hand motion, and maximum speech f0 during question–response sequences. We additionally calculated interlocutor differences in EQ scores. We found that, for both head and hand motion, greater difference in EQ was associated with higher entrainment. Thus, we consider that people who are dissimilar in EQ may need to “ground” their interaction with low-level movement entrainment. There was no significant relationship between f0 entrainment and EQ score differences. -
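The dynamic time warping measure used in the Trujillo, Dyer, and Holler entry above to quantify kinematic entrainment can be sketched as follows. This is a generic textbook implementation with invented example signals, and the path-length normalization is one common choice that may differ from the authors' exact procedure.

    # Illustrative sketch (not the authors' code): dynamic time warping (DTW) cost
    # between two 1-D movement time series, e.g. head-motion speed of two interlocutors.
    # Lower normalized cost indicates higher kinematic similarity (entrainment).
    import numpy as np

    def dtw_distance(x, y):
        """Classic DTW with a full cost matrix; O(len(x) * len(y))."""
        n, m = len(x), len(y)
        cost = np.full((n + 1, m + 1), np.inf)
        cost[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                d = abs(x[i - 1] - y[j - 1])              # local distance
                cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                     cost[i, j - 1],      # deletion
                                     cost[i - 1, j - 1])  # match
        return cost[n, m] / (n + m)                       # one common normalization choice

    # Hypothetical example: two speakers' head-speed traces (arbitrary units).
    speaker_a = np.sin(np.linspace(0, 3 * np.pi, 120))
    speaker_b = np.sin(np.linspace(0, 3 * np.pi, 150) + 0.3)
    print(f"Normalized DTW cost: {dtw_distance(speaker_a, speaker_b):.3f}")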
Tsomokos, D. I., & Raviv, L. (2025). A bidirectional association between language development and prosocial behaviour in childhood: Evidence from a longitudinal birth cohort in the United Kingdom. Developmental Psychology, 61(2), 336-350. doi:10.1037/dev0001875.
Abstract
This study investigated a developmental cascade between prosocial and linguistic abilities in a large sample (N = 11,051) from the general youth population in the United Kingdom (50% female, 46% living in disadvantaged neighborhoods, 13% non-White). Cross-lagged panel models showed that verbal ability at age 3 predicted prosociality at age 7, which in turn predicted verbal ability at age 11. Latent growth models also showed that gains in prosociality between 3 and 5 years were associated with increased verbal ability between 5 and 11 years and vice versa. Theory of mind and social competence at age 5 mediated the association between early childhood prosociality and late childhood verbal ability. These results remained robust even after controlling for socioeconomic factors, maternal mental health, parenting microclimate in the home environment, and individual characteristics (sex, ethnicity, and special educational needs). The findings suggest that language skills could be boosted through mentalizing activities and prosocial behaviors. -
Uluşahin, O. (2025). Voices in our heads: Talker-specific listening and speaking. PhD Thesis, Radboud University Nijmegen, Nijmegen.
Additional information
link to Radboud Repository -
Ünal, E., Kırbaşoğlu, K., Karadöller, D. Z., Sumer, B., & Özyürek, A. (2025). Gesture reduces mapping difficulties in the development of spatial language depending on the complexity of spatial relations. Cognitive Science, 49(2): e70046. doi:10.1111/cogs.70046.
Abstract
In spoken languages, children acquire locative terms in a cross-linguistically stable order. Terms similar in meaning to in and on emerge earlier than those similar to front and behind, followed by left and right. This order has been attributed to the complexity of the relations expressed by different locative terms. An additional possibility is that children may be delayed in expressing certain spatial meanings partly due to difficulties in discovering the mappings between locative terms in speech and spatial relation they express. We investigate cognitive and mapping difficulties in the domain of spatial language by comparing how children map spatial meanings onto speech versus visually motivated forms in co-speech gesture across different spatial relations. Twenty-four 8-year-old and 23 adult native Turkish-speakers described four-picture displays where the target picture depicted in-on, front-behind, or left-right relations between objects. As the complexity of spatial relations increased, children were more likely to rely on gestures as opposed to speech to informatively express the spatial relation. Adults overwhelmingly relied on speech to informatively express the spatial relation, and this did not change across the complexity of spatial relations. Nevertheless, even when spatial expressions in both speech and co-speech gesture were considered, children lagged behind adults when expressing the most complex left-right relations. These findings suggest that cognitive development and mapping difficulties introduced by the modality of expressions interact in shaping the development of spatial language.Additional information
list of stimuli and descriptions -
Vágvölgy, R., Bergström, K., Bulajic, A., Rüsseler, J., Fernandes, T., Grosche, M., Klatte, M., Huettig, F., & Lachmann, T. (2025). The cognitive profile of adults with low literacy skills in alphabetic orthographies: A systematic review and comparison with developmental dyslexia. Educational Research Review, 46: 100659. doi:10.1016/j.edurev.2024.100659.
Abstract
Dealing with text is crucial in modern societies. However, not everyone acquires sufficient literacy skills during school education. This systematic review summarizes and synthesizes research on adults with low literacy skills (ALLS) in alphabetic writing systems, includes results from behavioral and neurobiological studies, and compares these findings with those on developmental dyslexia given that this developmental disorder is one possible explanation for low literacy skills in adulthood. Twenty-seven studies focusing on the cognitive profile of ALLS met the three predefined criteria of reading level, age, and education. Results showed that ALLS performed worse than literate adults in various tasks at skill and information processing level, and exhibited structural and functional differences at the neurobiological level. The cognitive profile of ALLS was closer to that of primary school children than of literate adults. However, relative to children, ALLS’ literacy skills relied less on phonological and more on orthographic strategies. A narrative comparison of results with meta-analyses on developmental dyslexia showed large, though not complete, overlap in the cognitive profiles. The present results help to better understand the literacy skills and reading-related cognitive functions of ALLS and may support the development of tailored interventions directed at the specific cognitive difficulties ALLS have.Additional information
supplementary file