Publications

  • He, J. (2023). Coordination of spoken language production and comprehension: How speech production is affected by irrelevant background speech. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Abbondanza, F., Dale, P. S., Wang, C. A., Hayiou‐Thomas, M. E., Toseeb, U., Koomar, T. S., Wigg, K. G., Feng, Y., Price, K. M., Kerr, E. N., Guger, S. L., Lovett, M. W., Strug, L. J., Van Bergen, E., Dolan, C. V., Tomblin, J. B., Moll, K., Schulte‐Körne, G., Neuhoff, N., Warnke, A., Fisher, S. E., Barr, C. L., Michaelson, J. J., Boomsma, D. I., Snowling, M. J., Hulme, C., Whitehouse, A. J. O., Pennell, C. E., Newbury, D. F., Stein, J., Talcott, J. B., Bishop, D. V. M., & Paracchini, S. (2023). Language and reading impairments are associated with increased prevalence of non‐right‐handedness. Child Development, 94(4), 970-984. doi:10.1111/cdev.13914.

    Abstract

    Handedness has been studied for association with language-related disorders because of its link with language hemispheric dominance. No clear pattern has emerged, possibly because of small samples, publication bias, and heterogeneous criteria across studies. Non-right-handedness (NRH) frequency was assessed in N = 2503 cases with reading and/or language impairment and N = 4316 sex-matched controls identified from 10 distinct cohorts (age range 6–19 years old; European ethnicity) using a priori set criteria. A meta-analysis (Ncases = 1994) showed elevated NRH % in individuals with language/reading impairment compared with controls (OR = 1.21, CI = 1.06–1.39, p = .01). The association between reading/language impairments and NRH could result from shared pathways underlying brain lateralization, handedness, and cognitive functions.

    Additional information

    supplementary information
  • Agirrezabal, M., Paggio, P., Navarretta, C., & Jongejan, B. (2023). Multimodal detection and classification of head movements in face-to-face conversations: Exploring models, features and their interaction. In W. Pouw, J. Trujillo, H. R. Bosker, L. Drijvers, M. Hoetjes, J. Holler, S. Kadava, L. Van Maastricht, E. Mamus, & A. Ozyurek (Eds.), Gesture and Speech in Interaction (GeSpIn) Conference. doi:10.17617/2.3527200.

    Abstract

    In this work we perform multimodal detection and classification of head movements from face-to-face video conversation data. We have experimented with different models and feature sets, and provide some insight on the effect of independent features, but also on how their interaction can enhance a head movement classifier. The features used include nose, neck and mid-hip position coordinates and their derivatives, together with acoustic features, namely the intensity and pitch of the speaker in focus. Results show that when input features are sufficiently processed by interacting with each other, a linear classifier can reach a similar performance to a more complex non-linear neural model with several hidden layers. Our best models achieve state-of-the-art performance in the detection task, measured by macro-averaged F1 score.
  • Ahn, D., & Ferreira, V. S. (2024). Shared vs separate structural representations: Evidence from cumulative cross-language structural priming. Quarterly Journal of Experimental Psychology, 77(1), 174-190. doi:10.1177/17470218231160942.

    Abstract

    How do bilingual speakers represent the information that guides the assembly of words into sentences for their two languages? The shared-syntax account argues that bilinguals have a single, shared representation of the sentence structures that exist in both languages. Structural priming has been shown to be equal within and across languages, providing support for the shared-syntax account. However, equivalent levels of structural priming within and across languages could be observed even if structural representations are separate and connected, due to frequent switches between languages, which is a property of standard structural priming paradigms. Here, we investigated whether cumulative structural priming (i.e., structural priming across blocks rather than trial-by-trial), which does not involve frequent switches between languages, also shows equivalent levels of structural priming within- and cross-languages. Mixed results point towards a possibility that cumulative structural priming can be more persistent within- compared to cross-languages, suggesting a separate-and-connected account of bilingual structural representations. We discuss these results in terms of the current literature on bilingual structural representations and highlight the value of diversity in paradigms and less-studied languages.
  • Ahn, D., Ferreira, V. S., & Gollan, T. H. (2024). Structural representation in the native language after extended second-language immersion: Evidence from acceptability judgment and memory-recall. Bilingualism: Language and Cognition. Advance online publication. doi:10.1017/S1366728923000950.

    Abstract

    Knowing the sentence structures (i.e., information that guides the assembly of words into sentences) is crucial in language knowledge. This knowledge must be stable for successful communication, but when learning another language that uses different structures, speakers must adjust their structural knowledge. Here, we examine how newly acquired second language (L2) knowledge influences first language (L1) structure knowledge. We compared two groups of Korean speakers: Korean-immersed speakers living in Korea (with little English exposure) versus English-immersed speakers who acquired English late and were living in the US (with more English exposure). We used acceptability judgment and sentence production tasks on Korean sentences in English and Korean word orders. Results suggest that acceptability and structural usage in L1 change after exposure to L2, but not in a way that matches L2 structures. Instead, L2 exposure might lead to increased difficulties in the selection and retrieval of word orders while using L1.
  • Alhama, R. G., Rowland, C. F., & Kidd, E. (2023). How does linguistic context influence word learning? Journal of Child Language, 50(6), 1374-1393. doi:10.1017/S0305000923000302.

    Abstract

    While there are well-known demonstrations that children can use distributional information to acquire multiple components of language, the underpinnings of these achievements are unclear. In the current paper, we investigate the potential pre-requisites for a distributional learning model that can explain how children learn their first words. We review existing literature and then present the results of a series of computational simulations with Vector Space Models, a type of distributional semantic model used in Computational Linguistics, which we evaluate against vocabulary acquisition data from children. We focus on nouns and verbs, and we find that: (i) a model with flexibility to adjust for the frequency of events provides a better fit to the human data, (ii) the influence of context words is very local, especially for nouns, and (iii) words that share more contexts with other words are harder to learn.
  • Anichini, M., de Reus, K., Hersh, T. A., Valente, D., Salazar-Casals, A., Berry, C., Keller, P. E., & Ravignani, A. (2023). Measuring rhythms of vocal interactions: A proof of principle in harbour seal pups. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 378(1875): 20210477. doi:10.1098/rstb.2021.0477.

    Abstract

    Rhythmic patterns in interactive contexts characterize human behaviours such as conversational turn-taking. These timed patterns are also present in other animals, and often described as rhythm. Understanding fine-grained temporal adjustments in interaction requires complementary quantitative methodologies. Here, we showcase how vocal interactive rhythmicity in a non-human animal can be quantified using a multi-method approach. We record vocal interactions in harbour seal pups (Phoca vitulina) under controlled conditions. We analyse these data by combining analytical approaches, namely categorical rhythm analysis, circular statistics and time series analyses. We test whether pups' vocal rhythmicity varies across behavioural contexts depending on the absence or presence of a calling partner. Four research questions illustrate which analytical approaches are complementary versus orthogonal. For our data, circular statistics and categorical rhythms suggest that a calling partner affects a pup's call timing. Granger causality suggests that pups predictively adjust their call timing when interacting with a real partner. Lastly, the ADaptation and Anticipation Model estimates statistical parameters for a potential mechanism of temporal adaptation and anticipation. Our complementary analytical approach constitutes a proof of concept; it shows the feasibility of applying typically unrelated techniques to seals to quantify vocal rhythmic interactivity across behavioural contexts.

    Additional information

    supplemental information
  • Anijs, M. (2024). Networks within networks: Probing the neuronal and molecular underpinnings of language-related disorders using human cell models. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Arana, S., Pesnot Lerousseau, J., & Hagoort, P. (2023). Deep learning models to study sentence comprehension in the human brain. Language, Cognition and Neuroscience. Advance online publication. doi:10.1080/23273798.2023.2198245.

    Abstract

    Recent artificial neural networks that process natural language achieve unprecedented performance in tasks requiring sentence-level understanding. As such, they could be interesting models of the integration of linguistic information in the human brain. We review works that compare these artificial language models with human brain activity and we assess the extent to which this approach has improved our understanding of the neural processes involved in natural language comprehension. Two main results emerge. First, the neural representation of word meaning aligns with the context-dependent, dense word vectors used by the artificial neural networks. Second, the processing hierarchy that emerges within artificial neural networks broadly matches the brain, but is surprisingly inconsistent across studies. We discuss current challenges in establishing artificial neural networks as process models of natural language comprehension. We suggest exploiting the highly structured representational geometry of artificial neural networks when mapping representations to brain data.

    Additional information

    link to preprint
  • Arana, S., Hagoort, P., Schoffelen, J.-M., & Rabovsky, M. (2024). Perceived similarity as a window into representations of integrated sentence meaning. Behavior Research Methods, 56(3), 2675-2691. doi:10.3758/s13428-023-02129-x.

    Abstract

    When perceiving the world around us, we are constantly integrating pieces of information. The integrated experience consists of more than just the sum of its parts. For example, visual scenes are defined by a collection of objects as well as the spatial relations amongst them, and sentence meaning is computed based on individual word semantics but also syntactic configuration. Having quantitative models of such integrated representations can help evaluate cognitive models of both language and scene perception. Here, we focus on language, and use a behavioral measure of perceived similarity as an approximation of integrated meaning representations. We collected similarity judgments from 200 subjects who rated nouns or transitive sentences in an online multiple arrangement task. We find that perceived similarity between sentences is most strongly modulated by the semantic action category of the main verb. In addition, we show how non-negative matrix factorization of similarity judgment data can reveal multiple underlying dimensions reflecting both semantic as well as relational role information. Finally, we provide an example of how similarity judgments on sentence stimuli can serve as a point of comparison for artificial neural network (ANN) models by comparing our behavioral data against sentence similarity extracted from three state-of-the-art ANNs. Overall, our method combining the multiple arrangement task on sentence stimuli with matrix factorization can capture relational information emerging from the integration of multiple words in a sentence, even in the presence of a strong focus on the verb.
  • Araujo, S., Narang, V., Misra, D., Lohagun, N., Khan, O., Singh, A., Mishra, R. K., Hervais-Adelman, A., & Huettig, F. (2023). A literacy-related color-specific deficit in rapid automatized naming: Evidence from neurotypical completely illiterate and literate adults. Journal of Experimental Psychology: General, 152(8), 2403-2409. doi:10.1037/xge0001376.

    Abstract

    There is a robust positive relationship between reading skills and the time to name aloud an array of letters, digits, objects, or colors as quickly as possible. A convincing and complete explanation for the direction and locus of this association remains, however, elusive. In this study we investigated rapid automatized naming (RAN) of everyday objects and basic color patches in neurotypical illiterate and literate adults. Literacy acquisition and education enhanced RAN performance for both conceptual categories, but this advantage was much larger for (abstract) colors than for everyday objects. This result suggests that (i) literacy/education may be causal for the serial rapid naming ability of non-alphanumeric items, and (ii) differences in the lexical quality of conceptual representations can underlie the reading-related differential RAN performance.

    Additional information

    supplementary text
  • Aravena-Bravo, P., Cristia, A., Garcia, R., Kotera, H., Nicolas, R. K., Laranjo, R., Arokoyo, B. E., Benavides-Varela, S., Benders, T., Boll-Avetisyan, N., Cychosz, M., Ben, R. D., Diop, Y., Durán-Urzúa, C., Havron, N., Manalili, M., Narasimhan, B., Omane, P. O., Rowland, C. F., Kolberg, L. S., Ssemata, A. S., Styles, S. J., Troncoso-Acosta, B., & Woon, F. T. (2023). Towards diversifying early language development research: The first truly global international summer/winter school on language acquisition (/L+/) 2021. Journal of Cognition and Development. Advance online publication. doi:10.1080/15248372.2023.2231083.

    Abstract

    With a long-term aim of empowering researchers everywhere to contribute to work on language development, we organized the First Truly Global /L+/ International Summer/Winter School on Language Acquisition, a free 5-day virtual school for early career researchers. In this paper, we describe the school, our experience organizing it, and lessons learned. The school had a diverse organizer team, composed of 26 researchers (17 from underrepresented areas: Subsaharan Africa, South and Southeast Asia, and Central and South America); and a diverse volunteer team, with a total of 95 volunteers from 35 different countries, nearly half from underrepresented areas. This helped world-wide promotion of the school, leading to 958 registrations from 88 different countries, with 300 registrants (based in 63 countries, 80% from underrepresented areas) selected to participate in the synchronous aspects of the event. The school employed asynchronous elements (pre-recorded lectures, which were closed-captioned) and synchronous elements (e.g., discussions to place the recorded lectures into participants’ context; networking events) across three time zones. A post-school questionnaire revealed that 99% of participants enjoyed taking part in the school. Notwithstanding these positive quantitative outcomes, qualitative comments suggested we fell short in several areas, including the geographic diversity among lecturers and greater customization of contents to the participants’ contexts. Although much remains to be done to promote inclusivity in linguistic research, we hope our school will contribute to empowering researchers to investigate and publish on language acquisition in their home languages, to eventually result in more representative theories and empirical generalizations.

    Additional information

    https://osf.io/fbnda
  • Assmann, M., Büring, D., Jordanoska, I., & Prüller, M. (2023). Towards a theory of morphosyntactic focus marking. Natural Language & Linguistic Theory. doi:10.1007/s11049-023-09567-4.

    Abstract

    Based on six detailed case studies of languages in which focus is marked morphosyntactically, we propose a novel formal theory of focus marking, which can capture these as well as the familiar English-type prosodic focus marking. Special attention is paid to the patterns of focus syncretism, that is, when different size and/or location of focus are indistinguishably realized by the same form.

    The key ingredients to our approach are that complex constituents (not just words) may be directly focally marked, and that the choice of focal marking is governed by blocking.
  • Barak, L., Harmon, Z., Feldman, N. H., Edwards, J., & Shafto, P. (2023). When children's production deviates from observed input: Modeling the variable production of the English past tense. Cognitive Science, 47(8): e13328. doi:10.1111/cogs.13328.

    Abstract

    As children gradually master grammatical rules, they often go through a period of producing form-meaning associations that were not observed in the input. For example, 2- to 3-year-old English-learning children use the bare form of verbs in settings that require obligatory past tense meaning while already starting to produce the grammatical –ed inflection. While many studies have focused on overgeneralization errors, fewer studies have attempted to explain the root of this earlier stage of rule acquisition. In this work, we use computational modeling to replicate children's production behavior prior to the generalization of past tense production in English. We illustrate how seemingly erroneous productions emerge in a model, without being licensed in the grammar and despite the model aiming at conforming to grammatical forms. Our results show that bare form productions stem from a tension between two factors: (1) trying to produce a less frequent meaning (the past tense) and (2) being unable to restrict the production of frequent forms (the bare form) as learning progresses. Like children, our model goes through a stage of bare form production and then converges on adult-like production of the regular past tense, showing that these different stages can be accounted for through a single learning mechanism.
  • Barendse, M. T., & Rosseel, Y. (2023). Multilevel SEM with random slopes in discrete data using the pairwise maximum likelihood. British Journal of Mathematical and Statistical Psychology, 76(2), 327-352. doi:10.1111/bmsp.12294.

    Abstract

    Pairwise maximum likelihood (PML) estimation is a promising method for multilevel models with discrete responses. Multilevel models take into account that units within a cluster tend to be more alike than units from different clusters. The pairwise likelihood is then obtained as the product of bivariate likelihoods for all within-cluster pairs of units and items. In this study, we investigate the PML estimation method with computationally intensive multilevel random intercept and random slope structural equation models (SEM) in discrete data. In pursuing this, we first reconsider the general ‘wide format’ (WF) approach for SEM and then extend the WF approach with random slopes. In a small simulation study, we determine the accuracy and efficiency of the PML estimation method by varying the sample size (250, 500, 1000, 2000), the response scale (two-point, four-point), and the data-generating model (a mediation model with three random slopes; a factor model with one and two random slopes). Overall, results show that the PML estimation method is capable of estimating computationally intensive random intercept and random slope multilevel models in the SEM framework with discrete data and many (six or more) latent variables with satisfactory accuracy and efficiency. However, the condition with 250 clusters combined with a two-point response scale shows more bias.

    Additional information

    figures
  • Barrios, A., & Garcia, R. (2023). Filipino children’s acquisition of nominal and verbal markers in L1 and L2 Tagalog. Languages, 8(3): 188. doi:10.3390/languages8030188.

    Abstract

    Western Austronesian languages, like Tagalog, have unique, complex voice systems that require the correct combinations of verbal and nominal markers, raising many questions about their learnability. In this article, we review the experimental and observational studies on both the L1 and L2 acquisition of Tagalog. The reviewed studies reveal error patterns that reflect the complex nature of the Tagalog voice system. The main goal of the article is to present a full picture of commission errors in young Filipino children’s expression of causation and agency in Tagalog by describing patterns of nominal marking and voice marking in L1 Tagalog and L2 Tagalog. It also aims to provide an overview of existing research, as well as characterize research on nominal and verbal acquisition, specifically in terms of research problems, data sources, and methodology. Additionally, we discuss the research gaps in at least fifty years’ worth of studies in the area from the 1960’s to the present, as well as ideas for future research to advance the state of the art.
  • Bartolozzi, F. (2023). Repetita Iuvant? Studies on the role of repetition priming as a supportive mechanism during conversation. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Bastiaanse, R., & Ohlerth, A.-K. (2023). Presurgical language mapping: What are we testing? Journal of Personalized Medicine, 13: 376. doi:10.3390/jpm13030376.

    Abstract

    Gliomas are brain tumors infiltrating healthy cortical and subcortical areas that may host cognitive functions, such as language. If these areas are damaged during surgery, the patient might develop word retrieval or articulation problems. For this reason, many glioma patients are operated on awake, while their language functions are tested. For this practice, quite simple tests are used, for example, picture naming. This paper describes the process and timeline of picture naming (noun retrieval) and shows the timeline and localization of the distinguished stages. This is relevant information for presurgical language testing with navigated Transcranial Magnetic Stimulation (nTMS). This novel technique allows us to identify cortical areas involved in the language production process and, thus, guides the neurosurgeon in how to approach and remove the tumor. We argue that not only nouns, but also verbs should be tested, since sentences are built around verbs, and sentences are what we use in daily life. This approach’s relevance is illustrated by two case studies of glioma patients.
  • Bauer, B. L. M. (2023). Multiplication, addition, and subtraction in numerals: Formal variation in Latin’s decads+ from an Indo-European perspective. Journal of Latin Linguistics, 22(1), 1-56. doi:10.1515/joll-2023-2001.

    Abstract

    While formal variation in Latin’s numerals is generally acknowledged, little is known about (relative) incidence, distribution, context, or linguistic productivity. Addressing this lacuna, this article examines “decads+” in Latin, which convey the numbers between the full decads: the teens (‘eleven’ through ‘nineteen’) as well as the numerals between the higher decads starting at ‘twenty-one’ through ‘ninety-nine’. Latin’s decads+ are compounds and prone to variation. The data, which are drawn from a variety of sources, reveal (a) substantial formal variation in Latin, both internally and typologically; (b) co-existence of several types of formation; (c) productivity of potential borrowings; (d) resilience of early formations; (e) patterns in structure and incidence that anticipate the Romance numerals; and (f) historical trends. From a typological and general linguistic perspective as well, Latin’s decads+ are most relevant because their formal variation involves sequence, connector, and arithmetical operations and because their historical depth shows a gradual shift away from widespread formal variation, eventually resulting in the relatively rigid system found in Romance. Moreover, the combined system attested in decads+ in Latin – based on a combination of inherited, innovative and borrowed patterns and reflecting different stages of development – presents a number of typological inconsistencies that require further assessment.

  • Bayram, F., Kubota, M., & Soares, S. M. P. (2024). Editorial: The next phase in heritage language studies: methodological considerations and advancements. Frontiers in Psychology, 15: 1392474. doi:10.3389/fpsyg.2024.1392474.
  • Bazzi, L., Brouwer, S., Khan, Z. N., Verdonschot, R. G., & Foucart, A. (2024). War feels less horrid in a foreign accent: Exploring the impact of the foreign accent on emotionality. Frontiers in Language Sciences, 3: 1357828. doi:10.3389/flang.2024.1357828.

    Abstract

    Introduction: The processing of a foreign accent is known to increase cognitive load for the native listener, establish psychological distance with the foreign-accented speaker, and even influence decision-making. Similarly, research in the field of emotional processing indicates that a foreign accent may impact the native listener's emotionality. Taking these aspects into consideration, the current study aimed to confirm the hypothesis that a foreign accent, compared to a native accent, significantly affects the processing of affective-laden words.

    Methods: In order to test this hypothesis, native Spanish speakers participated in an online experiment in which they rated on a Likert scale the valence and arousal of positive, neutral and negative words presented in native and foreign accents.

    Results: Results confirm a foreign accent effect on emotional processing, whereby positively valenced words are perceived as less positive and negatively valenced words as less negative when processed in a foreign accent compared to a native accent. Moreover, the arousal provoked by emotion words is lower when words are processed in a foreign than in a native accent.

    Discussion: We propose possible, not mutually exclusive, explanations for the effect based on linguistic fluency, language attitudes and the linguistic context of language acquisition. Although further research is needed to confirm them, these explanations may be relevant for models of language comprehension and language learning. The observation of a reduction in emotionality resulting from a foreign accent matters for society, as important decisions are made by representatives with diverse language and accent backgrounds. Our findings demonstrate that the choice of language, which entails speaking in a native or a foreign accent, can be crucial when discussing topics such as the consequences of wars, pandemics, or natural disasters on human beings.

    Additional information

    data sheet
  • Benetti, S., Ferrari, A., & Pavani, F. (2023). Multimodal processing in face-to-face interactions: A bridging link between psycholinguistics and sensory neuroscience. Frontiers in Human Neuroscience, 17: 1108354. doi:10.3389/fnhum.2023.1108354.

    Abstract

    In face-to-face communication, humans are faced with multiple layers of discontinuous multimodal signals, such as head, face, hand gestures, speech and non-speech sounds, which need to be interpreted as coherent and unified communicative actions. This implies a fundamental computational challenge: optimally binding only signals belonging to the same communicative action while segregating signals that are not connected by the communicative content. How do we achieve such an extraordinary feat, reliably, and efficiently? To address this question, we need to further move the study of human communication beyond speech-centred perspectives and promote a multimodal approach combined with interdisciplinary cooperation. Accordingly, we seek to reconcile two explanatory frameworks recently proposed in psycholinguistics and sensory neuroscience into a neurocognitive model of multimodal face-to-face communication. First, we introduce a psycholinguistic framework that characterises face-to-face communication at three parallel processing levels: multiplex signals, multimodal gestalts and multilevel predictions. Second, we consider the recent proposal of a lateral neural visual pathway specifically dedicated to the dynamic aspects of social perception and reconceive it from a multimodal perspective (“lateral processing pathway”). Third, we reconcile the two frameworks into a neurocognitive model that proposes how multiplex signals, multimodal gestalts, and multilevel predictions may be implemented along the lateral processing pathway. Finally, we advocate a multimodal and multidisciplinary research approach, combining state-of-the-art imaging techniques, computational modelling and artificial intelligence for future empirical testing of our model.
  • Bergelson, E., Soderstrom, M., Schwarz, I.-C., Rowland, C. F., Ramírez-Esparza, N., Rague Hamrick, L., Marklund, E., Kalashnikova, M., Guez, A., Casillas, M., Benetti, L., Van Alphen, P. M., & Cristia, A. (2023). Everyday language input and production in 1,001 children from six continents. Proceedings of the National Academy of Sciences of the United States of America, 120(52): 2300671120. doi:10.1073/pnas.2300671120.

    Abstract

    Language is a universal human ability, acquired readily by young children, who otherwise struggle with many basics of survival. And yet, language ability is variable across individuals. Naturalistic and experimental observations suggest that children’s linguistic skills vary with factors like socioeconomic status and children’s gender. But which factors really influence children’s day-to-day language use? Here, we leverage speech technology in a big-data approach to report on a unique cross-cultural and diverse data set: >2,500 d-long, child-centered audio-recordings of 1,001 2- to 48-mo-olds from 12 countries spanning six continents across urban, farmer-forager, and subsistence-farming contexts. As expected, age and language-relevant clinical risks and diagnoses predicted how much speech (and speech-like vocalization) children produced. Critically, so too did adult talk in children’s environments: Children who heard more talk from adults produced more speech. In contrast to previous conclusions based on more limited sampling methods and a different set of language proxies, socioeconomic status (operationalized as maternal education) was not significantly associated with children’s productions over the first 4 y of life, and neither were gender or multilingualism. These findings from large-scale naturalistic data advance our understanding of which factors are robust predictors of variability in the speech behaviors of young learners in a wide range of everyday contexts.
  • Bianco, R., Zuk, N. J., Bigand, F., Quarta, E., Grasso, S., Arnese, F., Ravignani, A., Battaglia-Mayer, A., & Novembre, G. (2024). Neural encoding of musical expectations in a non-human primate. Current Biology, 34(2), 444-450. doi:10.1016/j.cub.2023.12.019.

    Abstract

    The appreciation of music is a universal trait of humankind. Evidence supporting this notion includes the ubiquity of music across cultures and the natural predisposition toward music that humans display early in development. Are we musical animals because of species-specific predispositions? This question cannot be answered by relying on cross-cultural or developmental studies alone, as these cannot rule out enculturation. Instead, it calls for cross-species experiments testing whether homologous neural mechanisms underlying music perception are present in non-human primates. We present music to two rhesus monkeys, reared without musical exposure, while recording electroencephalography (EEG) and pupillometry. Monkeys exhibit higher engagement and neural encoding of expectations based on the previously seeded musical context when passively listening to real music as opposed to shuffled controls. We then compare human and monkey neural responses to the same stimuli and find a species-dependent contribution of two fundamental musical features (pitch and timing) in generating expectations: while timing- and pitch-based expectations are similarly weighted in humans, monkeys rely on timing rather than pitch. Together, these results shed light on the phylogeny of music perception. They highlight monkeys’ capacity for processing temporal structures beyond plain acoustic processing, and they identify a species-dependent contribution of time- and pitch-related features to the neural encoding of musical expectations.
  • Bignardi, G., Smit, D. J. A., Vessel, E. A., Trupp, M. D., Ticini, L. F., Fisher, S. E., & Polderman, T. J. C. (2024). Genetic effects on variability in visual aesthetic evaluations are partially shared across visual domains. Communications Biology, 7: 55. doi:10.1038/s42003-023-05710-4.

    Abstract

    The aesthetic values that individuals place on visual images are formed and shaped over a lifetime. However, whether the formation of visual aesthetic value is solely influenced by environmental exposure is still a matter of debate. Here, we considered differences in aesthetic value emerging across three visual domains: abstract images, scenes, and faces. We examined variability in two major dimensions of ordinary aesthetic experiences: taste-typicality and evaluation-bias. We build on two samples from the Australian Twin Registry where 1547 and 1231 monozygotic and dizygotic twins originally rated visual images belonging to the three domains. Genetic influences explained 26% to 41% of the variance in taste-typicality and evaluation-bias. Multivariate analyses showed that genetic effects were partially shared across visual domains. Results indicate that the heritability of major dimensions of aesthetic evaluations is comparable to that of other complex social traits, albeit lower than for other complex cognitive traits. The exception was taste-typicality for abstract images, for which we found only shared and unique environmental influences. Our study reveals that diverse sources of genetic and environmental variation influence the formation of aesthetic value across distinct visual domains and provides improved metrics to assess inter-individual differences in aesthetic value.

    Additional information

    supplementary information
  • Boen, R., Kaufmann, T., Van der Meer, D., Frei, O., Agartz, I., Ames, D., Andersson, M., Armstrong, N. J., Artiges, E., Atkins, J. R., Bauer, J., Benedetti, F., Boomsma, D. I., Brodaty, H., Brosch, K., Buckner, R. L., Cairns, M. J., Calhoun, V., Caspers, S., Cichon, S., Corvin, A. P., Crespo Facorro, B., Dannlowski, U., David, F. S., De Geus, E. J., De Zubicaray, G. I., Desrivières, S., Doherty, J. L., Donohoe, G., Ehrlich, S., Eising, E., Espeseth, T., Fisher, S. E., Forstner, A. J., Fortaner Uyà, L., Frouin, V., Fukunaga, M., Ge, T., Glahn, D. C., Goltermann, J., Grabe, H. J., Green, M. J., Groenewold, N. A., Grotegerd, D., Hahn, T., Hashimoto, R., Hehir-Kwa, J. Y., Henskens, F. A., Holmes, A. J., Haberg, A. K., Haavik, J., Jacquemont, S., Jansen, A., Jockwitz, C., Jonsson, E. G., Kikuchi, M., Kircher, T., Kumar, K., Le Hellard, S., Leu, C., Linden, D. E., Liu, J., Loughnan, R., Mather, K. A., McMahon, K. L., McRae, A. F., Medland, S. E., Meinert, S., Moreau, C. A., Morris, D. W., Mowry, B. J., Muhleisen, T. W., Nenadić, I., Nöthen, M. M., Nyberg, L., Owen, M. J., Paolini, M., Paus, T., Pausova, Z., Persson, K., Quidé, Y., Reis Marques, T., Sachdev, P. S., Sando, S. B., Schall, U., Scott, R. J., Selbæk, G., Shumskaya, E., Silva, A. I., Sisodiya, S. M., Stein, F., Stein, D. J., Straube, B., Streit, F., Strike, L. T., Teumer, A., Teutenberg, L., Thalamuthu, A., Tooney, P. A., Tordesillas-Gutierrez, D., Trollor, J. N., Van 't Ent, D., Van den Bree, M. B. M., Van Haren, N. E. M., Vazquez-Bourgon, J., Volzke, H., Wen, W., Wittfeld, K., Ching, C. R., Westlye, L. T., Thompson, P. M., Bearden, C. E., Selmer, K. K., Alnæs, D., Andreassen, O. A., & Sonderby, I. E. (2024). Beyond the global brain differences: Intra-individual variability differences in 1q21.1 distal and 15q11.2 BP1-BP2 deletion carriers. Biological Psychiatry, 95(2), 147-160. doi:10.1016/j.biopsych.2023.08.018.

    Abstract

    Background

    The 1q21.1 distal and 15q11.2 BP1-BP2 CNVs exhibit regional and global brain differences compared to non-carriers. However, interpreting regional differences is challenging if a global difference drives the regional brain differences. Intra-individual variability measures can be used to test for regional differences beyond global differences in brain structure.

    Methods

    Magnetic resonance imaging data were used to obtain regional brain values for 1q21.1 distal deletion (n=30) and duplication (n=27), and 15q11.2 BP1-BP2 deletion (n=170) and duplication (n=243) carriers and matched non-carriers (n=2,350). Regional intra-deviation (RID) scores, i.e., the standardized difference between an individual’s regional difference and global difference, were used to test for regional differences that diverge from the global difference.

    Results

    For the 1q21.1 distal deletion carriers, cortical surface area for regions in the medial visual cortex, posterior cingulate and temporal pole differed less, and regions in the prefrontal and superior temporal cortex differed more than the global difference in cortical surface area. For the 15q11.2 BP1-BP2 deletion carriers, cortical thickness in regions in the medial visual cortex, auditory cortex and temporal pole differed less, and the prefrontal and somatosensory cortex differed more than the global difference in cortical thickness.

    Conclusion

    We find evidence for regional effects beyond differences in global brain measures in 1q21.1 distal and 15q11.2 BP1-BP2 CNVs. The results provide new insight into brain profiling of the 1q21.1 distal and 15q11.2 BP1-BP2 CNVs, with the potential to increase our understanding of mechanisms involved in altered neurodevelopment.

    Additional information

    supplementary material
  • Bögels, S., & Levinson, S. C. (2023). Ultrasound measurements of interactive turn-taking in question-answer sequences: Articulatory preparation is delayed but not tied to the response. PLoS One, 18: e0276470. doi:10.1371/journal.pone.0276470.

    Abstract

    We know that speech planning in conversational turn-taking can happen in overlap with the previous turn and research suggests that it starts as early as possible, that is, as soon as the gist of the previous turn becomes clear. The present study aimed to investigate whether planning proceeds all the way up to the last stage of articulatory preparation (i.e., putting the articulators in place for the first phoneme of the response) and what the timing of this process is. Participants answered pre-recorded quiz questions (being under the illusion that they were asked live), while their tongue movements were measured using ultrasound. Planning could start early for some quiz questions (i.e., midway during the question), but late for others (i.e., only at the end of the question). The results showed no evidence for a difference between tongue movements in these two types of questions for at least two seconds after planning could start in early-planning questions, suggesting that speech planning in overlap with the current turn proceeds more slowly than in the clear. On the other hand, when time-locking to speech onset, tongue movements differed between the two conditions from up to two seconds before this point. This suggests that articulatory preparation can occur in advance and is not fully tied to the overt response itself.

    Additional information

    supporting information
  • Wu, M., Bosker, H. R., & Riecke, L. (2023). Sentential contextual facilitation of auditory word processing builds up during sentence tracking. Journal of Cognitive Neuroscience, 35(8), 1262-1278. doi:10.1162/jocn_a_02007.

    Abstract

    While listening to meaningful speech, auditory input is processed more rapidly near the end (vs. beginning) of sentences. Although several studies have shown such word-to-word changes in auditory input processing, it is still unclear from which processing level these word-to-word dynamics originate. We investigated whether predictions derived from sentential context can result in auditory word-processing dynamics during sentence tracking. We presented healthy human participants with auditory stimuli consisting of word sequences, arranged into either predictable (coherent sentences) or less predictable (unstructured, random word sequences) 42-Hz amplitude-modulated speech, and a continuous 25-Hz amplitude-modulated distractor tone. We recorded RTs and frequency-tagged neuroelectric responses (auditory steady-state responses) to individual words at multiple temporal positions within the sentences, and quantified sentential context effects at each position while controlling for individual word characteristics (i.e., phonetics, frequency, and familiarity). We found that sentential context increasingly facilitates auditory word processing as evidenced by accelerated RTs and increased auditory steady-state responses to later-occurring words within sentences. These purely top–down contextually driven auditory word-processing dynamics occurred only when listeners focused their attention on the speech and did not transfer to the auditory processing of the concurrent distractor tone. These findings indicate that auditory word-processing dynamics during sentence tracking can originate from sentential predictions. The predictions depend on the listeners' attention to the speech, and affect only the processing of the parsed speech, not that of concurrently presented auditory streams.
  • Bruggeman, L., & Cutler, A. (2023). Listening like a native: Unprofitable procedures need to be discarded. Bilingualism: Language and Cognition, 26(5), 1093-1102. doi:10.1017/S1366728923000305.

    Abstract

    Two languages, historically related, both have lexical stress, with word stress distinctions signalled in each by the same suprasegmental cues. In each language, words can overlap segmentally but differ in placement of primary versus secondary stress (OCtopus, ocTOber). However, secondary stress occurs more often in the words of one language, Dutch, than in the other, English, and largely because of this, Dutch listeners find it helpful to use suprasegmental stress cues when recognising spoken words. English listeners, in contrast, do not; indeed, Dutch listeners can outdo English listeners in correctly identifying the source words of English word fragments (oc-). Here we show that Dutch-native listeners who reside in an English-speaking environment and have become dominant in English, though still maintaining their use of these stress cues in their L1, ignore the same cues in their L2 English, performing as poorly in the fragment identification task as the L1 English listeners do.
  • Bulut, T. (2023). Domain‐general and domain‐specific functional networks of Broca's area underlying language processing. Brain and Behavior, 13(7): e3046. doi:10.1002/brb3.3046.

    Abstract

    Introduction
    Despite abundant research on the role of Broca's area in language processing, there is still no consensus on language specificity of this region and its connectivity network.

    Methods
    The present study employed the meta-analytic connectivity modeling procedure to identify and compare domain-specific (language-specific) and domain-general (shared between language and other domains) functional connectivity patterns of three subdivisions within the broadly defined Broca's area: pars opercularis (IFGop), pars triangularis (IFGtri), and pars orbitalis (IFGorb) of the left inferior frontal gyrus.

    Results
    The findings revealed a left-lateralized frontotemporal network for all regions of interest underlying domain-specific linguistic functions. The domain-general network, however, spanned frontoparietal regions that overlap with the multiple-demand network and subcortical regions spanning the thalamus and the basal ganglia.

    Conclusions
    The findings suggest that language specificity of Broca's area emerges within a left-lateralized frontotemporal network, and that domain-general resources are garnered from frontoparietal and subcortical networks when required by task demands.

    Additional information

    Supporting Information

    Data availability
  • Bulut, T., & Hagoort, P. (2024). Contributions of the left and right thalami to language: A meta-analytic approach. Brain Structure & Function. Advance online publication. doi:10.1007/s00429-024-02795-3.

    Abstract

    Background: Despite a pervasive cortico-centric view in cognitive neuroscience, subcortical structures including the thalamus have been shown to be increasingly involved in higher cognitive functions. Previous structural and functional imaging studies demonstrated cortico-thalamo-cortical loops which may support various cognitive functions including language. However, large-scale functional connectivity of the thalamus during language tasks has not been examined before. Methods: The present study employed meta-analytic connectivity modeling to identify language-related coactivation patterns of the left and right thalami. The left and right thalami were used as regions of interest to search the BrainMap functional database for neuroimaging experiments with healthy participants reporting language-related activations in each region of interest. Activation likelihood estimation analyses were then carried out on the foci extracted from the identified studies to estimate functional convergence for each thalamus. A functional decoding analysis based on the same database was conducted to characterize thalamic contributions to different language functions. Results: The results revealed bilateral frontotemporal and bilateral subcortical (basal ganglia) coactivation patterns for both the left and right thalami, and also right cerebellar coactivations for the left thalamus, during language processing. In light of previous empirical studies and theoretical frameworks, the present connectivity and functional decoding findings suggest that cortico-subcortical-cerebellar-cortical loops modulate and fine-tune information transfer within the bilateral frontotemporal cortices during language processing, especially during production and semantic operations, but also other language (e.g., syntax, phonology) and cognitive operations (e.g., attention, cognitive control). Conclusion: The current findings show that the language-relevant network extends beyond the classical left perisylvian cortices and spans bilateral cortical, bilateral subcortical (bilateral thalamus, bilateral basal ganglia) and right cerebellar regions.

    Additional information

    supplementary information
  • Byun, K.-S. (2023). Establishing intersubjectivity in cross-signing. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Cabrelli, J., Chaouch-Orozco, A., González Alonso, J., Pereira Soares, S. M., Puig-Mayenco, E., & Rothman, J. (Eds.). (2023). The Cambridge handbook of third language acquisition. Cambridge: Cambridge University Press. doi:10.1017/9781108957823.
  • Cabrelli, J., Chaouch-Orozco, A., González Alonso, J., Pereira Soares, S. M., Puig-Mayenco, E., & Rothman, J. (2023). Introduction - Multilingualism: Language, brain, and cognition. In J. Cabrelli, A. Chaouch-Orozco, J. González Alonso, S. M. Pereira Soares, E. Puig-Mayenco, & J. Rothman (Eds.), The Cambridge handbook of third language acquisition (pp. 1-20). Cambridge: Cambridge University Press. doi:10.1017/9781108957823.001.

    Abstract

    This chapter provides an introduction to the handbook. It succinctly overviews the key questions in the field of L3/Ln acquisition and summarizes the scope of all the chapters included. The chapter ends by raising some outstanding questions that the field needs to address.
  • Caplan, S., Peng, M. Z., Zhang, Y., & Yu, C. (2023). Using an Egocentric Human Simulation Paradigm to quantify referential and semantic ambiguity in early word learning. In M. Goldwater, F. K. Anggoro, B. K. Hayes, & D. C. Ong (Eds.), Proceedings of the 45th Annual Meeting of the Cognitive Science Society (CogSci 2023) (pp. 1043-1049).

    Abstract

    In order to understand early word learning we need to better understand and quantify properties of the input that young children receive. We extended the human simulation paradigm (HSP) using egocentric videos taken from infant head-mounted cameras. The videos were further annotated with gaze information indicating in-the-moment visual attention from the infant. Our new HSP prompted participants for two types of responses, thus differentiating referential from semantic ambiguity in the learning input. Consistent with findings on visual attention in word learning, we find a strongly bimodal distribution over HSP accuracy. Even in this open-ended task, most videos only lead to a small handful of common responses. What's more, referential ambiguity was the key bottleneck to performance: participants can nearly always recover the exact word that was said if they identify the correct referent. Finally, analysis shows that adult learners relied on particular, multimodal behavioral cues to infer those target referents.
  • Carota, F., Nili, H., Kriegeskorte, N., & Pulvermüller, F. (2023). Experientially-grounded and distributional semantic vectors uncover dissociable representations of semantic categories. Language, Cognition and Neuroscience. Advance online publication. doi:10.1080/23273798.2023.2232481.

    Abstract

    Neuronal populations code similar concepts by similar activity patterns across the human brain's semantic networks. However, it is unclear to what extent such meaning-to-symbol mapping reflects distributional statistics, or experiential information grounded in sensorimotor and emotional knowledge. We asked whether integrating distributional and experiential data better distinguished conceptual categories than each method taken separately. We examined the similarity structure of fMRI patterns elicited by visually presented action- and object-related words using representational similarity analysis (RSA). We found that the distributional and experiential/integrative models respectively mapped the high-dimensional semantic space in left inferior frontal, anterior temporal, and in left precentral, posterior inferior/middle temporal cortex. Furthermore, results from model comparisons uncovered category-specific similarity patterns, as both distributional and experiential models matched the similarity patterns for action concepts in left fronto-temporal cortex, whilst the experiential/integrative (but not distributional) models matched the similarity patterns for object concepts in left fusiform and angular gyrus.
  • Carota, F., Schoffelen, J.-M., Oostenveld, R., & Indefrey, P. (2023). Parallel or sequential? Decoding conceptual and phonological/phonetic information from MEG signals during language production. Cognitive Neuropsychology, 40(5-6), 298-317. doi:10.1080/02643294.2023.2283239.

    Abstract

    Speaking requires the temporally coordinated planning of core linguistic information, from conceptual meaning to articulation. Recent neurophysiological results suggested that these operations involve a cascade of neural events with subsequent onset times, whilst competing evidence suggests early parallel neural activation. To test these hypotheses, we examined the sources of neuromagnetic activity recorded from 34 participants overtly naming 134 images from 4 object categories (animals, tools, foods and clothes). Within each category, word length and phonological neighbourhood density were co-varied to target phonological/phonetic processes. Multivariate pattern analyses (MVPA) searchlights in source space decoded object categories in occipitotemporal and middle temporal cortex, and phonological/phonetic variables in left inferior frontal (BA 44) and motor cortex early on. The findings suggest early activation of multiple variables due to intercorrelated properties and interactivity of processing, thus raising important questions about the representational properties of target words during the preparatory time enabling overt speaking.
  • Casillas, M., Foushee, R., Méndez Girón, J., Polian, G., & Brown, P. (2024). Little evidence for a noun bias in Tseltal spontaneous speech. First Language. Advance online publication. doi:10.1177/01427237231216571.

    Abstract

    This study examines whether children acquiring Tseltal (Mayan) demonstrate a noun bias – an overrepresentation of nouns in their early vocabularies. Nouns, specifically concrete and animate nouns, are argued to universally predominate in children’s early vocabularies because their referents are naturally available as bounded concepts to which linguistic labels can be mapped. This early advantage for noun learning has been documented using multiple methods and across a diverse collection of language populations. However, past evidence bearing on a noun bias in Tseltal learners has been mixed. Tseltal grammatical features and child–caregiver interactional patterns dampen the salience of nouns and heighten the salience of verbs, leading to the prediction of a diminished noun bias and perhaps even an early predominance of verbs. We here analyze the use of noun and verb stems in children’s spontaneous speech from egocentric daylong recordings of 29 Tseltal learners between 0;9 and 4;4. We find weak to no evidence for a noun bias using two separate analytical approaches on the same data; one analysis yields a preliminary suggestion of a flipped outcome (i.e. a verb bias). We discuss the implications of these findings for broader theories of learning bias in early lexical development.
  • Çetinçelik, M., Rowland, C. F., & Snijders, T. M. (2023). Ten-month-old infants’ neural tracking of naturalistic speech is not facilitated by the speaker’s eye gaze. Developmental Cognitive Neuroscience, 64: 101297. doi:10.1016/j.dcn.2023.101297.

    Abstract

    Eye gaze is a powerful ostensive cue in infant-caregiver interactions, with demonstrable effects on language acquisition. While the link between gaze following and later vocabulary is well-established, the effects of eye gaze on other aspects of language, such as speech processing, are less clear. In this EEG study, we examined the effects of the speaker’s eye gaze on ten-month-old infants’ neural tracking of naturalistic audiovisual speech, a marker for successful speech processing. Infants watched videos of a speaker telling stories, addressing the infant with direct or averted eye gaze. We assessed infants’ speech-brain coherence at stress (1–1.75 Hz) and syllable (2.5–3.5 Hz) rates, tested for differences in attention by comparing looking times and EEG theta power in the two conditions, and investigated whether neural tracking predicts later vocabulary. Our results showed that infants’ brains tracked the speech rhythm both at the stress and syllable rates, and that infants’ neural tracking at the syllable rate predicted later vocabulary. However, speech-brain coherence did not significantly differ between direct and averted gaze conditions and infants did not show greater attention to direct gaze. Overall, our results suggest significant neural tracking at ten months, related to vocabulary development, but not modulated by speaker’s gaze.

    Additional information

    supplementary material
  • Çetinçelik, M., Rowland, C. F., & Snijders, T. M. (2024). Does the speaker’s eye gaze facilitate infants’ word segmentation from continuous speech? An ERP study. Developmental Science, 27(2): e13436. doi:10.1111/desc.13436.

    Abstract

    The environment in which infants learn language is multimodal and rich with social cues. Yet, the effects of such cues, such as eye contact, on early speech perception have not been closely examined. This study assessed the role of ostensive speech, signalled through the speaker's eye gaze direction, on infants’ word segmentation abilities. A familiarisation-then-test paradigm was used while electroencephalography (EEG) was recorded. Ten-month-old Dutch-learning infants were familiarised with audio-visual stories in which a speaker recited four sentences with one repeated target word. The speaker addressed them either with direct or with averted gaze while speaking. In the test phase following each story, infants heard familiar and novel words presented via audio-only. Infants’ familiarity with the words was assessed using event-related potentials (ERPs). As predicted, infants showed a negative-going ERP familiarity effect to the isolated familiarised words relative to the novel words over the left-frontal region of interest during the test phase. While the word familiarity effect did not differ as a function of the speaker's gaze over the left-frontal region of interest, there was also a (not predicted) positive-going early ERP familiarity effect over right fronto-central and central electrodes in the direct gaze condition only. This study provides electrophysiological evidence that infants can segment words from audio-visual speech, regardless of the ostensiveness of the speaker's communication. However, the speaker's gaze direction seems to influence the processing of familiar words.
  • Chang, F., Tatsumi, T., Hiranuma, Y., & Bannard, C. (2023). Visual heuristics for verb production: Testing a deep‐learning model with experiments in Japanese. Cognitive Science, 47(8): e13324. doi:10.1111/cogs.13324.

    Abstract

    Tense/aspect morphology on verbs is often thought to depend on event features like telicity, but it is not known how speakers identify these features in visual scenes. To examine this question, we asked Japanese speakers to describe computer-generated animations of simple actions with variation in visual features related to telicity. Experiments with adults and children found that they could use goal information in the animations to select appropriate past and progressive verb forms. They also produced a large number of different verb forms. To explain these findings, a deep-learning model of verb production from visual input was created that could produce a human-like distribution of verb forms. It was able to use visual cues to select appropriate tense/aspect morphology. The model predicted that video duration would be related to verb complexity, and past tense production would increase when it received the endpoint as input. These predictions were confirmed in a third study with Japanese adults. This work suggests that verb production could be tightly linked to visual heuristics that support the understanding of events.
  • Chen, A., Çetinçelik, M., Roncaglia-Denissen, M. P., & Sadakata, M. (2023). Native language, L2 experience, and pitch processing in music. Linguistic Approaches to Bilingualism, 13(2), 218-237. doi:10.1075/lab.20030.che.

    Abstract

    The current study investigated how the role of pitch in one’s native language and L2 experience influenced musical melodic processing by testing Turkish and Mandarin Chinese advanced and beginning learners of English as an L2. Pitch has a lower functional load and shows a simpler pattern in Turkish than in Chinese, as the former only contrasts between the presence and absence of pitch elevation, while the latter makes use of four different pitch contours lexically. Using the Musical Ear Test as the tool, we found that the Chinese listeners outperformed the Turkish listeners, and the advanced L2 learners outperformed the beginning learners. The Turkish listeners were further tested on their discrimination of bisyllabic Chinese lexical tones, and again an L2 advantage was observed. No significant difference was found for working memory between the beginning and advanced L2 learners. These results suggest that richness of tonal inventory of the native language is essential for triggering a music processing advantage, and on top of the tone language advantage, the L2 experience yields a further enhancement. Yet, unlike the tone language advantage that seems to relate to pitch expertise, learning an L2 seems to improve sound discrimination in general, and such improvement exhibits in non-native lexical tone discrimination.
  • Chevrefils, L., Morgenstern, A., Beaupoil-Hourdel, P., Bedoin, D., Caët, S., Danet, C., Danino, C., De Pontonx, S., & Parisse, C. (2023). Coordinating eating and languaging: The choreography of speech, sign, gesture and action in family dinners. In W. Pouw, J. Trujillo, H. R. Bosker, L. Drijvers, M. Hoetjes, J. Holler, S. Kadava, L. Van Maastricht, E. Mamus, & A. Ozyurek (Eds.), Gesture and Speech in Interaction (GeSpIn) Conference. doi:10.17617/2.3527183.

    Abstract

    In this study, we analyze one French-signing and one French-speaking family’s interaction during dinner. The families, composed of two parents and two children aged 3 to 11, were filmed with three cameras to capture all family members’ behaviors. The three videos per dinner were synchronized and coded on ELAN. We annotated all participants’ acting and languaging.
    Our quantitative analyses show how family members collaboratively manage multiple streams of activity through the embodied performances of dining and interacting. We uncover different profiles according to participants’ modality of expression and status (focusing on the mother and the younger child). The hearing participants’ co-activity management illustrates their monitoring of dining and conversing and how they progressively master the affordances of the visual and vocal channels to maintain the simultaneity of the two activities. The deaf mother skillfully manages to alternate smoothly between dining and interacting. The deaf younger child manifests how she is in the process of developing her skills to manage multi-activity. Our qualitative analyses focus on the ecology of visual-gestural and audio-vocal languaging in the context of co-activity according to language and participant. We open new perspectives on the management of gaze and body parts in multimodal languaging.
  • Clough, S., Morrow, E., Mutlu, B., Turkstra, L., & Duff, M. C. C. (2023). Emotion recognition of faces and emoji in individuals with moderate-severe traumatic brain injury. Brain Injury, 37(7), 596-610. doi:10.1080/02699052.2023.2181401.

    Abstract

    Background. Facial emotion recognition deficits are common after moderate-severe traumatic brain injury (TBI) and linked to poor social outcomes. We examine whether emotion recognition deficits extend to facial expressions depicted by emoji.
    Methods. Fifty-one individuals with moderate-severe TBI (25 female) and fifty-one neurotypical peers (26 female) viewed photos of human faces and emoji. Participants selected the best-fitting label from a set of basic emotions (anger, disgust, fear, sadness, neutral, surprise, happy) or social emotions (embarrassed, remorseful, anxious, neutral, flirting, confident, proud).
    Results. We analyzed the likelihood of correctly labeling an emotion by group (neurotypical, TBI), stimulus condition (basic faces, basic emoji, social emoji), sex (female, male), and their interactions. Participants with TBI did not significantly differ from neurotypical peers in overall emotion labeling accuracy. Both groups had poorer labeling accuracy for emoji compared to faces. Participants with TBI (but not neurotypical peers) had poorer accuracy for labeling social emotions depicted by emoji compared to basic emotions depicted by emoji. There were no effects of participant sex.
    Discussion. Because emotion representation is more ambiguous in emoji than human faces, studying emoji use and perception in TBI is an important consideration for understanding functional communication and social participation after brain injury.
  • Clough, S., Padilla, V.-G., Brown-Schmidt, S., & Duff, M. C. (2023). Intact speech-gesture integration in narrative recall by adults with moderate-severe traumatic brain injury. Neuropsychologia, 189: 108665. doi:10.1016/j.neuropsychologia.2023.108665.

    Abstract

    Purpose

    Real-world communication is situated in rich multimodal contexts, containing speech and gesture. Speakers often convey unique information in gesture that is not present in the speech signal (e.g., saying “He searched for a new recipe” while making a typing gesture). We examine the narrative retellings of participants with and without moderate-severe traumatic brain injury across three timepoints over two online Zoom sessions to investigate whether people with TBI can integrate information from co-occurring speech and gesture and if information from gesture persists across delays.

    Methods

    Sixty participants with TBI and 60 non-injured peers watched videos of a narrator telling four short stories. On key details, the narrator produced complementary gestures that conveyed unique information. Participants retold the stories at three timepoints: immediately after, 20 minutes later, and one week later. We examined the words participants used when retelling these key details, coding them as a Speech Match (e.g., “He searched for a new recipe”), a Gesture Match (e.g., “He searched for a new recipe online”), or Other (e.g., “He looked for a new recipe”). We also examined whether participants produced representative gestures themselves when retelling these details.

    Results

    Despite recalling fewer story details, participants with TBI were as likely as non-injured peers to report information from gesture in their narrative retellings. All participants were more likely to report information from gesture and produce representative gestures themselves one week later compared to immediately after hearing the story.

    Conclusion

    We demonstrated that speech-gesture integration is intact after TBI in narrative retellings. This finding has exciting implications for the utility of gesture to support comprehension and memory after TBI and expands our understanding of naturalistic multimodal language processing in this population.
  • Clough, S., Tanguay, A. F. N., Mutlu, B., Turkstra, L., & Duff, M. C. (2023). How do individuals with and without traumatic brain injury interpret emoji? Similarities and differences in perceived valence, arousal, and emotion representation. Journal of Nonverbal Behavior, 47, 489-511. doi:10.1007/s10919-023-00433-w.

    Abstract

    Impaired facial affect recognition is common after traumatic brain injury (TBI) and linked to poor social outcomes. We explored whether perception of emotions depicted by emoji is also impaired after TBI. Fifty participants with TBI and 50 non-injured peers generated free-text labels to describe emotions depicted by emoji and rated their levels of valence and arousal on nine-point rating scales. We compared how the two groups’ valence and arousal ratings were clustered and examined agreement in the words participants used to describe emoji. Hierarchical clustering of affect ratings produced four emoji clusters in the non-injured group and three emoji clusters in the TBI group. Whereas the non-injured group had a strongly positive and a moderately positive cluster, the TBI group had a single positive valence cluster, undifferentiated by arousal. Despite differences in cluster numbers, hierarchical structures of the two groups’ emoji ratings were significantly correlated. Most emoji had high agreement in the words participants with and without TBI used to describe them. Participants with TBI perceived emoji similarly to non-injured peers, used similar words to describe emoji, and rated emoji similarly on the valence dimension. Individuals with TBI showed small differences in perceived arousal for a minority of emoji. Overall, results suggest that basic recognition processes do not explain challenges in computer-mediated communication reported by adults with TBI. Examining perception of emoji in context by people with TBI is an essential next step for advancing our understanding of functional communication in computer-mediated contexts after brain injury.

    Additional information

    supplementary information
  • Collins, J. (2024). Linguistic areas and prehistoric migrations. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Coopmans, C. W. (2023). Triangles in the brain: The role of hierarchical structure in language use. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Coopmans, C. W., Struiksma, M. E., Coopmans, P. H. A., & Chen, A. (2023). Processing of grammatical agreement in the face of variation in lexical stress: A mismatch negativity study. Language and Speech, 66(1), 202-213. doi:10.1177/00238309221098116.

    Abstract

    Previous electroencephalography studies have yielded evidence for automatic processing of syntax and lexical stress. However, these studies looked at both effects in isolation, limiting their generalizability to everyday language comprehension. In the current study, we investigated automatic processing of grammatical agreement in the face of variation in lexical stress. Using an oddball paradigm, we measured the Mismatch Negativity (MMN) in Dutch-speaking participants while they listened to Dutch subject–verb sequences (linguistic context) or acoustically similar sequences in which the subject was replaced by filtered noise (nonlinguistic context). The verb forms differed in the inflectional suffix, rendering the subject–verb sequences grammatically correct or incorrect, and leading to a difference in the stress pattern of the verb forms. We found that the MMNs were modulated in both the linguistic and nonlinguistic conditions, suggesting that the processing load induced by variation in lexical stress can hinder early automatic processing of grammatical agreement. However, as the morphological differences between the verb forms correlated with differences in number of syllables, an interpretation in terms of the prosodic structure of the sequences cannot be ruled out. Future research is needed to determine which of these factors (i.e., lexical stress, syllabic structure) most strongly modulate early syntactic processing.

    Additional information

    supplementary material
  • Coopmans, C. W., Mai, A., Slaats, S., Weissbart, H., & Martin, A. E. (2023). What oscillations can do for syntax depends on your theory of structure building. Nature Reviews Neuroscience, 24, 723. doi:10.1038/s41583-023-00734-5.
  • Coopmans, C. W., Kaushik, K., & Martin, A. E. (2023). Hierarchical structure in language and action: A formal comparison. Psychological Review, 130(4), 935-952. doi:10.1037/rev0000429.

    Abstract

    Since the cognitive revolution, language and action have been compared as cognitive systems, with cross-domain convergent views recently gaining renewed interest in biology, neuroscience, and cognitive science. Language and action are both combinatorial systems whose mode of combination has been argued to be hierarchical, combining elements into constituents of increasingly larger size. This structural similarity has led to the suggestion that they rely on shared cognitive and neural resources. In this article, we compare the conceptual and formal properties of hierarchy in language and action using set theory. We show that the strong compositionality of language requires a particular formalism, a magma, to describe the algebraic structure corresponding to the set of hierarchical structures underlying sentences. When this formalism is applied to actions, it appears to be both too strong and too weak. To overcome these limitations, which are related to the weak compositionality and sequential nature of action structures, we formalize the algebraic structure corresponding to the set of actions as a trace monoid. We aim to capture the different system properties of language and action in terms of the distinction between hierarchical sets and hierarchical sequences and discuss the implications for the way both systems could be represented in the brain.
  • Cornelis, S. S., IntHout, J., Runhart, E. H., Grunewald, O., Lin, S., Corradi, Z., Khan, M., Hitti-Malin, R. J., Whelan, L., Farrar, G. J., Sharon, D., Van den Born, L. I., Arno, G., Simcoe, M., Michaelides, M., Webster, A. R., Roosing, S., Mahroo, O. A., Dhaenens, C.-M., Cremers, F. P. M., & ABCA4 Study Group (2024). Representation of women among individuals with mild variants in ABCA4-associated retinopathy: A meta-analysis. JAMA Ophthalmology. Advance online publication. doi:10.1001/jamaophthalmol.2024.0660.

    Abstract

    Importance
    Previous studies indicated that female sex might be a modifier in Stargardt disease, which is an ABCA4-associated retinopathy.

    Objective
    To investigate whether women are overrepresented among individuals with ABCA4-associated retinopathy who are carrying at least 1 mild allele or carrying nonmild alleles.

    Data Sources
    Literature data, data from 2 European centers, and a new study. Data from a Radboudumc database and from the Rotterdam Eye Hospital were used for exploratory hypothesis testing.

    Study Selection
    Studies investigating the sex ratio in individuals with ABCA4-associated retinopathy and data from centers that collected ABCA4 variant and sex data. The literature search was performed on February 1, 2023; data from the centers were from before 2023.

    Data Extraction and Synthesis
    Random-effects meta-analyses were conducted to test whether the proportions of women among individuals with ABCA4-associated retinopathy with mild and nonmild variants differed from 0.5, including subgroup analyses for mild alleles. Sensitivity analyses were performed excluding data with possibly incomplete variant identification. χ2 tests were conducted to compare the proportions of women in adult-onset autosomal non–ABCA4-associated retinopathy and adult-onset ABCA4-associated retinopathy and to investigate whether women with suspected ABCA4-associated retinopathy are more likely to obtain a genetic diagnosis. Data analyses were performed from March to October 2023.

    Main Outcomes and Measures
    Proportion of women per ABCA4-associated retinopathy group. The exploratory testing included sex ratio comparisons for individuals with ABCA4-associated retinopathy vs those with other autosomal retinopathies and for individuals with ABCA4-associated retinopathy who underwent genetic testing vs those who did not.

    Results
    Women were significantly overrepresented in the mild variant group (proportion, 0.59; 95% CI, 0.56-0.62; P < .001) but not in the nonmild variant group (proportion, 0.50; 95% CI, 0.46-0.54; P = .89). Sensitivity analyses confirmed these results. Subgroup analyses on mild variants showed differences in the proportions of women. Furthermore, in the Radboudumc database, the proportion of adult women among individuals with ABCA4-associated retinopathy (652/1154 = 0.56) was 0.10 (95% CI, 0.05-0.15) higher than among individuals with other retinopathies (280/602 = 0.47).

    Conclusions and Relevance
    This meta-analysis supports the likelihood that sex is a modifier in developing ABCA4-associated retinopathy for individuals with a mild ABCA4 allele. This finding may be relevant for prognosis predictions and recurrence risks for individuals with ABCA4-associated retinopathy. Future studies should further investigate whether the overrepresentation of women is caused by differences in the disease mechanism, by differences in health care–seeking behavior, or by health care discrimination between women and men with ABCA4-associated retinopathy.
  • Corps, R. E., Liao, M., & Pickering, M. J. (2023). Evidence for two stages of prediction in non-native speakers: A visual-world eye-tracking study. Bilingualism: Language and Cognition, 26(1), 231-243. doi:10.1017/S1366728922000499.

    Abstract

    Comprehenders predict what a speaker is likely to say when listening to non-native (L2) and native (L1) utterances. But what are the characteristics of L2 prediction, and how does it relate to L1 prediction? We addressed this question in a visual-world eye-tracking experiment, which tested when L2 English comprehenders integrated perspective into their predictions. Male and female participants listened to male and female speakers producing sentences (e.g., I would like to wear the nice…) about stereotypically masculine (target: tie; distractor: drill) and feminine (target: dress; distractor: hairdryer) objects. Participants predicted associatively, fixating objects semantically associated with critical verbs (here, the tie and the dress). They also predicted stereotypically consistent objects (e.g., the tie rather than the dress, given the male speaker). Consistent predictions were made later than associative predictions, and were delayed for L2 speakers relative to L1 speakers. These findings suggest prediction involves both automatic and non-automatic stages.
  • Corps, R. E. (2023). What do we know about the mechanisms of response planning in dialog? In Psychology of Learning and Motivation (pp. 41-81). doi:10.1016/bs.plm.2023.02.002.

    Abstract

    During dialog, interlocutors take turns at speaking with little gap or overlap between their contributions. But language production in monolog is comparatively slow. Theories of dialog tend to agree that interlocutors manage these timing demands by planning a response early, before the current speaker reaches the end of their turn. In the first half of this chapter, I review experimental research supporting these theories. But this research also suggests that planning a response early, while simultaneously comprehending, is difficult. Does response planning need to be this difficult during dialog? In other words, is early-planning always necessary? In the second half of this chapter, I discuss research that suggests the answer to this question is no. In particular, corpora of natural conversation demonstrate that speakers do not directly respond to the immediately preceding utterance of their partner—instead, they continue an utterance they produced earlier. This parallel talk likely occurs because speakers are highly incremental and plan only part of their utterance before speaking, leading to pauses, hesitations, and disfluencies. As a result, speakers do not need to engage in extensive advance planning. Thus, laboratory studies do not provide a full picture of language production in dialog, and further research using naturalistic tasks is needed.
  • Corps, R. E., & Pickering, M. (2023). Response planning during question-answering: Does deciding what to say involve deciding how to say it? Psychonomic Bulletin & Review. Advance online publication. doi:10.3758/s13423-023-02382-3.

    Abstract

    To answer a question, speakers must determine their response and formulate it in words. But do they decide on a response before formulation, or do they formulate different potential answers before selecting one? We addressed this issue in a verbal question-answering experiment. Participants answered questions more quickly when they had one potential answer (e.g., Which tourist attraction in Paris is very tall?) than when they had multiple potential answers (e.g., What is the name of a Shakespeare play?). Participants also answered more quickly when the set of potential answers were on average short rather than long, regardless of whether there was only one or multiple potential answers. Thus, participants were not affected by the linguistic complexity of unselected but plausible answers. These findings suggest that participants select a single answer before formulation.
  • Corps, R. E., & Meyer, A. S. (2023). Word frequency has similar effects in picture naming and gender decision: A failure to replicate Jescheniak and Levelt (1994). Acta Psychologica, 241: 104073. doi:10.1016/j.actpsy.2023.104073.

    Abstract

    Word frequency plays a key role in theories of lexical access, which assume that the word frequency effect (WFE, faster access to high-frequency than low-frequency words) occurs as a result of differences in the representation and processing of the words. In a seminal paper, Jescheniak and Levelt (1994) proposed that the WFE arises during the retrieval of word forms, rather than the retrieval of their syntactic representations (their lemmas) or articulatory commands. An important part of Jescheniak and Levelt's argument was that they found a stable WFE in a picture naming task, which requires complete lexical access, but not in a gender decision task, which only requires access to the words' lemmas and not their word forms. We report two attempts to replicate this pattern, one with new materials, and one with Jescheniak and Levelt's original pictures. In both studies we found a strong WFE when the pictures were shown for the first time, but much weaker effects on their second and third presentation. Importantly, these patterns were seen in both the picture naming and the gender decision tasks, suggesting that either word frequency does not exclusively affect word form retrieval, or that the gender decision task does not exclusively tap lemma access.

    Additional information

    raw data and analysis scripts
  • Corps, R. E., Yang, F., & Pickering, M. (2023). Evidence against egocentric prediction during language comprehension. Royal Society Open Science, 10(12): 231252. doi:10.1098/rsos.231252.

    Abstract

    Although previous research has demonstrated that language comprehension can be egocentric, there is little evidence for egocentricity during prediction. In particular, comprehenders do not appear to predict egocentrically when the context makes it clear what the speaker is likely to refer to. But do comprehenders predict egocentrically when the context does not make it clear? We tested this hypothesis using a visual-world eye-tracking paradigm, in which participants heard sentences containing the gender-neutral pronoun They (e.g. They would like to wear…) while viewing four objects (e.g. tie, dress, drill, hairdryer). Two of these objects were plausible targets of the verb (tie and dress), and one was stereotypically compatible with the participant's gender (tie if the participant was male; dress if the participant was female). Participants rapidly fixated targets more than distractors, but there was no evidence that participants ever predicted egocentrically, fixating objects stereotypically compatible with their own gender. These findings suggest that participants do not fall back on their own egocentric perspective when predicting, even when they know that context does not make it clear what the speaker is likely to refer to.
  • Corradi, Z., Khan, M., Hitti-Malin, R., Mishra, K., Whelan, L., Cornelis, S. S., ABCA4-Study Group, Hoyng, C. B., Kämpjärvi, K., Klaver, C. C. W., Liskova, P., Stohr, H., Weber, B. H. F., Banfi, S., Farrar, G. J., Sharon, D., Zernant, J., Allikmets, R., Dhaenens, C.-M., & Cremers, F. P. M. (2023). Targeted sequencing and in vitro splice assays shed light on ABCA4-associated retinopathies missing heritability. Human Genetics and Genomics Advances, 4(4): 100237. doi:10.1016/j.xhgg.2023.100237.

    Abstract

    The ABCA4 gene is the most frequently mutated Mendelian retinopathy-associated gene. Biallelic variants lead to a variety of phenotypes; however, for thousands of cases the underlying variants remain unknown. Here, we aim to shed further light on the missing heritability of ABCA4-associated retinopathy by analyzing a large cohort of macular dystrophy probands. A total of 858 probands were collected from 26 centers, of whom 722 carried no or one pathogenic ABCA4 variant while 136 cases carried two ABCA4 alleles, one of which was a frequent mild variant, suggesting that deep-intronic variants (DIVs) or other cis-modifiers might have been missed. After single molecule molecular inversion probes (smMIPs)-based sequencing of the complete 128-kb ABCA4 locus, the effect of putative splice variants was assessed in vitro by midigene splice assays in HEK293T cells. The breakpoints of copy number variants (CNVs) were determined by junction PCR and Sanger sequencing. ABCA4 sequence analysis solved 207/520 (39.8%) naïve or unsolved cases and 70/202 (34.7%) monoallelic cases, while additional causal variants were identified in 54/136 (39.7%) of probands carrying two variants. Seven novel DIVs and six novel non-canonical splice site variants were detected in a total of 35 alleles and characterized, including the c.6283-321C>G variant leading to a complex splicing defect. Additionally, four novel CNVs were identified and characterized in five alleles. These results confirm that smMIPs-based sequencing of the complete ABCA4 gene provides a cost-effective method to genetically solve retinopathy cases and that several rare structural and splice altering defects remain undiscovered in STGD1 cases.
  • Coventry, K. R., Gudde, H. B., Diessel, H., Collier, J., Guijarro-Fuentes, P., Vulchanova, M., Vulchanov, V., Todisco, E., Reile, M., Breunesse, M., Plado, H., Bohnemeyer, J., Bsili, R., Caldano, M., Dekova, R., Donelson, K., Forker, D., Park, Y., Pathak, L. S., Peeters, D., Pizzuto, G., Serhan, B., Apse, L., Hesse, F., Hoang, L., Hoang, P., Igari, Y., Kapiley, K., Haupt-Khutsishvili, T., Kolding, S., Priiki, K., Mačiukaitytė, I., Mohite, V., Nahkola, T., Tsoi, S. Y., Williams, S., Yasuda, S., Cangelosi, A., Duñabeitia, J. A., Mishra, R. K., Rocca, R., Šķilters, J., Wallentin, M., Žilinskaitė-Šinkūnienė, E., & Incel, O. D. (2023). Spatial communication systems across languages reflect universal action constraints. Nature Human Behaviour, 7, 2099-2110. doi:10.1038/s41562-023-01697-4.

    Abstract

    The extent to which languages share properties reflecting the non-linguistic constraints of the speakers who speak them is key to the debate regarding the relationship between language and cognition. A critical case is spatial communication, where it has been argued that semantic universals should exist, if anywhere. Here, using an experimental paradigm able to separate variation within a language from variation between languages, we tested the use of spatial demonstratives—the most fundamental and frequent spatial terms across languages. In n = 874 speakers across 29 languages, we show that speakers of all tested languages use spatial demonstratives as a function of being able to reach or act on an object being referred to. In some languages, the position of the addressee is also relevant in selecting between demonstrative forms. Commonalities and differences across languages in spatial communication can be understood in terms of universal constraints on action shaping spatial language and cognition.
  • Cox, C., Bergmann, C., Fowler, E., Keren-Portnoy, T., Roepstorff, A., Bryant, G., & Fusaroli, R. (2023). A systematic review and Bayesian meta-analysis of the acoustic features of infant-directed speech. Nature Human Behaviour, 7, 114-133. doi:10.1038/s41562-022-01452-1.

    Abstract

    When speaking to infants, adults often produce speech that differs systematically from that directed to other adults. In order to quantify the acoustic properties of this speech style across a wide variety of languages and cultures, we extracted results from empirical studies on the acoustic features of infant-directed speech (IDS). We analyzed data from 88 unique studies (734 effect sizes) on the following five acoustic parameters that have been systematically examined in the literature: i) fundamental frequency (fo), ii) fo variability, iii) vowel space area, iv) articulation rate, and v) vowel duration. Moderator analyses were conducted in hierarchical Bayesian robust regression models in order to examine how these features change with infant age and differ across languages, experimental tasks and recording environments. The moderator analyses indicated that fo, articulation rate, and vowel duration became more similar to adult-directed speech (ADS) over time, whereas fo variability and vowel space area exhibited stability throughout development. These results point the way for future research to disentangle different accounts of the functions and learnability of IDS by conducting theory-driven comparisons among different languages and using computational models to formulate testable predictions.

    Additional information

    supplementary information
  • Creemers, A. (2023). Morphological processing in spoken-word recognition. In D. Crepaldi (Ed.), Linguistic morphology in the mind and brain (pp. 50-64). New York: Routledge.

    Abstract

    Most psycholinguistic studies on morphological processing have examined the role of morphological structure in the visual modality. This chapter discusses morphological processing in the auditory modality, which is an area of research that has only recently received more attention. It first discusses why results in the visual modality cannot straightforwardly be applied to the processing of spoken words, stressing the importance of acknowledging potential modality effects. It then gives a brief overview of the existing research on the role of morphology in the auditory modality, for which an increasing number of studies report that listeners show sensitivity to morphological structure. Finally, the chapter highlights insights gained by looking at morphological processing not only in reading, but also in listening, and it discusses directions for future research.
  • Dalla Bella, S., Janaqi, S., Benoit, C.-E., Farrugia, N., Bégel, V., Verga, L., Harding, E. E., & Kotz, S. A. (2024). Unravelling individual rhythmic abilities using machine learning. Scientific Reports, 14(1): 1135. doi:10.1038/s41598-024-51257-7.

    Abstract

    Humans can easily extract the rhythm of a complex sound, like music, and move to its regular beat, like in dance. These abilities are modulated by musical training and vary significantly in untrained individuals. The causes of this variability are multidimensional and typically hard to grasp in single tasks. To date we lack a comprehensive model capturing the rhythmic fingerprints of both musicians and non-musicians. Here we harnessed machine learning to extract a parsimonious model of rhythmic abilities, based on behavioral testing (with perceptual and motor tasks) of individuals with and without formal musical training (n = 79). We demonstrate that variability in rhythmic abilities and their link with formal and informal music experience can be successfully captured by profiles including a minimal set of behavioral measures. These findings highlight that machine learning techniques can be employed successfully to distill profiles of rhythmic abilities, and ultimately shed light on individual variability and its relationship with both formal musical training and informal musical experiences.

    Additional information

    supplementary materials
  • Defina, R., Allen, S. E. M., Davidson, L., Hellwig, B., Kelly, B. F., & Kidd, E. (2023). Sketch Acquisition Manual (SAM), Part I: The sketch corpus. Language Documentation and Conservation Special Publication, 28, 5-38. Retrieved from https://hdl.handle.net/10125/74719.

    Abstract

    This paper presents the first part of a guide for documenting and describing child language, child-directed language and socialization patterns in diverse languages and cultures. The guide is intended for anyone interested in working across child language and language documentation, including, for example, field linguists and language documenters, community language workers, child language researchers or graduate students. We assume some basic familiarity with language documentation principles and methods, and, based on this, provide step-by-step suggestions for collecting, analyzing and presenting child data. This first part of the guide focuses on constructing a sketch corpus that consists of minimally five hours of annotated and archived data and which documents communicative practices of children between the ages of 2 and 4.
  • Defina, R., Allen, S. E. M., Davidson, L., Hellwig, B., Kelly, B. F., & Kidd, E. (2023). Sketch Acquisition Manual (SAM), Part II: The acquisition sketch. Language Documentation and Conservation Special Publication, 28, 39-86. Retrieved from https://hdl.handle.net/10125/74720.

    Abstract

    This paper presents the second part of a guide for documenting and describing child language, child-directed language and socialization patterns in diverse languages and cultures. The guide is intended for anyone interested in working across child language and language documentation, including, for example, field linguists and language documenters, community language workers, child language researchers or graduate students. We assume some basic familiarity with language documentation principles and methods, and, based on this, provide step-by-step suggestions for collecting, analyzing and presenting child data. This second part of the guide focuses on developing a child language acquisition sketch. It takes the sketch corpus as its basis (which was introduced in the first part of this guide), and presents a model for analyzing and describing the corpus data.
  • Dideriksen, C., Christiansen, M. H., Tylén, K., Dingemanse, M., & Fusaroli, R. (2023). Quantifying the interplay of conversational devices in building mutual understanding. Journal of Experimental Psychology: General, 152(3), 864-889. doi:10.1037/xge0001301.

    Abstract

    Humans readily engage in idle chat and heated discussions and negotiate tough joint decisions without ever having to think twice about how to keep the conversation grounded in mutual understanding. However, current attempts at identifying and assessing the conversational devices that make this possible are fragmented across disciplines and investigate single devices within single contexts. We present a comprehensive conceptual framework to investigate conversational devices, their relations, and how they adjust to contextual demands. In two corpus studies, we systematically test the role of three conversational devices: backchannels, repair, and linguistic entrainment. Contrasting affiliative and task-oriented conversations within participants, we find that conversational devices adaptively adjust to the increased need for precision in the latter: We show that low-precision devices such as backchannels are more frequent in affiliative conversations, whereas more costly but higher-precision mechanisms, such as specific repairs, are more frequent in task-oriented conversations. Further, task-oriented conversations involve higher complementarity of contributions in terms of the content and perspective: lower semantic entrainment and less frequent (but richer) lexical and syntactic entrainment. Finally, we show that the observed variations in the use of conversational devices are potentially adaptive: pairs of interlocutors that show stronger linguistic complementarity perform better across the two tasks. By combining motivated comparisons of several conversational contexts and theoretically informed computational analyses of empirical data the present work lays the foundations for a comprehensive conceptual framework for understanding the use of conversational devices in dialogue.
  • Dideriksen, C., Christiansen, M. H., Dingemanse, M., Højmark‐Bertelsen, M., Johansson, C., Tylén, K., & Fusaroli, R. (2023). Language‐specific constraints on conversation: Evidence from Danish and Norwegian. Cognitive Science, 47(11): e13387. doi:10.1111/cogs.13387.

    Abstract

    Establishing and maintaining mutual understanding in everyday conversations is crucial. To do so, people employ a variety of conversational devices, such as backchannels, repair, and linguistic entrainment. Here, we explore whether the use of conversational devices might be influenced by cross-linguistic differences in the speakers’ native language, comparing two matched languages—Danish and Norwegian—differing primarily in their sound structure, with Danish being more opaque, that is, less acoustically distinguished. Across systematically manipulated conversational contexts, we find that processes supporting mutual understanding in conversations vary with external constraints: across different contexts and, crucially, across languages. In accord with our predictions, linguistic entrainment was overall higher in Danish than in Norwegian, while backchannels and repairs presented a more nuanced pattern. These findings are compatible with the hypothesis that native speakers of Danish may compensate for its opaque sound structure by adopting a top-down strategy of building more conversational redundancy through entrainment, which also might reduce the need for repairs. These results suggest that linguistic differences might be met by systematic changes in language processing and use. This paves the way for further cross-linguistic investigations and critical assessment of the interplay between cultural and linguistic factors on the one hand and conversational dynamics on the other.
  • Dikshit, A. P., Mishra, C., Das, D., & Parashar, S. (2023). Frequency and temperature-dependence ZnO based fractional order capacitor using machine learning. Materials Chemistry and Physics, 307: 128097. doi:10.1016/j.matchemphys.2023.128097.

    Abstract

    This paper investigates the fractional order behavior of ZnO ceramics at different frequencies. ZnO ceramic was prepared by the high energy ball milling (HEBM) technique and sintered at 1300 °C to study its frequency response properties. The frequency response properties (impedance and phase angles) were examined using an impedance analyzer (100 Hz - 1 MHz). Constant phase angles (84°-88°) were obtained at low temperature ranges (25 °C - 125 °C). The structural and morphological composition of the ZnO ceramic was investigated using X-ray diffraction techniques and FESEM. The Raman spectrum was studied to understand the different modes of ZnO ceramics. Machine learning (polynomial regression) models were trained on a dataset of 1280 experimental values to accurately predict the relationship between frequency and temperature with respect to impedance and phase values of the ZnO ceramic FOC. The predicted impedance values were found to be in good agreement (R² ~ 0.98, MSE ~ 0.0711) with the experimental results. Impedance values were also predicted beyond the experimental frequency range (at 50 Hz and 2 MHz) for different temperatures (25 °C - 500 °C) and for low temperatures (10 °C, 15 °C, and 20 °C) within the frequency range (100 Hz - 1 MHz).
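    The regression workflow this abstract describes (fit impedance as a polynomial function of frequency and temperature, then extrapolate outside the measured band) can be sketched roughly as follows. This is a minimal illustration on synthetic data, not the authors' model: the polynomial degree, the feature choice, and the toy impedance surface are all assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    # Synthetic training set (1280 points, matching the abstract's dataset size):
    # log10 frequency over 100 Hz - 1 MHz, temperature over 25 - 500 deg C.
    log_f = rng.uniform(2, 6, 1280)
    temp = rng.uniform(25, 500, 1280)
    # Toy impedance surface (assumed, not the paper's measurements): impedance
    # falls with frequency, varies mildly with temperature, plus noise.
    y = 1e4 * 10 ** (-0.5 * log_f) + 5.0 * (500 - temp) / 500 + rng.normal(0, 0.1, 1280)

    def design(lf, t):
        """Degree-2 polynomial features in (log-frequency, temperature)."""
        return np.column_stack([np.ones_like(lf), lf, t, lf * t, lf ** 2, t ** 2])

    # Ordinary least squares fit of the polynomial coefficients.
    coef, *_ = np.linalg.lstsq(design(log_f, temp), y, rcond=None)

    pred = design(log_f, temp) @ coef
    r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
    print(f"R^2 on training data = {r2:.3f}")

    # Extrapolate outside the measured band (50 Hz and 2 MHz), as the abstract
    # describes -- a step that should be interpreted cautiously, since
    # polynomial fits can behave poorly outside the training range.
    extrap = design(np.log10([50.0, 2e6]), np.array([25.0, 25.0])) @ coef
    print("Predicted impedance at 50 Hz and 2 MHz (25 deg C):", extrap)
    ```

    A quadratic surface already tracks this toy decay closely; the paper's reported R² ~ 0.98 suggests a comparably low-order model suffices for smooth impedance spectra.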

  • Dikshit, A. P., Das, D., Samal, R. R., Parashar, K., Mishra, C., & Parashar, S. (2024). Optimization of (Ba1-xCax)(Ti0.9Sn0.1)O3 ceramics in X-band using Machine Learning. Journal of Alloys and Compounds, 982: 173797. doi:10.1016/j.jallcom.2024.173797.

    Abstract

    Developing efficient electromagnetic interference shielding materials has become increasingly important in present times. This paper reports a series of (Ba1-xCax)(Ti0.9Sn0.1)O3 (BCTS) (x = 0, 0.01, 0.05, & 0.1) ceramics synthesized by the conventional method and studied for electromagnetic interference (EMI) shielding applications in the X-band (8-12.4 GHz). EMI shielding properties and S parameters (S11 & S12) of BCTS ceramic pellets were measured in the frequency range (8-12.4 GHz) using a Vector Network Analyser (VNA). The BCTS ceramic pellets for x = 0.05 showed a maximum total effective shielding of 46 dB, indicating good shielding behaviour for high-frequency applications. However, the development of lead-free ceramics with different concentrations usually requires iterative experiments, resulting in longer development cycles and higher costs. To address this, we used a machine learning (ML) strategy to predict the EMI shielding for different concentrations and experimentally verify the concentration predicted to give the best EMI shielding. The ML model predicted BCTS ceramics with concentrations (x = 0.06, 0.07, 0.08, and 0.09) to have higher shielding values. On experimental verification, a shielding value of 58 dB was obtained for x = 0.08, significantly higher than what was obtained experimentally before applying the ML approach. Our results show the potential of using ML to accelerate optimal material development, significantly reducing the need for repeated experimental measures.
  • Dingemans, A. J. M., Hinne, M., Truijen, K. M. G., Goltstein, L., Van Reeuwijk, J., De Leeuw, N., Schuurs-Hoeijmakers, J., Pfundt, R., Diets, I. J., Den Hoed, J., De Boer, E., Coenen-Van der Spek, J., Jansen, S., Van Bon, B. W., Jonis, N., Ockeloen, C. W., Vulto-van Silfhout, A. T., Kleefstra, T., Koolen, D. A., Campeau, P. M., Palmer, E. E., Van Esch, H., Lyon, G. J., Alkuraya, F. S., Rauch, A., Marom, R., Baralle, D., Van der Sluijs, P. J., Santen, G. W. E., Kooy, R. F., Van Gerven, M. A. J., Vissers, L. E. L. M., & De Vries, B. B. A. (2023). PhenoScore quantifies phenotypic variation for rare genetic diseases by combining facial analysis with other clinical features using a machine-learning framework. Nature Genetics, 55, 1598-1607. doi:10.1038/s41588-023-01469-w.

    Abstract

    Several molecular and phenotypic algorithms exist that establish genotype–phenotype correlations, including facial recognition tools. However, no unified framework that investigates both facial data and other phenotypic data directly from individuals exists. We developed PhenoScore: an open-source, artificial intelligence-based phenomics framework, combining facial recognition technology with Human Phenotype Ontology data analysis to quantify phenotypic similarity. Here we show PhenoScore’s ability to recognize distinct phenotypic entities by establishing recognizable phenotypes for 37 of 40 investigated syndromes against clinical features observed in individuals with other neurodevelopmental disorders and show it is an improvement on existing approaches. PhenoScore provides predictions for individuals with variants of unknown significance and enables sophisticated genotype–phenotype studies by testing hypotheses on possible phenotypic (sub)groups. PhenoScore confirmed previously known phenotypic subgroups caused by variants in the same gene for SATB1, SETBP1 and DEAF1 and provides objective clinical evidence for two distinct ADNP-related phenotypes, already established functionally.

    Additional information

    supplementary information
  • Dingemanse, M., Liesenfeld, A., Rasenberg, M., Albert, S., Ameka, F. K., Birhane, A., Bolis, D., Cassell, J., Clift, R., Cuffari, E., De Jaegher, H., Dutilh Novaes, C., Enfield, N. J., Fusaroli, R., Gregoromichelaki, E., Hutchins, E., Konvalinka, I., Milton, D., Rączaszek-Leonardi, J., Reddy, V., Rossano, F., Schlangen, D., Seibt, J., Stokoe, E., Suchman, L. A., Vesper, C., Wheatley, T., & Wiltschko, M. (2023). Beyond single-mindedness: A figure-ground reversal for the cognitive sciences. Cognitive Science, 47(1): e13230. doi:10.1111/cogs.13230.

    Abstract

    A fundamental fact about human minds is that they are never truly alone: all minds are steeped in situated interaction. That social interaction matters is recognised by any experimentalist who seeks to exclude its influence by studying individuals in isolation. On this view, interaction complicates cognition. Here we explore the more radical stance that interaction co-constitutes cognition: that we benefit from looking beyond single minds towards cognition as a process involving interacting minds. All around the cognitive sciences, there are approaches that put interaction centre stage. Their diverse and pluralistic origins may obscure the fact that collectively, they harbour insights and methods that can respecify foundational assumptions and fuel novel interdisciplinary work. What might the cognitive sciences gain from stronger interactional foundations? This represents, we believe, one of the key questions for the future. Writing as a multidisciplinary collective assembled from across the classic cognitive science hexagon and beyond, we highlight the opportunity for a figure-ground reversal that puts interaction at the heart of cognition. The interactive stance is a way of seeing that deserves to be a key part of the conceptual toolkit of cognitive scientists.
  • Dingemanse, M. (2023). Ideophones. In E. Van Lier (Ed.), The Oxford handbook of word classes (pp. 466-476). Oxford: Oxford University Press.

    Abstract

    Many of the world’s languages feature an open lexical class of ideophones, words whose marked forms and sensory meanings invite iconic associations. Ideophones (also known as mimetics or expressives) are well-known from languages in Asia, Africa and the Americas, where they often form a class on the same order of magnitude as other major word classes and take up a considerable functional load as modifying expressions or predicates. Across languages, commonalities in the morphosyntactic behaviour of ideophones can be related to their nature and origin as vocal depictions. At the same time there is ample room for linguistic diversity, raising the need for fine-grained grammatical description of ideophone systems. As vocal depictions, ideophones often form a distinct lexical stratum seemingly conjured out of thin air; but as conventionalized words, they inevitably grow roots in local linguistic systems, showing relations to adverbs, adjectives, verbs and other linguistic resources devoted to modification and predication.
  • Dingemanse, M. (2023). Interjections. In E. Van Lier (Ed.), The Oxford handbook of word classes (pp. 477-491). Oxford: Oxford University Press.

    Abstract

    No class of words has better claims to universality than interjections. At the same time, no category has more variable content than this one, traditionally the catch-all basket for linguistic items that bear a complicated relation to sentential syntax. Interjections are a mirror reflecting methodological and theoretical assumptions more than a coherent linguistic category that affords unitary treatment. This chapter focuses on linguistic items that typically function as free-standing utterances, and on some of the conceptual, methodological, and theoretical questions generated by such items. A key move is to study these items in the setting of conversational sequences, rather than from the “flatland” of sequential syntax. This makes visible how some of the most frequent interjections streamline everyday language use and scaffold complex language. Approaching interjections in terms of their sequential positions and interactional functions has the potential to reveal and explain patterns of universality and diversity in interjections.
  • Dingemanse, M., & Enfield, N. J. (2024). Interactive repair and the foundations of language. Trends in Cognitive Sciences, 28(1), 30-42. doi:10.1016/j.tics.2023.09.003.

    Abstract

    The robustness and flexibility of human language is underpinned by a machinery of interactive repair. Repair is deeply intertwined with two core properties of human language: reflexivity (it can communicate about itself) and accountability (it is used to publicly enforce social norms). We review empirical and theoretical advances from across the cognitive sciences that mark interactive repair as a domain of pragmatic universals, a key place to study metacognition in interaction, and a system that enables collective computation. This provides novel insights on the role of repair in comparative cognition, language development and human-computer interaction. As an always-available fallback option and an infrastructure for negotiating social commitments, interactive repair is foundational to the resilience, complexity, and flexibility of human language.
  • Dingemanse, M. (2024). Interjections at the heart of language. Annual Review of Linguistics, 10, 257-277. doi:10.1146/annurev-linguistics-031422-124743.
  • Doerig, A., Sommers, R. P., Seeliger, K., Richards, B., Ismael, J., Lindsay, G. W., Kording, K. P., Konkle, T., Van Gerven, M. A. J., Kriegeskorte, N., & Kietzmann, T. C. (2023). The neuroconnectionist research programme. Nature Reviews Neuroscience, 24, 431-450. doi:10.1038/s41583-023-00705-w.

    Abstract

    Artificial neural networks (ANNs) inspired by biology are beginning to be widely used to model behavioural and neural data, an approach we call ‘neuroconnectionism’. ANNs have been not only lauded as the current best models of information processing in the brain but also criticized for failing to account for basic cognitive functions. In this Perspective article, we propose that arguing about the successes and failures of a restricted set of current ANNs is the wrong approach to assess the promise of neuroconnectionism for brain science. Instead, we take inspiration from the philosophy of science, and in particular from Lakatos, who showed that the core of a scientific research programme is often not directly falsifiable but should be assessed by its capacity to generate novel insights. Following this view, we present neuroconnectionism as a general research programme centred around ANNs as a computational language for expressing falsifiable theories about brain computation. We describe the core of the programme, the underlying computational framework and its tools for testing specific neuroscientific hypotheses and deriving novel understanding. Taking a longitudinal view, we review past and present neuroconnectionist projects and their responses to challenges and argue that the research programme is highly progressive, generating new and otherwise unreachable insights into the workings of the brain.
  • Donnelly, S., Rowland, C. F., Chang, F., & Kidd, E. (2024). A comprehensive examination of prediction‐based error as a mechanism for syntactic development: Evidence from syntactic priming. Cognitive Science, 48(4): e13431. doi:10.1111/cogs.13431.

    Abstract

    Prediction-based accounts of language acquisition have the potential to explain several different effects in child language acquisition and adult language processing. However, evidence regarding the developmental predictions of such accounts is mixed. Here, we consider several predictions of these accounts in two large-scale developmental studies of syntactic priming of the English dative alternation. Study 1 was a cross-sectional study (N = 140) of children aged 3−9 years, in which we found strong evidence of abstract priming and the lexical boost, but little evidence that either effect was moderated by age. We found weak evidence for a prime surprisal effect; however, exploratory analyses revealed a protracted developmental trajectory for verb-structure biases, providing an explanation as for why prime surprisal effects are more elusive in developmental populations. In a longitudinal study (N = 102) of children in tightly controlled age bands at 42, 48, and 54 months, we found priming effects emerged on trials with verb overlap early but did not observe clear evidence of priming on trials without verb overlap until 54 months. There was no evidence of a prime surprisal effect at any time point and none of the effects were moderated by age. The results relating to the emergence of the abstract priming and lexical boost effects are consistent with prediction-based models, while the absence of age-related effects appears to reflect the structure-specific challenges the dative presents to English-acquiring children. Overall, our complex pattern of findings demonstrates the value of developmental data sets in testing psycholinguistic theory.

    Additional information

    Table S1 and S2; Appendix A, B, C and D
  • D’Onofrio, G., Accogli, A., Severino, M., Caliskan, H., Kokotović, T., Blazekovic, A., Jercic, K. G., Markovic, S., Zigman, T., Goran, K., Barišić, N., Duranovic, V., Ban, A., Borovecki, F., Ramadža, D. P., Barić, I., Fazeli, W., Herkenrath, P., Marini, C., Vittorini, R., Gowda, V., Bouman, A., Rocca, C., Alkhawaja, I. A., Murtaza, B. N., Rehman, M. M. U., Al Alam, C., Nader, G., Mancardi, M. M., Giacomini, T., Srivastava, S., Alvi, J. R., Tomoum, H., Matricardi, S., Iacomino, M., Riva, A., Scala, M., Madia, F., Pistorio, A., Salpietro, V., Minetti, C., Rivière, J.-B., Srour, M., Efthymiou, S., Maroofian, R., Houlden, H., Vernes, S. C., Zara, F., Striano, P., & Nagy, V. (2023). Genotype–phenotype correlation in contactin-associated protein-like 2 (CNTNAP-2) developmental disorder. Human Genetics, 142, 909-925. doi:10.1007/s00439-023-02552-2.

    Abstract

    Contactin-associated protein-like 2 (CNTNAP2) gene encodes for CASPR2, a presynaptic type 1 transmembrane protein, involved in cell–cell adhesion and synaptic interactions. Biallelic CNTNAP2 loss has been associated with “Pitt-Hopkins-like syndrome-1” (MIM#610042), while the pathogenic role of heterozygous variants remains controversial. We report 22 novel patients harboring mono- (n = 2) and bi-allelic (n = 20) CNTNAP2 variants and carried out a literature review to characterize the genotype–phenotype correlation. Patients (M:F 14:8) were aged between 3 and 19 years and affected by global developmental delay (GDD) (n = 21), moderate to profound intellectual disability (n = 17) and epilepsy (n = 21). Seizures mainly started in the first two years of life (median 22.5 months). Antiseizure medications were successful in controlling the seizures in about two-thirds of the patients. Autism spectrum disorder (ASD) and/or other neuropsychiatric comorbidities were present in nine patients (40.9%). Nonspecific midline brain anomalies were noted in most patients while focal signal abnormalities in the temporal lobes were noted in three subjects. Genotype–phenotype correlation was performed by also including 50 previously published patients (15 mono- and 35 bi-allelic variants). Overall, GDD (p < 0.0001), epilepsy (p < 0.0001), hyporeflexia (p = 0.012), ASD (p = 0.009), language impairment (p = 0.020) and severe cognitive impairment (p = 0.031) were significantly associated with the presence of biallelic versus monoallelic variants. We have defined the main features associated with biallelic CNTNAP2 variants, as severe cognitive impairment, epilepsy and behavioral abnormalities. We propose CASPR2-deficiency neurodevelopmental disorder as an exclusively recessive disease while the contribution of heterozygous variants is less likely to follow an autosomal dominant inheritance pattern.

    Additional information

    supplementary tables
  • Drijvers, L., & Holler, J. (2023). The multimodal facilitation effect in human communication. Psychonomic Bulletin & Review, 30(2), 792-801. doi:10.3758/s13423-022-02178-x.

    Abstract

    During face-to-face communication, recipients need to rapidly integrate a plethora of auditory and visual signals. This integration of signals from many different bodily articulators, all offset in time, with the information in the speech stream may either tax the cognitive system, thus slowing down language processing, or may result in multimodal facilitation. Using the classical shadowing paradigm, participants shadowed speech from face-to-face, naturalistic dyadic conversations in an audiovisual context, an audiovisual context without visual speech (e.g., lips), and an audio-only context. Our results provide evidence of a multimodal facilitation effect in human communication: participants were faster in shadowing words when seeing multimodal messages compared with when hearing only audio. Also, the more visual context was present, the fewer shadowing errors were made, and the earlier in time participants shadowed predicted lexical items. We propose that the multimodal facilitation effect may contribute to the ease of fast face-to-face conversational interaction.
  • Drijvers, L., & Mazzini, S. (2023). Neural oscillations in audiovisual language and communication. In Oxford Research Encyclopedia of Neuroscience. Oxford: Oxford University Press. doi:10.1093/acrefore/9780190264086.013.455.

    Abstract

    How do neural oscillations support human audiovisual language and communication? Considering the rhythmic nature of audiovisual language, in which stimuli from different sensory modalities unfold over time, neural oscillations represent an ideal candidate to investigate how audiovisual language is processed in the brain. Modulations of oscillatory phase and power are thought to support audiovisual language and communication in multiple ways. Neural oscillations synchronize by tracking external rhythmic stimuli or by re-setting their phase to presentation of relevant stimuli, resulting in perceptual benefits. In particular, synchronized neural oscillations have been shown to subserve the processing and the integration of auditory speech, visual speech, and hand gestures. Furthermore, synchronized oscillatory modulations have been studied and reported between brains during social interaction, suggesting that their contribution to audiovisual communication goes beyond the processing of single stimuli and applies to natural, face-to-face communication.

    There are still some outstanding questions that need to be answered to reach a better understanding of the neural processes supporting audiovisual language and communication. In particular, it is not entirely clear yet how the multitude of signals encountered during audiovisual communication are combined into a coherent percept and how this is affected during real-world dyadic interactions. In order to address these outstanding questions, it is fundamental to consider language as a multimodal phenomenon, involving the processing of multiple stimuli unfolding at different rhythms over time, and to study language in its natural context: social interaction. Other outstanding questions could be addressed by implementing novel techniques (such as rapid invisible frequency tagging, dual-electroencephalography, or multi-brain stimulation) and analysis methods (e.g., using temporal response functions) to better understand the relationship between oscillatory dynamics and efficient audiovisual communication.
  • Düngen, D., Fitch, W. T., & Ravignani, A. (2023). Hoover the talking seal [quick guide]. Current Biology, 33, R50-R52. doi:10.1016/j.cub.2022.12.023.
  • Düngen, D., & Ravignani, A. (2023). The paradox of learned song in a semi-solitary mammal. Ethology, 129(9), 445-497. doi:10.1111/eth.13385.

    Abstract

    Learning can occur via trial and error; however, learning from conspecifics is faster and more efficient. Social animals can easily learn from conspecifics, but how do less social species learn? In particular, birds provide astonishing examples of social learning of vocalizations, while vocal learning from conspecifics is much less understood in mammals. We present a hypothesis aimed at solving an apparent paradox: how can harbor seals (Phoca vitulina) learn their song when their whole lives are marked by loose conspecific social contact? Harbor seal pups are raised individually by their mostly silent mothers. Pups' first few weeks of life show developed vocal plasticity; these weeks are followed by relatively silent years until sexually mature individuals start singing. How can this rather solitary life lead to a learned song? Why do pups display vocal plasticity at a few weeks of age, when this is apparently not needed? Our hypothesis addresses these questions and tries to explain how vocal learning fits into the natural history of harbor seals, and potentially other less social mammals. We suggest that harbor seals learn during a sensitive period within puppyhood, where they are exposed to adult males singing. In particular, we hypothesize that, to make this learning possible, the following happens concurrently: (1) mothers give birth right before male singing starts, (2) pups enter a sensitive learning phase around weaning time, which (3) coincides with their foraging expeditions at sea which, (4) in turn, coincide with the peak singing activity of adult males. In other words, harbor seals show vocal learning as pups so they can acquire elements of their future song from adults, and solitary adults can sing because they have acquired these elements as pups. We review the available evidence and suggest that pups learn adult vocalizations because they are born exactly at the right time to eavesdrop on singing adults. We conclude by advancing empirical predictions and testable hypotheses for future work.
  • Düngen, D., Sarfati, M., & Ravignani, A. (2023). Cross-species research in biomusicality: Methods, pitfalls, and prospects. In E. H. Margulis, P. Loui, & D. Loughridge (Eds.), The science-music borderlands: Reckoning with the past and imagining the future (pp. 57-95). Cambridge, MA, USA: The MIT Press. doi:10.7551/mitpress/14186.003.0008.
  • Eekhof, L. S., Van Krieken, K., Sanders, J., & Willems, R. M. (2023). Engagement with narrative characters: The role of social-cognitive abilities and linguistic viewpoint. Discourse Processes, 60(6), 411-439. doi:10.1080/0163853X.2023.2206773.

    Abstract

    This article explores the role of text and reader characteristics in character engagement experiences. In an online study, participants completed several self-report and behavioral measures of social-cognitive abilities and read two literary narratives in which the presence of linguistic viewpoint markers was varied using a highly controlled manipulation strategy. Afterward, participants reported on their character engagement experiences. A principal component analysis on participants’ responses revealed the multidimensional nature of character engagement, which included both self- and other-oriented emotional responses (e.g., empathy, personal distress) as well as more cognitive responses (e.g., identification, perspective taking). Furthermore, character engagement was found to rely on a wide range of social-cognitive abilities but not on the presence of viewpoint markers. Finally, and most importantly, we did not find convincing evidence for an interplay between social-cognitive abilities and the presence of viewpoint markers. These findings suggest that readers rely on their social-cognitive abilities to engage with the inner worlds of fictional others, more so than on the lexical cues of those inner worlds provided by the text.
  • Eekhof, L. S. (2024). Reading the mind: The relationship between social cognition and narrative processing. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Eekhof, L. S., & Mar, R. A. (2024). Does reading about fictional minds make us more curious about real ones? Language and Cognition, 16(1), 176-196. doi:10.1017/langcog.2023.30.

    Abstract

    Although there is a large body of research assessing whether exposure to narratives boosts social cognition immediately afterward, not much research has investigated the underlying mechanism of this putative effect. This experiment investigates the possibility that reading a narrative increases social curiosity directly afterward, which might explain the short-term boosts in social cognition reported by some others. We developed a novel measure of state social curiosity and collected data from participants (N = 222) who were randomly assigned to read an excerpt of narrative fiction or expository nonfiction. Contrary to our expectations, we found that those who read a narrative exhibited less social curiosity afterward than those who read an expository text. This result was not moderated by trait social curiosity. An exploratory analysis uncovered that the degree to which texts present readers with social targets predicted less social curiosity. Our experiment demonstrates that reading narratives, or possibly texts with social content in general, may engage and fatigue social-cognitive abilities, causing a temporary decrease in social curiosity. Such texts might also temporarily satisfy the need for social connection, temporarily reducing social curiosity. Both accounts are in line with theories describing how narratives result in better social cognition over the long term.
  • Egger, J. (2023). Need for speed? The role of speed of processing in early lexical development. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Eijk, L. (2023). Linguistic alignment: The syntactic, prosodic, and segmental phonetic levels. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Eising, E., Vino, A., Mabie, H. L., Campbell, T. F., Shriberg, L. D., & Fisher, S. E. (2024). Genome sequencing of idiopathic speech delay. Human Mutation, 2024: 9692863. doi:10.1155/2024/9692863.

    Abstract

    Genetic investigations of people with speech and language disorders can provide windows into key aspects of human biology. Most genomic research into impaired speech development has so far focused on childhood apraxia of speech (CAS), a rare neurodevelopmental disorder characterized by difficulties with coordinating rapid fine motor sequences that underlie proficient speech. In 2001, pathogenic variants of FOXP2 provided the first molecular genetic accounts of CAS aetiology. Since then, disruptions in several other genes have been implicated in CAS, with a substantial proportion of cases being explained by high-penetrance variants. However, the genetic architecture underlying other speech-related disorders remains less well understood. Thus, in the present study, we used systematic DNA sequencing methods to investigate idiopathic speech delay, as characterized by delayed speech development in the absence of a motor speech diagnosis (such as CAS), a language/reading disorder, or intellectual disability. We performed genome sequencing in a cohort of 23 children with a rigorous diagnosis of idiopathic speech delay. For roughly half of the sample (ten probands), sufficient DNA was also available for genome sequencing in both parents, allowing discovery of de novo variants. In the thirteen singleton probands, we focused on identifying loss-of-function and likely damaging missense variants in genes intolerant to such mutations. We found that one speech delay proband carried a pathogenic frameshift deletion in SETD1A, a gene previously implicated in a broader variable monogenic syndrome characterized by global developmental problems including delayed speech and/or language development, mild intellectual disability, facial dysmorphisms, and behavioural and psychiatric symptoms. Of note, pathogenic SETD1A variants have been independently reported in children with CAS in two separate studies. In other probands in our speech delay cohort, likely pathogenic missense variants were identified affecting highly conserved amino acids in key functional domains of SPTBN1 and ARF3. Overall, this study expands the phenotype spectrum associated with pathogenic SETD1A variants, to also include idiopathic speech delay without CAS or intellectual disability, and suggests additional novel potential candidate genes that may harbour high-penetrance variants that can disrupt speech development.

    Additional information

    supplemental table
  • Ekerdt, C., Takashima, A., & McQueen, J. M. (2023). Memory consolidation in second language neurocognition. In K. Morgan-Short, & J. G. Van Hell (Eds.), The Routledge handbook of second language acquisition and neurolinguistics. Oxfordshire: Routledge.

    Abstract

    Acquiring a second language (L2) requires newly learned information to be integrated with existing knowledge. It has been proposed that several memory systems work together to enable this process of rapidly encoding new information and then slowly incorporating it with existing knowledge, such that it is consolidated and integrated into the language network without catastrophic interference. This chapter focuses on consolidation of L2 vocabulary. First, the complementary learning systems model is outlined, along with the model’s predictions regarding lexical consolidation. Next, word learning studies in first language (L1) that investigate the factors playing a role in consolidation, and the neural mechanisms underlying this, are reviewed. Using the L1 memory consolidation literature as background, the chapter then presents what is currently known about memory consolidation in L2 word learning. Finally, considering what is already known about L1 but not about L2, future research investigating memory consolidation in L2 neurocognition is proposed.
  • Emmendorfer, A. K., Bonte, M., Jansma, B. M., & Kotz, S. A. (2023). Sensitivity to syllable stress regularities in externally but not self‐triggered speech in Dutch. European Journal of Neuroscience, 58(1), 2297-2314. doi:10.1111/ejn.16003.

    Abstract

    Several theories of predictive processing propose reduced sensory and neural responses to anticipated events. Support comes from magnetoencephalography/electroencephalography (M/EEG) studies, showing reduced auditory N1 and P2 responses to self-generated compared to externally generated events, or when the timing and form of stimuli are more predictable. The current study examined the sensitivity of N1 and P2 responses to statistical speech regularities. We employed a motor-to-auditory paradigm comparing event-related potential (ERP) responses to externally and self-triggered pseudowords. Participants were presented with a cue indicating which button to press (motor-auditory condition) or which pseudoword would be presented (auditory-only condition). Stimuli consisted of the participant's own voice uttering pseudowords that varied in phonotactic probability and syllable stress. We expected to see N1 and P2 suppression for self-triggered stimuli, with greater suppression effects for more predictable features such as high phonotactic probability and first-syllable stress in pseudowords. In a temporal principal component analysis (PCA), we observed an interaction between syllable stress and condition for the N1, where second-syllable stress items elicited a larger N1 than first-syllable stress items, but only for externally generated stimuli. We further observed an effect of syllable stress on the P2, where first-syllable stress items elicited a larger P2. Strikingly, we did not observe motor-induced suppression for self-triggered stimuli for either the N1 or P2 component, likely due to the temporal predictability of the stimulus onset in both conditions. Taking into account previous findings, the current results suggest that sensitivity to syllable stress regularities depends on task demands.

    Additional information

    Supporting Information
  • Engelen, M. M., Franken, M.-C. J. P., Stipdonk, L. W., Horton, S. E., Jackson, V. E., Reilly, S., Morgan, A. T., Fisher, S. E., Van Dulmen, S., & Eising, E. (2024). The association between stuttering burden and psychosocial aspects of life in adults. Journal of Speech, Language, and Hearing Research. Advance online publication. doi:10.1044/2024_JSLHR-23-00562.

    Abstract

    Purpose:
    Stuttering is a speech condition that can have a major impact on a person's quality of life. This descriptive study aimed to identify subgroups of people who stutter (PWS) based on stuttering burden and to investigate differences between these subgroups on psychosocial aspects of life.

    Method:
    The study included 618 adult participants who stutter. They completed a detailed survey examining stuttering symptomatology, impact of stuttering on anxiety, education and employment, experience of stuttering, and levels of depression, anxiety, and stress. A two-step cluster analytic procedure was performed to identify subgroups of PWS, based on self-report of stuttering frequency, severity, affect, and anxiety, four measures that together inform about stuttering burden.

    Results:
    We identified a high- (n = 230) and a low-burden subgroup (n = 372). The high-burden subgroup reported a significantly higher impact of stuttering on education and employment, and higher levels of general depression, anxiety, stress, and overall impact of stuttering. These participants also reported having trialed a greater number of different stuttering therapies than those with lower burden.

    Conclusions:
    Our results emphasize the need to be attentive to the diverse experiences and needs of PWS, rather than treating them as a homogeneous group. Our findings also stress the importance of personalized therapeutic strategies for individuals with stuttering, considering all aspects that could influence their stuttering burden. People with high-burden stuttering might, for example, have a higher need for psychological therapy to reduce stuttering-related anxiety. People with less emotional reactions but severe speech distortions may also have a moderate to high burden, but they may have a higher need for speech techniques to communicate with more ease. Future research should give more insights into the therapeutic needs of people highly burdened by their stuttering.
  • Ge, R., Yu, Y., Qi, Y. X., Fan, Y.-n., Chen, S., Gao, C., Haas, S. S., New, F., Boomsma, D. I., Brodaty, H., Brouwer, R. M., Buckner, R., Caseras, X., Crivello, F., Crone, E. A., Erk, S., Fisher, S. E., Franke, B., Glahn, D. C., Dannlowski, U., Grotegerd, D., Gruber, O., Hulshoff Pol, H. E., Schumann, G., Tamnes, C. K., Walter, H., Wierenga, L. M., Jahanshad, N., Thompson, P. M., Frangou, S., & ENIGMA Lifespan Working Group (2024). Normative modelling of brain morphometry across the lifespan with CentileBrain: Algorithm benchmarking and model optimisation. The Lancet Digital Health, 6(3), e211-e221. doi:10.1016/S2589-7500(23)00250-9.

    Abstract

    The value of normative models in research and clinical practice relies on their robustness and a systematic comparison of different modelling algorithms and parameters; however, this has not been done to date. We aimed to identify the optimal approach for normative modelling of brain morphometric data through systematic empirical benchmarking, by quantifying the accuracy of different algorithms and identifying parameters that optimised model performance. We developed this framework with regional morphometric data from 37 407 healthy individuals (53% female and 47% male; aged 3–90 years) from 87 datasets from Europe, Australia, the USA, South Africa, and east Asia following a comparative evaluation of eight algorithms and multiple covariate combinations pertaining to image acquisition and quality, parcellation software versions, global neuroimaging measures, and longitudinal stability. The multivariate fractional polynomial regression (MFPR) emerged as the preferred algorithm, optimised with non-linear polynomials for age and linear effects of global measures as covariates. The MFPR models showed excellent accuracy across the lifespan and within distinct age-bins and longitudinal stability over a 2-year period. The performance of all MFPR models plateaued at sample sizes exceeding 3000 study participants. This model can inform about the biological and behavioural implications of deviations from typical age-related neuroanatomical changes and support future study designs. The model and scripts described here are freely available through CentileBrain.
  • Lu, A. T., Fei, Z., Haghani, A., Robeck, T. R., Zoller, J. A., Li, C. Z., Lowe, R., Yan, Q., Zhang, J., Vu, H., Ablaeva, J., Acosta-Rodriguez, V. A., Adams, D. M., Almunia, J., Aloysius, A., Ardehali, R., Arneson, A., Baker, C. S., Banks, G., Belov, K., Bennett, N. C., Black, P., Blumstein, D. T., Bors, E. K., Breeze, C. E., Brooke, R. T., Brown, J. L., Carter, G. G., Caulton, A., Cavin, J. M., Chakrabarti, L., Chatzistamou, I., Chen, H., Cheng, K., Chiavellini, P., Choi, O. W., Clarke, S. M., Cooper, L. N., Cossette, M. L., Day, J., DeYoung, J., DiRocco, S., Dold, C., Ehmke, E. E., Emmons, C. K., Emmrich, S., Erbay, E., Erlacher-Reid, C., Faulkes, C. G., Ferguson, S. H., Finno, C. J., Flower, J. E., Gaillard, J. M., Garde, E., Gerber, L., Gladyshev, V. N., Gorbunova, V., Goya, R. G., Grant, M. J., Green, C. B., Hales, E. N., Hanson, M. B., Hart, D. W., Haulena, M., Herrick, K., Hogan, A. N., Hogg, C. J., Hore, T. A., Huang, T., Izpisua Belmonte, J. C., Jasinska, A. J., Jones, G., Jourdain, E., Kashpur, O., Katcher, H., Katsumata, E., Kaza, V., Kiaris, H., Kobor, M. S., Kordowitzki, P., Koski, W. R., Krützen, M., Kwon, S. B., Larison, B., Lee, S. G., Lehmann, M., Lemaitre, J. F., Levine, A. J., Li, C., Li, X., Lim, A. R., Lin, D. T. S., Lindemann, D. M., Little, T. J., Macoretta, N., Maddox, D., Matkin, C. O., Mattison, J. A., McClure, M., Mergl, J., Meudt, J. J., Montano, G. A., Mozhui, K., Munshi-South, J., Naderi, A., Nagy, M., Narayan, P., Nathanielsz, P. W., Nguyen, N. B., Niehrs, C., O’Brien, J. K., O’Tierney Ginn, P., Odom, D. T., Ophir, A. G., Osborn, S., Ostrander, E. A., Parsons, K. M., Paul, K. C., Pellegrini, M., Peters, K. J., Pedersen, A. B., Petersen, J. L., Pietersen, D. W., Pinho, G. 
M., Plassais, J., Poganik, J. R., Prado, N. A., Reddy, P., Rey, B., Ritz, B. R., Robbins, J., Rodriguez, M., Russell, J., Rydkina, E., Sailer, L. L., Salmon, A. B., Sanghavi, A., Schachtschneider, K. M., Schmitt, D., Schmitt, T., Schomacher, L., Schook, L. B., Sears, K. E., Seifert, A. W., Seluanov, A., Shafer, A. B. A., Shanmuganayagam, D., Shindyapina, A. V., Simmons, M., Singh, K., Sinha, I., Slone, J., Snell, R. G., Soltanmaohammadi, E., Spangler, M. L., Spriggs, M. C., Staggs, L., Stedman, N., Steinman, K. J., Stewart, D. T., Sugrue, V. J., Szladovits, B., Takahashi, J. S., Takasugi, M., Teeling, E. C., Thompson, M. J., Van Bonn, B., Vernes, S. C., Villar, D., Vinters, H. V., Wallingford, M. C., Wang, N., Wayne, R. K., Wilkinson, G. S., Williams, C. K., Williams, R. W., Yang, X. W., Yao, M., Young, B. G., Zhang, B., Zhang, Z., Zhao, P., Zhao, Y., Zhou, W., Zimmermann, J., Ernst, J., Raj, K., & Horvath, S. (2023). Universal DNA methylation age across mammalian tissues. Nature aging, 3, 1144-1166. doi:10.1038/s43587-023-00462-6.

    Abstract

    Aging, often considered a result of random cellular damage, can be accurately estimated using DNA methylation profiles, the foundation of pan-tissue epigenetic clocks. Here, we demonstrate the development of universal pan-mammalian clocks, using 11,754 methylation arrays from our Mammalian Methylation Consortium, which encompass 59 tissue types across 185 mammalian species. These predictive models estimate mammalian tissue age with high accuracy (r > 0.96). Age deviations correlate with human mortality risk, mouse somatotropic axis mutations and caloric restriction. We identified specific cytosines with methylation levels that change with age across numerous species. These sites, highly enriched in polycomb repressive complex 2-binding locations, are near genes implicated in mammalian development, cancer, obesity and longevity. Our findings offer new evidence suggesting that aging is evolutionarily conserved and intertwined with developmental processes across all mammals.
  • Ferré, G. (2023). Pragmatic gestures and prosody. In W. Pouw, J. Trujillo, H. R. Bosker, L. Drijvers, M. Hoetjes, J. Holler, S. Kadava, L. Van Maastricht, E. Mamus, & A. Ozyurek (Eds.), Gesture and Speech in Interaction (GeSpIn) Conference. doi:10.17617/2.3527215.

    Abstract

    The study presented here focuses on two pragmatic gestures: the hand flip (Ferré, 2011), a gesture of the Palm Up Open Hand/PUOH family (Müller, 2004), and the closed hand, which can be considered the opposite kind of movement to the opening of the hands present in the PUOH gesture. Whereas one of the functions of the hand flip has been described as presenting a new point in speech (Cienki, 2021), the closed hand gesture has, to the best of our knowledge, not yet been described in the literature. It can, however, be conceived of as having the opposite function of announcing the end of a point in discourse. The object of the present study is therefore to determine, through the study of prosodic features, whether the two gestures are found in the same type of speech units and what their respective scope is. Drawing on a corpus of three TED Talks in French, the prosodic characteristics of the speech that accompanies the two gestures are examined. The hypothesis developed in the present paper is that their scope should be reflected in the prosody of accompanying speech, especially pitch key, tone, and relative pitch range. The prediction is that hand flips and closing hand gestures are located at the periphery of Intonation Phrases (IPs), Inter-Pausal Units (IPUs), or more conversational Turn Constructional Units (TCUs), and are likely to co-occur with pauses in speech. But because of the natural slope of intonation in speech, the speech that accompanies early gestures in Intonation Phrases should reveal different features from the speech at the end of intonational units. Tones should differ as well, given the prosodic structure of spoken French.
  • Ferreira, F., & Huettig, F. (2023). Fast and slow language processing: A window into dual-process models of cognition. [Open Peer commentary on De Neys]. Behavioral and Brain Sciences, 46: e121. doi:10.1017/S0140525X22003041.

    Abstract

    Our understanding of dual-process models of cognition may benefit from a consideration of language processing, as language comprehension involves fast and slow processes analogous to those used for reasoning. More specifically, De Neys's criticisms of the exclusivity assumption and the fast-to-slow switch mechanism are consistent with findings from the literature on the construction and revision of linguistic interpretations.
  • Fitz, H., Hagoort, P., & Petersson, K. M. (2024). Neurobiological causal models of language processing. Neurobiology of Language, 5(1), 225-247. doi:10.1162/nol_a_00133.

    Abstract

    The language faculty is physically realized in the neurobiological infrastructure of the human brain. Despite significant efforts, an integrated understanding of this system remains a formidable challenge. What is missing from most theoretical accounts is a specification of the neural mechanisms that implement language function. Computational models that have been put forward generally lack an explicit neurobiological foundation. We propose a neurobiologically informed causal modeling approach which offers a framework for how to bridge this gap. A neurobiological causal model is a mechanistic description of language processing that is grounded in, and constrained by, the characteristics of the neurobiological substrate. It intends to model the generators of language behavior at the level of implementational causality. We describe key features and neurobiological component parts from which causal models can be built and provide guidelines on how to implement them in model simulations. Then we outline how this approach can shed new light on the core computational machinery for language, the long-term storage of words in the mental lexicon and combinatorial processing in sentence comprehension. In contrast to cognitive theories of behavior, causal models are formulated in the “machine language” of neurobiology which is universal to human cognition. We argue that neurobiological causal modeling should be pursued in addition to existing approaches. Eventually, this approach will allow us to develop an explicit computational neurobiology of language.
  • Fiveash, A., Ferreri, L., Bouwer, F. L., Kösem, A., Moghimi, S., Ravignani, A., Keller, P. E., & Tillmann, B. (2023). Can rhythm-mediated reward boost learning, memory, and social connection? Perspectives for future research. Neuroscience and Biobehavioral Reviews, 149: 105153. doi:10.1016/j.neubiorev.2023.105153.

    Abstract

    Studies of rhythm processing and of reward have progressed separately, with little connection between the two. However, consistent links between rhythm and reward are beginning to surface, with research suggesting that synchronization to rhythm is rewarding, and that this rewarding element may in turn also boost this synchronization. The current mini review shows that the combined study of rhythm and reward can be beneficial to better understand their independent and combined roles across two central aspects of cognition: 1) learning and memory, and 2) social connection and interpersonal synchronization; which have so far been studied largely independently. From this basis, it is discussed how connections between rhythm and reward can be applied to learning and memory and social connection across different populations, taking into account individual differences, clinical populations, human development, and animal research. Future research will need to consider the rewarding nature of rhythm, and that rhythm can in turn boost reward, potentially enhancing other cognitive and social processes.
  • He, J., Frances, C., Creemers, A., & Brehm, L. (2024). Effects of irrelevant unintelligible and intelligible background speech on spoken language production. Quarterly Journal of Experimental Psychology. Advance online publication. doi:10.1177/17470218231219971.

    Abstract

    Earlier work has explored spoken word production during irrelevant background speech such as intelligible and unintelligible word lists. The present study compared how different types of irrelevant background speech (word lists vs. sentences) influenced spoken word production relative to a quiet control condition, and whether the influence depended on the intelligibility of the background speech. Experiment 1 presented native Dutch speakers with Chinese word lists and sentences. Experiment 2 presented a similar group with Dutch word lists and sentences. In both experiments, the lexical selection demands in speech production were manipulated by varying name agreement (high vs. low) of the to-be-named pictures. Results showed that background speech, regardless of its intelligibility, disrupted spoken word production relative to a quiet condition, but no effects of word lists versus sentences in either language were found. Moreover, the disruption by intelligible background speech compared with the quiet condition was eliminated when planning low name agreement pictures. These findings suggest that any speech, even unintelligible speech, interferes with production, which implies that the disruption of spoken word production is mainly phonological in nature. The disruption by intelligible background speech can be reduced or eliminated via top–down attentional engagement.
  • Frances, C. (2024). Good enough processing: What have we learned in the 20 years since Ferreira et al. (2002)? Frontiers in Psychology, 15: 1323700. doi:10.3389/fpsyg.2024.1323700.

    Abstract

    Traditionally, language processing has been thought of in terms of complete processing of the input. In contrast to this, Ferreira and colleagues put forth the idea of good enough processing. The proposal was that during everyday processing, ambiguities remain unresolved, we rely on heuristics instead of full analyses, and we carry out deep processing only if we need to for the task at hand. This idea has gathered substantial traction since its conception. In the current work, I review the papers that have tested the three key claims of good enough processing: ambiguities remain unresolved and underspecified, we use heuristics to parse sentences, and deep processing is only carried out if required by the task. I find mixed evidence for these claims and conclude with an appeal to further refinement of the claims and predictions of the theory.
  • Galke, L., Vagliano, I., Franke, B., Zielke, T., & Scherp, A. (2023). Lifelong learning on evolving graphs under the constraints of imbalanced classes and new classes. Neural networks, 164, 156-176. doi:10.1016/j.neunet.2023.04.022.

    Abstract

    Lifelong graph learning deals with the problem of continually adapting graph neural network (GNN) models to changes in evolving graphs. We address two critical challenges of lifelong graph learning in this work: dealing with new classes and tackling imbalanced class distributions. The combination of these two challenges is particularly relevant since newly emerging classes typically resemble only a tiny fraction of the data, adding to the already skewed class distribution. We make several contributions: First, we show that the amount of unlabeled data does not influence the results, which is an essential prerequisite for lifelong learning on a sequence of tasks. Second, we experiment with different label rates and show that our methods can perform well with only a tiny fraction of annotated nodes. Third, we propose the gDOC method to detect new classes under the constraint of having an imbalanced class distribution. The critical ingredient is a weighted binary cross-entropy loss function to account for the class imbalance. Moreover, we demonstrate combinations of gDOC with various base GNN models such as GraphSAGE, Simplified Graph Convolution, and Graph Attention Networks. Lastly, our k-neighborhood time difference measure provably normalizes the temporal changes across different graph datasets. With extensive experimentation, we find that the proposed gDOC method is consistently better than a naive adaption of DOC to graphs. Specifically, in experiments using the smallest history size, the out-of-distribution detection score of gDOC is 0.09 compared to 0.01 for DOC. Furthermore, gDOC achieves an Open-F1 score, a combined measure of in-distribution classification and out-of-distribution detection, of 0.33 compared to 0.25 of DOC (32% increase).

    Additional information

    Link to preprint version code datasets