Publications

  • Klein, W. (1987). Eine Verschärfung des Entscheidungsproblems. Rechtshistorisches Journal, 6, 209-210.
  • Klein, M., Van Donkelaar, M., Verhoef, E., & Franke, B. (2017). Imaging genetics in neurodevelopmental psychopathology. American Journal of Medical Genetics Part B: Neuropsychiatric Genetics, 174(5), 485-537. doi:10.1002/ajmg.b.32542.

    Abstract

    Neurodevelopmental disorders are defined by highly heritable problems during development and brain growth. Attention-deficit/hyperactivity disorder (ADHD), autism spectrum disorders (ASDs), and intellectual disability (ID) are frequent neurodevelopmental disorders, with common comorbidity among them. Imaging genetics studies of the role of disease-linked genetic variants in brain structure and function have been performed to unravel the etiology of these disorders. Here, we reviewed the imaging genetics literature on these disorders, attempting to understand the mechanisms of individual disorders and their clinical overlap. For ADHD and ASD, we selected replicated candidate genes implicated through common genetic variants. For ID, which is mainly caused by rare variants, we included genes for relatively frequent forms of ID occurring comorbid with ADHD or ASD. We reviewed case-control studies and studies of risk variants in healthy individuals. Imaging genetics studies for ADHD were retrieved for SLC6A3/DAT1, DRD2, DRD4, NOS1, and SLC6A4/5HTT. For ASD, studies on CNTNAP2, MET, OXTR, and SLC6A4/5HTT were found. For ID, we reviewed the genes FMR1, TSC1 and TSC2, NF1, and MECP2. Alterations in brain volume, activity, and connectivity were observed. Several findings were consistent across studies, implicating, for example, SLC6A4/5HTT in brain activation and functional connectivity related to emotion regulation. However, many studies had small sample sizes, and hypothesis-based, brain region-specific studies were common. Results from available studies confirm that imaging genetics can provide insight into the link between genes, disease-related behavior, and the brain. However, the field is still in its early stages, and conclusions about shared mechanisms cannot yet be drawn.
  • Klein, W. (1987). L'espressione della temporalità in una varietà elementare di L2. In A. Ramat (Ed.), L'apprendimento spontaneo di una seconda lingua (pp. 131-146). Bologna: Il Mulino.
  • Klein, W., & Von Stutterheim, C. (1987). Quaestio und referentielle Bewegung in Erzählungen. Linguistische Berichte, 109, 163-183.
  • Klein, W. (Ed.). (1987). Sprache und Ritual [Special Issue]. Zeitschrift für Literaturwissenschaft und Linguistik, (65).
  • Klepp, A., Niccolai, V., Sieksmeyer, J., Arnzen, S., Indefrey, P., Schnitzler, A., & Biermann-Ruben, K. (2017). Body-part specific interactions of action verb processing with motor behaviour. Behavioural Brain Research, 328, 149-158. doi:10.1016/j.bbr.2017.04.002.

    Abstract

    The interaction of action-related language processing with actual movement is an indicator of the functional role of motor cortical involvement in language understanding. This paper describes two experiments using single action verb stimuli. Motor responses were performed with the hand or the foot. To test the double dissociation of language-motor facilitation effects within subjects, Experiments 1 and 2 used a priming procedure where both hand and foot reactions had to be performed in response to different geometrical shapes, which were preceded by action verbs. In Experiment 1, the semantics of the verbs could be ignored whereas Experiment 2 included semantic decisions. Only Experiment 2 revealed a clear double dissociation in reaction times: reactions were facilitated when preceded by verbs describing actions with the matching effector. In Experiment 1, by contrast, there was an interaction between verb-response congruence and a semantic variable related to motor features of the verbs. Thus, the double dissociation paradigm of semantic motor priming was effective, corroborating the role of the motor system in action-related language processing. Importantly, this effect was body part specific.

  • Kong, X., Song, Y., Zhen, Z., & Liu, J. (2017). Genetic Variation in S100B Modulates Neural Processing of Visual Scenes in Han Chinese. Cerebral Cortex, 27(2), 1326-1336. doi:10.1093/cercor/bhv322.

    Abstract

    Spatial navigation is a crucial ability for living. Previous animal studies have shown that the S100B gene is causally related to spatial navigation performance in mice. However, the genetic factors influencing human navigation and its neural substrates remain unclear. Here, we provided the first evidence that the S100B gene modulates neural processing of navigationally relevant scenes in humans. First, with a novel protocol, we demonstrated that the spatial pattern of S100B gene expression in postmortem brains was associated with the brain activation pattern for spatial navigation in general, and for scene processing in particular. Further, in a large fMRI cohort of healthy Han Chinese adults (N = 202), we found that S100B gene polymorphisms modulated scene selectivity in the retrosplenial cortex (RSC) and parahippocampal place area. Finally, the serum levels of S100B protein mediated the association between S100B gene polymorphism and scene selectivity in the RSC. Our study takes the first step toward understanding the neurogenetic mechanism of human spatial navigation and suggests a novel approach to discover candidate genes modulating cognitive functions.

  • Kong, X., Wang, X., Pu, Y., Huang, L., Hao, X., Zhen, Z., & Liu, J. (2017). Human navigation network: The intrinsic functional organization and behavioral relevance. Brain Structure and Function, 222(2), 749-764. doi:10.1007/s00429-016-1243-8.

    Abstract

    Spatial navigation is a crucial ability for living. Previous work has revealed multiple distributed brain regions associated with human navigation. However, little is known about how these regions work together as a network (referred to as the navigation network) to support flexible navigation. In a novel protocol, we combined a neuroimaging meta-analysis with functional connectivity and behavioral data from the same subjects. Briefly, we first constructed the navigation network for each participant by combining a large-scale neuroimaging meta-analysis (with Neurosynth) and resting-state functional magnetic resonance imaging. Then, we investigated multiple topological properties of the navigation networks, including small-worldness, modularity, and highly connected hubs. Finally, we explored the behavioral relevance of these intrinsic properties in a large sample of healthy young adults (N = 190). We found that navigation networks showed small-world and modular organization at the global level. More importantly, we found that increased small-worldness and modularity of the navigation network were associated with better navigation ability. Finally, we found that the right retrosplenial complex (RSC) acted as one of the hubs in the navigation network, and that higher betweenness of this region correlated with better navigation ability, suggesting a critical role of the RSC in modulating the navigation network in the human brain. Our study takes one of the first steps toward understanding the underlying organization of the navigation network. Moreover, these findings suggest the potential of this novel approach for investigating functionally meaningful networks in the human brain and their relations to behavioral impairments in aging and psychiatric patients.
  • Kong, X., Huang, Y., Hu, S., & Liu, J. (2017). Sex-linked association between cortical scene selectivity and navigational ability. NeuroImage, 158, 397-405. doi:10.1016/j.neuroimage.2017.07.031.

    Abstract

    Spatial navigation is a crucial ability for living. Previous studies have shown that males are better at navigation than females, but little is known about the neural basis underlying these sex differences. In this study, we investigated whether cortical scene processing in three well-established scene-selective regions differed between the sexes, by examining sex differences in scene selectivity and its behavioral relevance to navigation. To do this, we used functional magnetic resonance imaging (fMRI) to scan the parahippocampal place area (PPA), retrosplenial complex (RSC), and occipital place area (OPA) in a large cohort of healthy young adults viewing navigationally relevant scenes (N = 202), and correlated their neural selectivity to scenes with their self-reported navigational ability. Behaviorally, we replicated the previous finding that males were better at navigation than females. Neurally, we found that scene selectivity in the bilateral PPA, but not in the RSC or OPA, was significantly higher in males than in females. These differences could not be explained by confounding factors including brain size and fMRI data quality. Importantly, males, but not females, with stronger scene selectivity in the left PPA possessed better navigational ability. This brain-behavior association could not be accounted for by non-navigational abilities (i.e., intelligence and mental rotation ability). Overall, our study provides novel empirical evidence of sex differences in brain activity, inviting further studies on sex differences in the neural network for spatial navigation.

  • Kornfeld, L., & Rossi, G. (2023). Enforcing rules during play: Knowledge, agency, and the design of instructions and reminders. Research on Language and Social Interaction, 56(1), 42-64. doi:10.1080/08351813.2023.2170637.

    Abstract

    Rules of behavior are fundamental to human sociality. Whether on the road, at the dinner table, or during a game, people monitor one another’s behavior for conformity to rules and may take action to rectify violations. In this study, we examine two ways in which rules are enforced during games: instructions and reminders. Building on prior research, we identify instructions as actions produced to rectify violations based on another’s lack of knowledge of the relevant rule; knowledge that the instruction is designed to impart. In contrast to this, the actions we refer to as reminders are designed to enforce rules presupposing the transgressor’s competence and treating the violation as the result of forgetfulness or oversight. We show that instructing and reminding actions differ in turn design, sequential development, the epistemic stances taken by transgressors and enforcers, and in how the action affects the progressivity of the interaction. Data are in German and Italian from the Parallel European Corpus of Informal Interaction (PECII).
  • Kösem, A., & Van Wassenhove, V. (2017). Distinct contributions of low and high frequency neural oscillations to speech comprehension. Language, Cognition and Neuroscience, 32(5), 536-544. doi:10.1080/23273798.2016.1238495.

    Abstract

    In the last decade, the involvement of neural oscillatory mechanisms in speech comprehension has been increasingly investigated. Current evidence suggests that low-frequency and high-frequency neural entrainment to the acoustic dynamics of speech are linked to its analysis. One crucial question is whether acoustical processing primarily modulates neural entrainment, or whether entrainment instead reflects linguistic processing. Here, we review studies investigating the effect of linguistic manipulations on neural oscillatory activity. In light of the current findings, we argue that theta (3–8 Hz) entrainment may primarily reflect the analysis of the acoustic features of speech. In contrast, recent evidence suggests that delta (1–3 Hz) and high-frequency activity (>40 Hz) are reliable indicators of perceived linguistic representations. The interdependence between low-frequency and high-frequency neural oscillations, as well as their causal role in speech comprehension, is further discussed with regard to neurophysiological models of speech processing.
  • Kösem, A., Dai, B., McQueen, J. M., & Hagoort, P. (2023). Neural envelope tracking of speech does not unequivocally reflect intelligibility. NeuroImage, 272: 120040. doi:10.1016/j.neuroimage.2023.120040.

    Abstract

    During listening, brain activity tracks the rhythmic structures of speech signals. Here, we directly dissociated the contribution of neural envelope tracking in the processing of speech acoustic cues from that related to linguistic processing. We examined the neural changes associated with the comprehension of Noise-Vocoded (NV) speech using magnetoencephalography (MEG). Participants listened to NV sentences in a 3-phase training paradigm: (1) pre-training, where NV stimuli were barely comprehended, (2) training with exposure to the original clear version of the speech stimulus, and (3) post-training, where the same stimuli gained intelligibility from the training phase. Using this paradigm, we tested whether the neural response to a speech signal was modulated by its intelligibility without any change in its acoustic structure. To test the influence of spectral degradation on neural envelope tracking independently of training, participants listened to two types of NV sentences (4-band and 2-band NV speech), but were only trained to understand 4-band NV speech. Significant changes in neural tracking were observed in the delta range in relation to the acoustic degradation of speech. However, we failed to find a direct effect of intelligibility on the neural tracking of the speech envelope in both theta and delta ranges, in both auditory regions-of-interest and whole-brain sensor-space analyses. This suggests that acoustics greatly influence the neural tracking response to the speech envelope, and that caution needs to be taken when choosing control signals for speech-brain tracking analyses, considering that a slight change in acoustic parameters can have strong effects on the neural tracking response.
  • De Kovel, C. G. F., Lisgo, S., Karlebach, G., Ju, J., Cheng, G., Fisher, S. E., & Francks, C. (2017). Left-right asymmetry of maturation rates in human embryonic neural development. Biological Psychiatry, 82(3), 204-212. doi:10.1016/j.biopsych.2017.01.016.

    Abstract

    Background

    Left-right asymmetry is a fundamental organizing feature of the human brain, and neuropsychiatric disorders such as schizophrenia sometimes involve alterations of brain asymmetry. As early as 8 weeks post conception, the majority of human fetuses move their right arms more than their left arms, but because nerve fibre tracts are still descending from the forebrain at this stage, spinal-muscular asymmetries are likely to play an important developmental role.
    Methods

    We used RNA sequencing to measure gene expression levels in the left and right spinal cords, and left and right hindbrains, of 18 post-mortem human embryos aged 4-8 weeks post conception. Genes showing embryonic lateralization were tested for an enrichment of signals in genome-wide association data for schizophrenia.
    Results

    The left side of the embryonic spinal cord was found to mature faster than the right side. Both sides transitioned from transcriptional profiles associated with cell division and proliferation at earlier stages, to neuronal differentiation and function at later stages, but the two sides were not in synchrony (p = 2.2E-161). The hindbrain showed a left-right mirrored pattern compared to the spinal cord, consistent with the well-known crossing over of function between these two structures. Genes that showed lateralization in the embryonic spinal cord were enriched for association signals with schizophrenia (p = 4.3E-05).
    Conclusions
    These are the earliest-stage left-right differences of human neural development ever reported. Disruption of the lateralised developmental programme may play a role in the genetic susceptibility to schizophrenia.

  • De Kovel, C. G. F., Syrbe, S., Brilstra, E. H., Verbeek, N., Kerr, B., Dubbs, H., Bayat, A., Desai, S., Naidu, S., Srivastava, S., Cagaylan, H., Yis, U., Saunders, C., Rook, M., Plugge, S., Muhle, H., Afawi, Z., Klein, K. M., Jayaraman, V., Rajagopalan, R., Goldberg, E., Marsh, E., Kessler, S., Bergqvist, C., Conlin, L. K., Krok, B. L., Thiffault, I., Pendziwiat, M., Helbig, I., Polster, T., Borggraefe, I., Lemke, J. R., Van den Boogaardt, M. J., Moller, R. S., & Koeleman, B. P. C. (2017). Neurodevelopmental Disorders Caused by De Novo Variants in KCNB1 Genotypes and Phenotypes. JAMA Neurology, 74(10), 1228-1236. doi:10.1001/jamaneurol.2017.1714.

    Abstract

    Importance

    Knowing the range of symptoms seen in patients with a missense or loss-of-function variant in KCNB1 and how these symptoms correlate with the type of variant will help clinicians with diagnosis and prognosis when treating new patients.
    Objectives

    To investigate the clinical spectrum associated with KCNB1 variants and the genotype-phenotype correlations.
    Design, Setting, and Participants

    This study summarized the clinical and genetic information of patients with a presumed pathogenic variant in KCNB1. Patients were identified in research projects or during clinical testing. Information on patients from previously published articles was collected and authors contacted if feasible. All patients were seen at a clinic at one of the participating institutes because of a presumed genetic disorder. They were tested in a clinical setting or included in a research project.
    Main Outcomes and Measures

    The genetic variant and its inheritance, and information on the patient's symptoms and characteristics in a predefined format. All variants were identified with massively parallel sequencing and confirmed with Sanger sequencing in the patient. Absence of the variant in the parents could be confirmed with Sanger sequencing in all families except one.
    Results

    Of 26 patients (10 female, 15 male, 1 unknown; mean age at inclusion, 9.8 years; age range, 2-32 years) with developmental delay, 20 (77%) carried a missense variant in the ion channel domain of KCNB1, with a concentration of variants in region S5 to S6. Three variants that led to premature stops were located in the C-terminal and 3 in the ion channel domain. Twenty-one of 25 patients (84%) had seizures, with 9 patients (36%) starting with epileptic spasms between 3 and 18 months of age. All patients had developmental delay, with 17 (65%) experiencing severe developmental delay; 14 (82%) with severe delay had behavioral problems. The developmental delay was milder in 4 of 6 patients with stop variants and in a patient with a variant in the S2 transmembrane element rather than the S4 to S6 region.
    Conclusions and Relevance

    De novo KCNB1 missense variants in the ion channel domain and loss-of-function variants in this domain and the C-terminal likely cause neurodevelopmental disorders with or without seizures. Patients with presumed pathogenic variants in KCNB1 have a variable phenotype. However, the type and position of the variants in the protein are (imperfectly) correlated with the severity of the disorder.
  • Kuiper, K., Bimesl, N., Kempen, G., & Ogino, M. (2017). Initial vs. non-initial placement of agent constructions in spoken clauses: A corpus-based study of language production under time pressure. Language Sciences, 64, 16-33. doi:10.1016/j.langsci.2017.06.001.

    Abstract

    In this exploratory study we test the hypothesis that the retrieval from memory of proper noun Agents (PNAs) under processing pressure causes a greater proportion of such semantic arguments to be placed to the right of the initial position in a clause than would be the case if such retrieval from memory were not necessary. This effect is manifest in sports commentary. Processing pressure on sports commentators is modulated by the speed at which the sport is played and reported. Non-initial placement is also facilitated by formulae which have slots in non-initial position. It follows that the non-initial placement of PNAs is not always semantically or pragmatically motivated. This finding therefore runs counter to a strong form of the functionalist hypothesis that syntactic choices available in the systemic structure of the syntax of a language offer solely semantic or pragmatic choices. It is an open question in a weak functionalist account of language and language use how processing and communicative functions interact in general.
  • Kunert, R., & Jongman, S. R. (2017). Entrainment to an auditory signal: Is attention involved? Journal of Experimental Psychology: General, 146(1), 77-88. doi:10.1037/xge0000246.

    Abstract

    Many natural auditory signals, including music and language, change periodically. The effect of such auditory rhythms on the brain is unclear however. One widely held view, dynamic attending theory, proposes that the attentional system entrains to the rhythm and increases attention at moments of rhythmic salience. In support, 2 experiments reported here show reduced response times to visual letter strings shown at auditory rhythm peaks, compared with rhythm troughs. However, we argue that an account invoking the entrainment of general attention should further predict rhythm entrainment to also influence memory for visual stimuli. In 2 pseudoword memory experiments we find evidence against this prediction. Whether a pseudoword is shown during an auditory rhythm peak or not is irrelevant for its later recognition memory in silence. Other attention manipulations, dividing attention and focusing attention, did result in a memory effect. This raises doubts about the suggested attentional nature of rhythm entrainment. We interpret our findings as support for auditory rhythm perception being based on auditory-motor entrainment, not general attention entrainment.
  • Kunert, R. (2017). Music and language comprehension in the brain. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Lai, J., Chan, A., & Kidd, E. (2023). Relative clause comprehension in Cantonese-speaking children with and without developmental language disorder. PLoS One, 18: e0288021. doi:10.1371/journal.pone.0288021.

    Abstract

    Developmental Language Disorder (DLD), present in 2 out of every 30 children, primarily affects oral language abilities and development in the absence of associated biomedical conditions. We report the first experimental study that examines relative clause (RC) comprehension accuracy and processing (via looking preference) in Cantonese-speaking children with and without DLD, testing the predictions of competing domain-specific versus domain-general theoretical accounts. We compared children with DLD (N = 22) with age-matched typically-developing (TD) children (AM-TD, N = 23) aged 6;6–9;7 and language-matched (and younger) TD children (YTD, N = 21) aged 4;7–7;6, using a referent selection task. Within-subject factors were RC type (subject-RCs (SRCs) versus object-RCs (ORCs)) and relativizer (classifier (CL) versus relative marker ge3 RCs). Accuracy measures and looking preference to the target were analyzed using generalized linear mixed effects models. Results indicated that Cantonese children with DLD scored significantly lower than their AM-TD peers in accuracy and processed RCs significantly more slowly than AM-TDs, but did not differ from the YTDs on either measure. Overall, while the results revealed evidence of an SRC advantage in the accuracy data, there was no indication of additional difficulty associated with ORCs in the eye-tracking data. All children showed a processing advantage for the frequent CL relativizer over the less frequent ge3 relativizer. These findings pose challenges to domain-specific representational deficit accounts of DLD, which primarily explain the disorder as a syntactic deficit, and are better explained by domain-general accounts that treat acquisition and processing as emergent properties of multiple converging linguistic and non-linguistic processes.

  • Lam, N. H. L. (2017). Comprehending comprehension: Insights from neuronal oscillations on the neuronal basis of language. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Lam, K. J. Y., Bastiaansen, M. C. M., Dijkstra, T., & Rueschemeyer, S. A. (2017). Making sense: motor activation and action plausibility during sentence processing. Language, Cognition and Neuroscience, 32(5), 590-600. doi:10.1080/23273798.2016.1164323.

    Abstract

    The current electroencephalography study investigated the relationship between the motor and (language) comprehension systems by simultaneously measuring mu and N400 effects. Specifically, we examined whether the pattern of motor activation elicited by verbs depends on the larger sentential context. A robust N400 congruence effect confirmed the contextual manipulation of action plausibility, a form of semantic congruency. Importantly, this study showed that: (1) Action verbs elicited more mu power decrease than non-action verbs when sentences described plausible actions. Action verbs thus elicited more motor activation than non-action verbs. (2) In contrast, when sentences described implausible actions, mu activity was present but the difference between the verb types was not observed. The increased processing associated with a larger N400 thus coincided with mu activity in sentences describing implausible actions. Altogether, context-dependent motor activation appears to play a functional role in deriving context-sensitive meaning.
  • Laparle, S. (2023). Moving past the lexical affiliate with a frame-based analysis of gesture meaning. In W. Pouw, J. Trujillo, H. R. Bosker, L. Drijvers, M. Hoetjes, J. Holler, S. Kadava, L. Van Maastricht, E. Mamus, & A. Ozyurek (Eds.), Gesture and Speech in Interaction (GeSpIn) Conference. doi:10.17617/2.3527218.

    Abstract

    Interpreting the meaning of co-speech gesture often involves identifying a gesture's 'lexical affiliate', the word or phrase to which it most closely relates (Schegloff 1984). Though there is work within gesture studies that resists this simplex mapping of meaning from speech to gesture (e.g. de Ruiter 2000; Kendon 2014; Parrill 2008), including an evolving body of literature on recurrent gesture and gesture families (e.g. Fricke et al. 2014; Müller 2017), it is still the lexical affiliate model that is most apparent in formal linguistic models of multimodal meaning (e.g. Alahverdzhieva et al. 2017; Lascarides and Stone 2009; Pustejovsky and Krishnaswamy 2021; Schlenker 2020). In this work, I argue that the lexical affiliate should be carefully reconsidered in the further development of such models.

    In place of the lexical affiliate, I suggest a further shift toward a frame-based, action schematic approach to gestural meaning in line with that proposed in, for example, Parrill and Sweetser (2004) and Müller (2017). To demonstrate the utility of this approach I present three types of compositional gesture sequences which I call spatial contrast, spatial embedding, and cooperative abstract deixis. All three rely on gestural context, rather than gesture-speech alignment, to convey interactive (i.e. pragmatic) meaning. The centrality of gestural context to gesture meaning in these examples demonstrates the necessity of developing a model of gestural meaning independent of its integration with speech.
  • Lee, R., Chambers, C. G., Huettig, F., & Ganea, P. A. (2017). Children’s semantic and world knowledge overrides fictional information during anticipatory linguistic processing. In G. Gunzelmann, A. Howes, T. Tenbrink, & E. Davelaar (Eds.), Proceedings of the 39th Annual Meeting of the Cognitive Science Society (CogSci 2017) (pp. 730-735). Austin, TX: Cognitive Science Society.

    Abstract

    Using real-time eye-movement measures, we asked how a fantastical discourse context competes with stored representations of semantic and world knowledge to influence children's and adults' moment-by-moment interpretation of a story. Seven-year-olds were less effective at bypassing stored semantic and world knowledge during real-time interpretation than adults. Nevertheless, an effect of discourse context on comprehension was still apparent.
  • Lee, C., Jessop, A., Bidgood, A., Peter, M. S., Pine, J. M., Rowland, C. F., & Durrant, S. (2023). How executive functioning, sentence processing, and vocabulary are related at 3 years of age. Journal of Experimental Child Psychology, 233: 105693. doi:10.1016/j.jecp.2023.105693.

    Abstract

    There is a wealth of evidence demonstrating that executive function (EF) abilities are positively associated with language development during the preschool years, such that children with good executive functions also have larger vocabularies. However, why this is the case remains to be discovered. In this study, we focused on the hypothesis that sentence processing abilities mediate the association between EF skills and receptive vocabulary knowledge, in that the speed of language acquisition is at least partially dependent on a child’s processing ability, which is itself dependent on executive control. We tested this hypothesis in longitudinal data from a cohort of 3- and 4-year-old children at three age points (37, 43, and 49 months). We found evidence, consistent with previous research, for a significant association between three EF skills (cognitive flexibility, working memory [as measured by the Backward Digit Span], and inhibition) and receptive vocabulary knowledge across this age range. However, only one of the tested sentence processing abilities (the ability to maintain multiple possible referents in mind) significantly mediated this relationship and only for one of the tested EFs (inhibition). The results suggest that children who are better able to inhibit incorrect responses are also better able to maintain multiple possible referents in mind while a sentence unfolds, a sophisticated sentence processing ability that may facilitate vocabulary learning from complex input.

  • Lehecka, T. (2023). Normative ratings for 111 Swedish nouns and corresponding picture stimuli. Nordic Journal of Linguistics, 46(1), 20-45. doi:10.1017/S0332586521000123.

    Abstract

    Normative ratings are a means to control for the effects of confounding variables in psycholinguistic experiments. This paper introduces a new dataset of normative ratings for Swedish encompassing 111 concrete nouns and the corresponding picture stimuli in the MultiPic database (Duñabeitia et al. 2017). The norms for name agreement, category typicality, age of acquisition and subjective frequency were collected using online surveys among native speakers of the Finland-Swedish variety of Swedish. The paper discusses the inter-correlations between these variables and compares them against available ratings for other languages. In doing so, the paper argues that ratings for age of acquisition and subjective frequency collected for other languages may be applied to psycholinguistic studies on Finland-Swedish, at least with respect to concrete and highly imageable nouns. In contrast, norms for name agreement should be collected from speakers of the same language variety as represented by the subjects in the actual experiments.
  • Lei, A., Willems, R. M., & Eekhof, L. S. (2023). Emotions, fast and slow: Processing of emotion words is affected by individual differences in need for affect and narrative absorption. Cognition and Emotion, 37(5), 997-1005. doi:10.1080/02699931.2023.2216445.

    Abstract

    Emotional words have consistently been shown to be processed differently than neutral words. However, few studies have examined individual variability in emotion word processing with longer, ecologically valid stimuli (beyond isolated words, sentences, or paragraphs). In the current study, we re-analysed eye-tracking data collected during story reading to reveal how individual differences in need for affect and narrative absorption impact the speed of emotion word reading. Word emotionality was indexed by affective-aesthetic potentials (AAP) calculated by a sentiment analysis tool. We found that individuals with higher levels of need for affect and narrative absorption read positive words more slowly. On the other hand, these individual differences did not influence the reading time of more negative words, suggesting that high need for affect and narrative absorption are characterised by a positivity bias only. In general, unlike most previous studies using more isolated emotion word stimuli, we observed a quadratic (U-shaped) effect of word emotionality on reading speed, such that both positive and negative words were processed more slowly than neutral words. Taken together, this study emphasises the importance of taking into account individual differences and task context when studying emotion word processing.
  • Lemaitre, H., Le Guen, Y., Tilot, A. K., Stein, J. L., Philippe, C., Mangin, J.-F., Fisher, S. E., & Frouin, V. (2023). Genetic variations within human gained enhancer elements affect human brain sulcal morphology. NeuroImage, 265: 119773. doi:10.1016/j.neuroimage.2022.119773.

    Abstract

    The expansion of the cerebral cortex is one of the most distinctive changes in the evolution of the human brain. Cortical expansion and related increases in cortical folding may have contributed to emergence of our capacities for high-order cognitive abilities. Molecular analysis of humans, archaic hominins, and non-human primates has allowed identification of chromosomal regions showing evolutionary changes at different points of our phylogenetic history. In this study, we assessed the contributions of genomic annotations spanning 30 million years to human sulcal morphology measured via MRI in more than 18,000 participants from the UK Biobank. We found that variation within brain-expressed human gained enhancers, regulatory genetic elements that emerged since our last common ancestor with Old World monkeys, explained more trait heritability than expected for the left and right calloso-marginal posterior fissures and the right central sulcus. Intriguingly, these are sulci that have been previously linked to the evolution of locomotion in primates and later on bipedalism in our hominin ancestors.

    Additional information

    tables
  • Lev-Ari, S., & Shao, Z. (2017). How social network heterogeneity facilitates lexical access and lexical prediction. Memory & Cognition, 45(3), 528-538. doi:10.3758/s13421-016-0675-y.

    Abstract

    People learn language from their social environment. As individuals differ in their social networks, they might be exposed to input with different lexical distributions, and these might influence their linguistic representations and lexical choices. In this article we test the relation between linguistic performance and 3 social network properties that should influence input variability, namely, network size, network heterogeneity, and network density. In particular, we examine how these social network properties influence lexical prediction, lexical access, and lexical use. To do so, in Study 1, participants predicted how people of different ages would name pictures, and in Study 2 participants named the pictures themselves. In both studies, we examined how participants’ social network properties related to their performance. In Study 3, we ran simulations on norms we collected to see how age variability in one’s network influences the distribution of different names in the input. In all studies, network age heterogeneity influenced performance leading to better prediction, faster response times for difficult-to-name items, and less entropy in input distribution. These results suggest that individual differences in social network properties can influence linguistic behavior. Specifically, they show that having a more heterogeneous network is associated with better performance. These results also show that the same factors influence lexical prediction and lexical production, suggesting the two might be related.
  • Lev-Ari, S., & Peperkamp, S. (2017). Language for $200: Success in the environment influences grammatical alignment. Journal of Language Evolution, 2(2), 177-187. doi:10.1093/jole/lzw012.

    Abstract

    Speakers constantly learn language from the environment by sampling their linguistic input and adjusting their representations accordingly. Logically, people should attend more to the environment and adjust their behavior in accordance with it more the lower their success in the environment is. We test whether the learning of linguistic input follows this general principle in two studies: a corpus analysis of a TV game show, Jeopardy, and a laboratory task modeled after Go Fish. We show that lower (non-linguistic) success in the task modulates learning of and reliance on linguistic patterns in the environment. In Study 1, we find that poorer performance increases conformity with linguistic norms, as reflected by increased preference for frequent grammatical structures. In Study 2, which consists of a more interactive setting, poorer performance increases learning from the immediate social environment, as reflected by greater repetition of others’ grammatical structures. We propose that these results have implications for models of language production and language learning and for the propagation of language change. In particular, they suggest that linguistic changes might spread more quickly in times of crisis, or when the gap between more and less successful people is larger. The results might also suggest that innovations stem from successful individuals while their propagation would depend on relatively less successful individuals. We provide a few historical examples that are in line with the first suggested implication, namely, that the spread of linguistic changes is accelerated during difficult times, such as war time and an economic downturn.
  • Lev-Ari, S., van Heugten, M., & Peperkamp, S. (2017). Relative difficulty of understanding foreign accents as a marker of proficiency. Cognitive Science, 41(4), 1106-1118. doi:10.1111/cogs.12394.

    Abstract

    Foreign-accented speech is generally harder to understand than native-accented speech. This difficulty is reduced for non-native listeners who share their first language with the non-native speaker. It is currently unclear, however, how non-native listeners deal with foreign-accented speech produced by speakers of a different language. We show that the process of (second) language acquisition is associated with an increase in the relative difficulty of processing foreign-accented speech. Therefore, experiencing greater relative difficulty with foreign-accented speech compared with native speech is a marker of language proficiency. These results contribute to our understanding of how phonological categories are acquired during second language learning.
  • Lev-Ari, S. (2017). Talking to fewer people leads to having more malleable linguistic representations. PLoS One, 12(8): e0183593. doi:10.1371/journal.pone.0183593.

    Abstract

    We learn language from our social environment. In general, the more sources we have, the less informative each of them is, and the less weight we should assign it. If this is the case, people who interact with fewer others should be more susceptible to the influence of each of their interlocutors. This paper tests whether indeed people who interact with fewer other people have more malleable phonological representations. Using a perceptual learning paradigm, this paper shows that individuals who regularly interact with fewer others are more likely to change their boundary between /d/ and /t/ following exposure to an atypical speaker. It further shows that the effect of number of interlocutors is not due to differences in ability to learn the speaker’s speech patterns, but specific to likelihood of generalizing the learned pattern. These results have implications for both language learning and language change, as they suggest that individuals with smaller social networks might play an important role in propagating linguistic changes.

    Additional information

    5343619.zip
  • Levelt, W. J. M. (1987). Hochleistung in Millisekunden - Sprechen und Sprache verstehen. In Jahrbuch der Max-Planck-Gesellschaft (pp. 61-77). Göttingen: Vandenhoeck & Ruprecht.
  • Levelt, W. J. M., & d'Arcais, F. (1987). Snelheid en uniciteit bij lexicale toegang. In H. Crombag, L. Van der Kamp, & C. Vlek (Eds.), De psychologie voorbij: Ontwikkelingen rond model, metriek en methode in de gedragswetenschappen (pp. 55-68). Lisse: Swets & Zeitlinger.
  • Levelt, W. J. M., & Schriefers, H. (1987). Stages of lexical access. In G. A. Kempen (Ed.), Natural language generation: new results in artificial intelligence, psychology and linguistics (pp. 395-404). Dordrecht: Nijhoff.
  • Levinson, S. C. (1987). Implicature explicated? [Comment on Sperber and Wilson]. Behavioral and Brain Sciences, 10(4), 722-723.
  • Levinson, S. C. (1987). Minimization and conversational inference. In M. Bertuccelli Papi, & J. Verschueren (Eds.), The pragmatic perspective: Selected papers from the 1985 International Pragmatics Conference (pp. 61-129). Benjamins.
  • Levinson, S. C. (2017). Living with Manny's dangerous idea. In G. Raymond, G. H. Lerner, & J. Heritage (Eds.), Enabling human conduct: Studies of talk-in-interaction in honor of Emanuel A. Schegloff (pp. 327-349). Amsterdam: Benjamins.
  • Levinson, S. C. (2017). Speech acts. In Y. Huang (Ed.), Oxford handbook of pragmatics (pp. 199-216). Oxford: Oxford University Press. doi:10.1093/oxfordhb/9780199697960.013.22.

    Abstract

    The essential insight of speech act theory was that when we use language, we perform actions—in a more modern parlance, core language use in interaction is a form of joint action. Over the last thirty years, speech acts have been relatively neglected in linguistic pragmatics, although important work has been done especially in conversation analysis. Here we review the core issues—the identifying characteristics, the degree of universality, the problem of multiple functions, and the puzzle of speech act recognition. Special attention is drawn to the role of conversation structure, probabilistic linguistic cues, and plan or sequence inference in speech act recognition, and to the centrality of deep recursive structures in sequences of speech acts in conversation.

  • Levinson, S. C. (1987). Pragmatics and the grammar of anaphora: A partial pragmatic reduction of Binding and Control phenomena. Journal of Linguistics, 23, 379-434. doi:10.1017/S0022226700011324.

    Abstract

    This paper is one in a series that develops a pragmatic framework in loose confederation with Jay Atlas and Larry Horn: thus they may or may not be responsible for the ideas contained herein. Jay Atlas provided many comments which I have utilized or perverted as the case may be. The Australian data to which this framework is applied was collected with the financial and personal assistance of many people and agencies acknowledged separately below; but I must single out for special thanks John Haviland, who recommended the study of Guugu Yimidhirr anaphora to me and upon whose grammatical work on Guugu Yimidhirr this paper is but a minor (and perhaps flawed) elaboration. A grant from the British Academy allowed me to visit Haviland in September 1986 to discuss many aspects of Guugu Yimidhirr with him, and I am most grateful to the Academy for funding this trip and to Haviland for generously making available his time, his texts (from which I have drawn many examples, not always with specific acknowledgement) and most especially his expertise. Where I have diverged from his opinion I may well learn to regret it. I must also thank Nigel Vincent for putting me in touch with a number of recent relevant developments in syntax (only some of which I have been able to address) and for suggestions for numerous improvements. In addition, I have benefited immensely from comments on a distinct but related paper (Levinson, 1987) kindly provided by Jay Atlas, John Haviland, John Heritage, Phil Johnson-Laird, John Lyons, Tanya Reinhart, Emanuel Schegloff and an anonymous referee; and from comments on this paper by participants in the Cambridge Linguistics Department seminar where it was first presented (especial thanks to John Lyons and Huang Yan for further comments, and Mary Smith for a counter-example). Despite all this help, there are sure to be errors of data and analysis that I have persisted in. Aid in gathering the Australian data is acknowledged separately below.
  • Levinson, S. C. (2023). On cognitive artifacts. In R. Feldhay (Ed.), The evolution of knowledge: A scientific meeting in honor of Jürgen Renn (pp. 59-78). Berlin: Max Planck Institute for the History of Science.

    Abstract

    Wearing the hat of a cognitive anthropologist rather than an historian, I will try to amplify the ideas of Renn’s cited above. I argue that a particular subclass of material objects, namely “cognitive artifacts,” involves a close coupling of mind and artifact that acts like a brain prosthesis. Simple cognitive artifacts are external objects that act as aids to internal computation, and not all cultures have extended inventories of these. Cognitive artifacts in this sense (e.g., calculating or measuring devices) have clearly played a central role in the history of science. But the notion can be widened to take in less material externalizations of cognition, like writing and language itself. A critical question here is how and why this close coupling of internal computation and external device actually works, a rather neglected question to which I’ll suggest some answers.

    Additional information

    link to book
  • Levinson, S. C. (2023). Gesture, spatial cognition and the evolution of language. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 378(1875): 20210481. doi:10.1098/rstb.2021.0481.

    Abstract

    Human communication displays a striking contrast between the diversity of languages and the universality of the principles underlying their use in conversation. Despite the importance of this interactional base, it is not obvious that it heavily imprints the structure of languages. However, a deep-time perspective suggests that early hominin communication was gestural, in line with all the other Hominidae. This gestural phase of early language development seems to have left its traces in the way in which spatial concepts, implemented in the hippocampus, provide organizing principles at the heart of grammar.
  • Levshina, N. (2023). Communicative efficiency: Language structure and use. Cambridge: Cambridge University Press.

    Abstract

    All living beings try to save effort, and humans are no exception. This groundbreaking book shows how we save time and energy during communication by unconsciously making efficient choices in grammar, lexicon and phonology. It presents a new theory of 'communicative efficiency', the idea that language is designed to be as efficient as possible, as a system of communication. The new framework accounts for the diverse manifestations of communicative efficiency across a typologically broad range of languages, using various corpus-based and statistical approaches to explain speakers' bias towards efficiency. The author's unique interdisciplinary expertise allows her to provide rich evidence from a broad range of language sciences. She integrates diverse insights from over a hundred years of research into this comprehensible new theory, which she presents step-by-step in clear and accessible language. It is essential reading for language scientists, cognitive scientists and anyone interested in language use and communication.
  • Levshina, N., Namboodiripad, S., Allassonnière-Tang, M., Kramer, M., Talamo, L., Verkerk, A., Wilmoth, S., Garrido Rodriguez, G., Gupton, T. M., Kidd, E., Liu, Z., Naccarato, C., Nordlinger, R., Panova, A., & Stoynova, N. (2023). Why we need a gradient approach to word order. Linguistics, 61(4), 825-883. doi:10.1515/ling-2021-0098.

    Abstract

    This article argues for a gradient approach to word order, which treats word order preferences, both within and across languages, as a continuous variable. Word order variability should be regarded as a basic assumption, rather than as something exceptional. Although this approach follows naturally from the emergentist usage-based view of language, we argue that it can be beneficial for all frameworks and linguistic domains, including language acquisition, processing, typology, language contact, language evolution and change, and formal approaches. Gradient approaches have been very fruitful in some domains, such as language processing, but their potential is not fully realized yet. This may be due to practical reasons. We discuss the most pressing methodological challenges in corpus-based and experimental research of word order and propose some practical solutions.
  • Levshina, N. (2023). Testing communicative and learning biases in a causal model of language evolution: A study of cues to Subject and Object. In M. Degano, T. Roberts, G. Sbardolini, & M. Schouwstra (Eds.), The Proceedings of the 23rd Amsterdam Colloquium (pp. 383-387). Amsterdam: University of Amsterdam.
  • Levshina, N. (2023). Word classes in corpus linguistics. In E. Van Lier (Ed.), The Oxford handbook of word classes (pp. 833-850). Oxford: Oxford University Press. doi:10.1093/oxfordhb/9780198852889.013.34.

    Abstract

    Word classes play a central role in corpus linguistics under the name of parts of speech (POS). Many popular corpora are provided with POS tags. This chapter gives examples of popular tagsets and discusses the methods of automatic tagging. It also considers bottom-up approaches to POS induction, which are particularly important for the ‘poverty of stimulus’ debate in language acquisition research. The choice of optimal POS tagging involves many difficult decisions, which are related to the level of granularity, redundancy at different levels of corpus annotation, cross-linguistic applicability, language-specific descriptive adequacy, and dealing with fuzzy boundaries between POS. The chapter also discusses the problem of flexible word classes and demonstrates how corpus data with POS tags and syntactic dependencies can be used to quantify the level of flexibility in a language.
  • Lewis, A. G., Schoffelen, J.-M., Hoffmann, C., Bastiaansen, M. C. M., & Schriefers, H. (2017). Discourse-level semantic coherence influences beta oscillatory dynamics and the N400 during sentence comprehension. Language, Cognition and Neuroscience, 32(5), 601-617. doi:10.1080/23273798.2016.1211300.

    Abstract

    In this study, we used electroencephalography to investigate the influence of discourse-level semantic coherence on electrophysiological signatures of local sentence-level processing. Participants read groups of four sentences that could either form coherent stories or were semantically unrelated. For semantically coherent discourses compared to incoherent ones, the N400 was smaller at sentences 2–4, while the visual N1 was larger at the third and fourth sentences. Oscillatory activity in the beta frequency range (13–21 Hz) was higher for coherent discourses. We relate the N400 effect to a disruption of local sentence-level semantic processing when sentences are unrelated. Our beta findings can be tentatively related to disruption of local sentence-level syntactic processing, but it cannot be fully ruled out that they are instead (or also) related to disrupted local sentence-level semantic processing. We conclude that manipulating discourse-level semantic coherence does have an effect on oscillatory power related to local sentence-level processing.
  • Lewis, A. G. (2017). Explorations of beta-band neural oscillations during language comprehension: Sentence processing and beyond. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Lewis, A. G., Schoffelen, J.-M., Bastiaansen, M., & Schriefers, H. (2023). Is beta in agreement with the relatives? Using relative clause sentences to investigate MEG beta power dynamics during sentence comprehension. Psychophysiology, 60(10): e14332. doi:10.1111/psyp.14332.

    Abstract

    There remains some debate about whether beta power effects observed during sentence comprehension reflect ongoing syntactic unification operations (beta-syntax hypothesis), or instead reflect maintenance or updating of the sentence-level representation (beta-maintenance hypothesis). In this study, we used magnetoencephalography to investigate beta power neural dynamics while participants read relative clause sentences that were initially ambiguous between a subject- or an object-relative reading. An additional condition included a grammatical violation at the disambiguation point in the relative clause sentences. The beta-maintenance hypothesis predicts a decrease in beta power at the disambiguation point for unexpected (and less preferred) object-relative clause sentences and grammatical violations, as both signal a need to update the sentence-level representation. While the beta-syntax hypothesis also predicts a beta power decrease for grammatical violations due to a disruption of syntactic unification operations, it instead predicts an increase in beta power for the object-relative clause condition because syntactic unification at the point of disambiguation becomes more demanding. We observed decreased beta power for both the agreement violation and object-relative clause conditions in typical left hemisphere language regions, which provides compelling support for the beta-maintenance hypothesis. Mid-frontal theta power effects were also present for grammatical violations and object-relative clause sentences, suggesting that violations and unexpected sentence interpretations are registered as conflicts by the brain's domain-general error detection system.

    Additional information

    data
  • Liesenfeld, A., Lopez, A., & Dingemanse, M. (2023). Opening up ChatGPT: Tracking Openness, Transparency, and Accountability in Instruction-Tuned Text Generators. In CUI '23: Proceedings of the 5th International Conference on Conversational User Interfaces. doi:10.1145/3571884.3604316.

    Abstract

    Large language models that exhibit instruction-following behaviour represent one of the biggest recent upheavals in conversational interfaces, a trend in large part fuelled by the release of OpenAI's ChatGPT, a proprietary large language model for text generation fine-tuned through reinforcement learning from human feedback (LLM+RLHF). We review the risks of relying on proprietary software and survey the first crop of open-source projects of comparable architecture and functionality. The main contribution of this paper is to show that openness is differentiated, and to offer scientific documentation of degrees of openness in this fast-moving field. We evaluate projects in terms of openness of code, training data, model weights, RLHF data, licensing, scientific documentation, and access methods. We find that while there is a fast-growing list of projects billing themselves as 'open source', many inherit undocumented data of dubious legality, few share the all-important instruction-tuning (a key site where human labour is involved), and careful scientific documentation is exceedingly rare. Degrees of openness are relevant to fairness and accountability at all points, from data collection and curation to model architecture, and from training and fine-tuning to release and deployment.
  • Liesenfeld, A., Lopez, A., & Dingemanse, M. (2023). The timing bottleneck: Why timing and overlap are mission-critical for conversational user interfaces, speech recognition and dialogue systems. In Proceedings of the 24th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDial 2023). doi:10.18653/v1/2023.sigdial-1.45.

    Abstract

    Speech recognition systems are a key intermediary in voice-driven human-computer interaction. Although speech recognition works well for pristine monologic audio, real-life use cases in open-ended interactive settings still present many challenges. We argue that timing is mission-critical for dialogue systems, and evaluate 5 major commercial ASR systems for their conversational and multilingual support. We find that word error rates for natural conversational data in 6 languages remain abysmal, and that overlap remains a key challenge (study 1). This impacts especially the recognition of conversational words (study 2), and in turn has dire consequences for downstream intent recognition (study 3). Our findings help to evaluate the current state of conversational ASR, contribute towards multidimensional error analysis and evaluation, and identify phenomena that need most attention on the way to build robust interactive speech technologies.
  • Lingwood, J., Lampropoulou, S., De Bezena, C., Billington, J., & Rowland, C. F. (2023). Children’s engagement and caregivers’ use of language-boosting strategies during shared book reading: A mixed methods approach. Journal of Child Language, 50(6), 1436-1458. doi:10.1017/S0305000922000290.

    Abstract

    For shared book reading to be effective for language development, the adult and child need to be highly engaged. The current paper adopted a mixed-methods approach to investigate caregiver’s language-boosting behaviours and children’s engagement during shared book reading. The results revealed there were more instances of joint attention and caregiver’s use of prompts during moments of higher engagement. However, instances of most language-boosting behaviours were similar across episodes of higher and lower engagement. Qualitative analysis assessing the link between children’s engagement and caregiver’s use of speech acts, revealed that speech acts do seem to contribute to high engagement, in combination with other aspects of the interaction.
  • Little, H., Eryilmaz, K., & de Boer, B. (2017). Conventionalisation and Discrimination as Competing Pressures on Continuous Speech-like Signals. Interaction studies, 18(3), 355-378. doi:10.1075/is.18.3.04lit.

    Abstract

    Arbitrary communication systems can emerge from iconic beginnings through processes of conventionalisation via interaction. Here, we explore whether this process of conventionalisation occurs with continuous, auditory signals. We conducted an artificial signalling experiment. Participants either created signals for themselves, or for a partner in a communication game. We found no evidence that the speech-like signals in our experiment became less iconic or simpler through interaction. We hypothesise that the reason for our results is that when it is difficult to be iconic initially because of the constraints of the modality, then iconicity needs to emerge to enable grounding before conventionalisation can occur. Further, pressures for discrimination, caused by the expanding meaning space in our study, may cause more complexity to emerge, again as a result of the restrictive signalling modality. Our findings have possible implications for the processes of conventionalisation possible in signed and spoken languages, as the spoken modality is more restrictive than the manual modality.
  • Little, H., Rasilo, H., van der Ham, S., & Eryılmaz, K. (2017). Empirical approaches for investigating the origins of structure in speech. Interaction studies, 18(3), 332-354. doi:10.1075/is.18.3.03lit.

    Abstract

    In language evolution research, the use of computational and experimental methods to investigate the emergence of structure in language is exploding. In this review, we look exclusively at work exploring the emergence of structure in speech, on both a categorical level (what drives the emergence of an inventory of individual speech sounds), and a combinatorial level (how these individual speech sounds emerge and are reused as part of larger structures). We show that computational and experimental methods for investigating population-level processes can be effectively used to explore and measure the effects of learning, communication and transmission on the emergence of structure in speech. We also look at work on child language acquisition as a tool for generating and validating hypotheses for the emergence of speech categories. Further, we review the effects of noise, iconicity and production effects.
  • Little, H. (2017). Introduction to the Special Issue on the Emergence of Sound Systems. Journal of Language Evolution, 2(1), 1-3. doi:10.1093/jole/lzx014.

    Abstract

    How did human sound systems get to be the way they are? Collecting contributions implementing a wealth of methods to address this question, this special issue treats language and speech as being the result of a complex adaptive system. The work throughout provides evidence and theory at the levels of phylogeny, glossogeny and ontogeny. In taking a multi-disciplinary approach that considers interactions within and between these levels of selection, the papers collectively provide a valuable, integrated contribution to existing work on the evolution of speech and sound systems.
  • Little, H., Perlman, M., & Eryilmaz, K. (2017). Repeated interactions can lead to more iconic signals. In G. Gunzelmann, A. Howes, T. Tenbrink, & E. Davelaar (Eds.), Proceedings of the 39th Annual Conference of the Cognitive Science Society (CogSci 2017) (pp. 760-765). Austin, TX: Cognitive Science Society.

    Abstract

    Previous research has shown that repeated interactions can cause iconicity in signals to reduce. However, data from several recent studies has shown the opposite trend: an increase in iconicity as the result of repeated interactions. Here, we discuss whether signals may become less or more iconic as a result of the modality used to produce them. We review several recent experimental results before presenting new data from multi-modal signals, where visual input creates audio feedback. Our results show that the growth in iconicity present in the audio information may come at a cost to iconicity in the visual information. Our results have implications for how we think about and measure iconicity in artificial signalling experiments. Further, we discuss how iconicity in real world speech may stem from auditory, kinetic or visual information, but iconicity in these different modalities may conflict.
  • Little, H., Eryılmaz, K., & de Boer, B. (2017). Signal dimensionality and the emergence of combinatorial structure. Cognition, 168, 1-15. doi:10.1016/j.cognition.2017.06.011.

    Abstract

    In language, a small number of meaningless building blocks can be combined into an unlimited set of meaningful utterances. This is known as combinatorial structure. One hypothesis for the initial emergence of combinatorial structure in language is that recombining elements of signals solves the problem of overcrowding in a signal space. Another hypothesis is that iconicity may impede the emergence of combinatorial structure. However, how these two hypotheses relate to each other is not often discussed. In this paper, we explore how signal space dimensionality relates to both overcrowding in the signal space and iconicity. We use an artificial signalling experiment to test whether a signal space and a meaning space having similar topologies will generate an iconic system and whether, when the topologies differ, the emergence of combinatorially structured signals is facilitated. In our experiments, signals are created from participants' hand movements, which are measured using an infrared sensor. We found that participants take advantage of iconic signal-meaning mappings where possible. Further, we use trajectory predictability, measures of variance, and Hidden Markov Models to measure the use of structure within the signals produced and found that when topologies do not match, then there is more evidence of combinatorial structure. The results from these experiments are interpreted in the context of the differences between the emergence of combinatorial structure in different linguistic modalities (speech and sign).

    Additional information

    mmc1.zip
  • Little, H. (Ed.). (2017). Special Issue on the Emergence of Sound Systems [Special Issue]. The Journal of Language Evolution, 2(1).
  • Lockwood, G. (2017). Talking sense: The behavioural and neural correlates of sound symbolism. PhD Thesis, Radboud University, Nijmegen.
  • Lopopolo, A., Frank, S. L., Van den Bosch, A., & Willems, R. M. (2017). Using stochastic language models (SLM) to map lexical, syntactic, and phonological information processing in the brain. PLoS One, 12(5): e0177794. doi:10.1371/journal.pone.0177794.

    Abstract

    Language comprehension involves the simultaneous processing of information at the phonological, syntactic, and lexical level. We track these three distinct streams of information in the brain by using stochastic measures derived from computational language models to detect neural correlates of phoneme, part-of-speech, and word processing in an fMRI experiment. Probabilistic language models have proven to be useful tools for studying how language is processed as a sequence of symbols unfolding in time. Conditional probabilities between sequences of words are at the basis of probabilistic measures such as surprisal and perplexity which have been successfully used as predictors of several behavioural and neural correlates of sentence processing. Here we computed perplexity from sequences of words and their parts of speech, and their phonemic transcriptions. Brain activity time-locked to each word is regressed on the three model-derived measures. We observe that the brain keeps track of the statistical structure of lexical, syntactic and phonological information in distinct areas.

    Additional information

    Data availability
  • Lumaca, M., Bonetti, L., Brattico, E., Baggio, G., Ravignani, A., & Vuust, P. (2023). High-fidelity transmission of auditory symbolic material is associated with reduced right–left neuroanatomical asymmetry between primary auditory regions. Cerebral Cortex, 33(11), 6902-6919. doi:10.1093/cercor/bhad009.

    Abstract

    The intergenerational stability of auditory symbolic systems, such as music, is thought to rely on brain processes that allow the faithful transmission of complex sounds. Little is known about the functional and structural aspects of the human brain which support this ability, with a few studies pointing to the bilateral organization of auditory networks as a putative neural substrate. Here, we further tested this hypothesis by examining the role of left–right neuroanatomical asymmetries between auditory cortices. We collected neuroanatomical images from a large sample of participants (nonmusicians) and analyzed them with Freesurfer’s surface-based morphometry method. Weeks after scanning, the same individuals participated in a laboratory experiment that simulated music transmission: the signaling games. We found that high accuracy in the intergenerational transmission of an artificial tone system was associated with reduced rightward asymmetry of cortical thickness in Heschl’s sulcus. Our study suggests that the high-fidelity copying of melodic material may rely on the extent to which computational neuronal resources are distributed across hemispheres. Our data further support the role of interhemispheric brain organization in the cultural transmission and evolution of auditory symbolic systems.
  • Magyari, L., De Ruiter, J. P., & Levinson, S. C. (2017). Temporal preparation for speaking in question-answer sequences. Frontiers in Psychology, 8: 211. doi:10.3389/fpsyg.2017.00211.

    Abstract

    In everyday conversations, the gap between turns of conversational partners is most frequently between 0 and 200 ms. We were interested in how speakers achieve such fast transitions. We designed an experiment in which participants listened to pre-recorded questions about images presented on a screen and were asked to answer these questions. We tested whether speakers already prepare their answers while they listen to questions and whether they can prepare for the time of articulation by anticipating when questions end. In the experiment, it was possible to guess the answer at the beginning of the questions in half of the experimental trials. We also manipulated whether it was possible to predict the length of the last word of the questions. The results suggest that when listeners know the answer early, they start speech production while the question is still ongoing. Speakers can also time when to speak by predicting the duration of turns. These temporal predictions can be based on the length of anticipated words and on the overall probability of turn durations.

    Additional information

    presentation 1.pdf
  • Mainz, N., Shao, Z., Brysbaert, M., & Meyer, A. S. (2017). Vocabulary Knowledge Predicts Lexical Processing: Evidence from a Group of Participants with Diverse Educational Backgrounds. Frontiers in Psychology, 8: 1164. doi:10.3389/fpsyg.2017.01164.

    Abstract

    Vocabulary knowledge is central to a speaker's command of their language. In previous research, greater vocabulary knowledge has been associated with advantages in language processing. In this study, we examined the relationship between individual differences in vocabulary and language processing performance more closely by (i) using a battery of vocabulary tests instead of just one test, and (ii) testing not only university students (Experiment 1) but young adults from a broader range of educational backgrounds (Experiment 2). Five vocabulary tests were developed, including multiple-choice and open antonym and synonym tests and a definition test, and administered together with two established measures of vocabulary. Language processing performance was measured using a lexical decision task. In Experiment 1, vocabulary and word frequency were found to predict word recognition speed while we did not observe an interaction between the effects. In Experiment 2, word recognition performance was predicted by word frequency and the interaction between word frequency and vocabulary, with high-vocabulary individuals showing smaller frequency effects. While overall the individual vocabulary tests were correlated and showed similar relationships with language processing as compared to a composite measure of all tests, they appeared to share less variance in Experiment 2 than in Experiment 1. Implications of our findings concerning the assessment of vocabulary size in individual differences studies and the investigation of individuals from more varied backgrounds are discussed.

    Additional information

    Supplementary Material Appendices.pdf
  • Majid, A., & Enfield, N. J. (2017). Body. In H. Burkhardt, J. Seibt, G. Imaguire, & S. Gerogiorgakis (Eds.), Handbook of mereology (pp. 100-103). Munich: Philosophia.
  • Majid, A., Manko, P., & De Valk, J. (2017). Language of the senses. In S. Dekker (Ed.), Scientific breakthroughs in the classroom! (pp. 40-76). Nijmegen: Science Education Hub Radboud University.

    Abstract

    The project that we describe in this chapter has the theme ‘Language of the senses’. This theme is based on the research of Asifa Majid and her team regarding the influence of language and culture on sensory perception. The chapter consists of two sections. Section 2.1 describes how different sensory perceptions are spoken of in different languages. Teachers can use this section as substantive preparation before they launch this theme in the classroom. Section 2.2 describes how teachers can handle this theme in accordance with the seven phases of inquiry-based learning. Chapter 1, in which the general guideline of the seven phases is described, forms the basis for this. We therefore recommend the use of chapter 1 as the starting point for the execution of a project in the classroom. This chapter provides the thematic additions.

    Additional information

    Materials Language of the senses
  • Majid, A., Manko, P., & de Valk, J. (2017). Taal der Zintuigen. In S. Dekker, & J. Van Baren-Nawrocka (Eds.), Wetenschappelijke doorbraken de klas in! Molecuulbotsingen, Stress en Taal der Zintuigen (pp. 128-166). Nijmegen: Wetenschapsknooppunt Radboud Universiteit.

    Abstract

    Language of the senses is about the influence of language and culture on sensory perception. How do you describe what you see, feel, taste, or smell? Some cultures have many different words for colour, while other cultures have very few. Are we born with these different colour categories? And does the way you talk about something also determine what you perceive?
  • Majid, A., Speed, L., Croijmans, I., & Arshamian, A. (2017). What makes a better smeller? Perception, 46, 406-430. doi:10.1177/0301006616688224.

    Abstract

    Olfaction is often viewed as difficult, yet the empirical evidence suggests a different picture. A closer look shows that people around the world differ in their ability to detect, discriminate, and name odors. This gives rise to the question of what influences our ability to smell. Instead of focusing on olfactory deficiencies, this review presents a positive perspective by focusing on factors that make someone a better smeller. We consider three driving forces in improving olfactory ability: one’s biological makeup, one’s experience, and the environment. For each factor, we consider aspects proposed to improve odor perception and critically examine the evidence, as well as introducing lesser-discussed areas. In terms of biology, there are cases of neurodiversity, such as olfactory synesthesia, that serve to enhance olfactory ability. Our lifetime experience, be it typical development or unique training experience, can also modify the trajectory of olfaction. Finally, our odor environment, in terms of ambient odor or culinary traditions, can influence odor perception too. Rather than highlighting the weaknesses of olfaction, we emphasize routes to harnessing our olfactory potential.
  • Mak, M., Faber, M., & Willems, R. M. (2023). Different kinds of simulation during literary reading: Insights from a combined fMRI and eye-tracking study. Cortex, 162, 115-135. doi:10.1016/j.cortex.2023.01.014.

    Abstract

    Mental simulation is an important aspect of narrative reading. In a previous study, we found that gaze durations are differentially impacted by different kinds of mental simulation. Motor simulation, perceptual simulation, and mentalizing as elicited by literary short stories influenced eye movements in distinguishable ways (Mak & Willems, 2019). In the current study, we investigated the existence of a common neural locus for these different kinds of simulation. We additionally investigated whether individual differences during reading, as indexed by the eye movements, are reflected in domain-specific activations in the brain. We found a variety of brain areas activated by simulation-eliciting content, both modality-specific brain areas and a general simulation area. Individual variation in percent signal change in activated areas was related to measures of story appreciation as well as personal characteristics (i.e., transportability, perspective taking). Taken together, these findings suggest that mental simulation is supported by both domain-specific processes grounded in previous experiences, and by the neural mechanisms that underlie higher-order language processing (e.g., situation model building, event indexing, integration).

    Additional information

    figures localizer tasks appendix C1
  • Mamus, E., Speed, L. J., Rissman, L., Majid, A., & Özyürek, A. (2023). Lack of visual experience affects multimodal language production: Evidence from congenitally blind and sighted people. Cognitive Science, 47(1): e13228. doi:10.1111/cogs.13228.

    Abstract

    The human experience is shaped by information from different perceptual channels, but it is still debated whether and how differential experience influences language use. To address this, we compared congenitally blind, blindfolded, and sighted people's descriptions of the same motion events experienced auditorily by all participants (i.e., via sound alone) and conveyed in speech and gesture. Comparison of blind and sighted participants to blindfolded participants helped us disentangle the effects of a lifetime experience of being blind versus the task-specific effects of experiencing a motion event by sound alone. Compared to sighted people, blind people's speech focused more on path and less on manner of motion, and encoded paths in a more segmented fashion using more landmarks and path verbs. Gestures followed the speech, such that blind people pointed to landmarks more and depicted manner less than sighted people. This suggests that visual experience affects how people express spatial events in the multimodal language and that blindness may enhance sensitivity to paths of motion due to changes in event construal. These findings have implications for the claims that language processes are deeply rooted in our sensory experiences.
  • Mamus, E., Speed, L., Özyürek, A., & Majid, A. (2023). The effect of input sensory modality on the multimodal encoding of motion events. Language, Cognition and Neuroscience, 38(5), 711-723. doi:10.1080/23273798.2022.2141282.

    Abstract

    Each sensory modality has different affordances: vision has higher spatial acuity than audition, whereas audition has better temporal acuity. This may have consequences for the encoding of events and its subsequent multimodal language production—an issue that has received relatively little attention to date. In this study, we compared motion events presented as audio-only, visual-only, or multimodal (visual + audio) input and measured speech and co-speech gesture depicting path and manner of motion in Turkish. Input modality affected speech production. Speakers with audio-only input produced more path descriptions and fewer manner descriptions in speech compared to speakers who received visual input. In contrast, the type and frequency of gestures did not change across conditions. Path-only gestures dominated throughout. Our results suggest that while speech is more susceptible to auditory vs. visual input in encoding aspects of motion events, gesture is less sensitive to such differences.

    Additional information

    Supplemental material
  • Manhardt, F., Brouwer, S., Van Wijk, E., & Özyürek, A. (2023). Word order preference in sign influences speech in hearing bimodal bilinguals but not vice versa: Evidence from behavior and eye-gaze. Bilingualism: Language and Cognition, 26(1), 48-61. doi:10.1017/S1366728922000311.

    Abstract

    We investigated cross-modal influences between speech and sign in hearing bimodal bilinguals, proficient in a spoken and a sign language, and its consequences on visual attention during message preparation using eye-tracking. We focused on spatial expressions in which sign languages, unlike spoken languages, have a modality-driven preference to mention grounds (big objects) prior to figures (smaller objects). We compared hearing bimodal bilinguals’ spatial expressions and visual attention in Dutch and Dutch Sign Language (N = 18) to those of their hearing non-signing (N = 20) and deaf signing peers (N = 18). In speech, hearing bimodal bilinguals expressed more ground-first descriptions and fixated grounds more than hearing non-signers, showing influence from sign. In sign, they used as many ground-first descriptions as deaf signers and fixated grounds equally often, demonstrating no influence from speech. Cross-linguistic influence of word order preference and visual attention in hearing bimodal bilinguals appears to be one-directional modulated by modality-driven differences.
  • Manrique, E. (2017). Achieving mutual understanding in Argentine Sign Language (LSA). PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Mansbridge, M. P., Tamaoka, K., Xiong, K., & Verdonschot, R. G. (2017). Ambiguity in the processing of Mandarin Chinese relative clauses: One factor cannot explain it all. PLoS One, 12(6): e0178369. doi:10.1371/journal.pone.0178369.

    Abstract

    This study addresses the question of whether native Mandarin Chinese speakers process and comprehend subject-extracted relative clauses (SRCs) more readily than object-extracted relative clauses (ORCs) in Mandarin Chinese. This has been a hotly debated issue, with various studies producing contrasting results. Using two eye-tracking experiments with ambiguous and unambiguous RCs, this study shows that both ORCs and SRCs have different processing requirements depending on the locus and time course during reading. The results reveal that ORC reading was possibly facilitated by linear/temporal integration and canonicity. On the other hand, similarity-based interference made ORCs more difficult, and expectation-based processing was more prominent for unambiguous ORCs. Overall, RC processing in Mandarin should not be reduced to a single ORC (dis)advantage, but understood as multiple interdependent factors influencing whether ORCs are either more difficult or easier to parse depending on the task and context at hand.
  • Martin, A. E., & Doumas, L. A. A. (2017). A mechanism for the cortical computation of hierarchical linguistic structure. PLoS Biology, 15(3): e2000663. doi:10.1371/journal.pbio.2000663.

    Abstract

    Biological systems often detect species-specific signals in the environment. In humans, speech and language are species-specific signals of fundamental biological importance. To detect the linguistic signal, human brains must form hierarchical representations from a sequence of perceptual inputs distributed in time. What mechanism underlies this ability? One hypothesis is that the brain repurposed an available neurobiological mechanism when hierarchical linguistic representation became an efficient solution to a computational problem posed to the organism. Under such an account, a single mechanism must have the capacity to perform multiple, functionally related computations, e.g., detect the linguistic signal and perform other cognitive functions, while, ideally, oscillating like the human brain. We show that a computational model of analogy, built for an entirely different purpose—learning relational reasoning—processes sentences, represents their meaning, and, crucially, exhibits oscillatory activation patterns resembling cortical signals elicited by the same stimuli. Such redundancy in the cortical and machine signals is indicative of formal and mechanistic alignment between representational structure building and “cortical” oscillations. By inductive inference, this synergy suggests that the cortical signal reflects structure generation, just as the machine signal does. A single mechanism—using time to encode information across a layered network—generates the kind of (de)compositional representational hierarchy that is crucial for human language and offers a mechanistic linking hypothesis between linguistic representation and cortical computation.
  • Martin, A. E., Huettig, F., & Nieuwland, M. S. (2017). Can structural priming answer the important questions about language? A commentary on Branigan and Pickering "An experimental approach to linguistic representation". Behavioral and Brain Sciences, 40: e304. doi:10.1017/S0140525X17000528.

    Abstract

    While structural priming makes a valuable contribution to psycholinguistics, it does not allow direct observation of representation, nor escape “source ambiguity.” Structural priming taps into implicit memory representations and processes that may differ from what is used online. We question whether implicit memory for language can and should be equated with linguistic representation or with language processing.
  • Martin, A. E., Monahan, P. J., & Samuel, A. G. (2017). Prediction of agreement and phonetic overlap shape sublexical identification. Language and Speech, 60(3), 356-376. doi:10.1177/0023830916650714.

    Abstract

    The mapping between the physical speech signal and our internal representations is rarely straightforward. When faced with uncertainty, higher-order information is used to parse the signal and because of this, the lexicon and some aspects of sentential context have been shown to modulate the identification of ambiguous phonetic segments. Here, using a phoneme identification task (i.e., participants judged whether they heard [o] or [a] at the end of an adjective in a noun–adjective sequence), we asked whether grammatical gender cues influence phonetic identification and if this influence is shaped by the phonetic properties of the agreeing elements. In three experiments, we show that phrase-level gender agreement in Spanish affects the identification of ambiguous adjective-final vowels. Moreover, this effect is strongest when the phonetic characteristics of the element triggering agreement and the phonetic form of the agreeing element are identical. Our data are consistent with models wherein listeners generate specific predictions based on the interplay of underlying morphosyntactic knowledge and surface phonetic cues.
  • Maskalenka, K., Alagöz, G., Krueger, F., Wright, J., Rostovskaya, M., Nakhuda, A., Bendall, A., Krueger, C., Walker, S., Scally, A., & Rugg-Gunn, P. J. (2023). NANOGP1, a tandem duplicate of NANOG, exhibits partial functional conservation in human naïve pluripotent stem cells. Development, 150(2): dev201155. doi:10.1242/dev.201155.

    Abstract

    Gene duplication events can drive evolution by providing genetic material for new gene functions, and they create opportunities for diverse developmental strategies to emerge between species. To study the contribution of duplicated genes to human early development, we examined the evolution and function of NANOGP1, a tandem duplicate of the transcription factor NANOG. We found that NANOGP1 and NANOG have overlapping but distinct expression profiles, with high NANOGP1 expression restricted to early epiblast cells and naïve-state pluripotent stem cells. Sequence analysis and epitope-tagging revealed that NANOGP1 is protein coding with an intact homeobox domain. The duplication that created NANOGP1 occurred earlier in primate evolution than previously thought and has been retained only in great apes, whereas Old World monkeys have disabled the gene in different ways, including homeodomain point mutations. NANOGP1 is a strong inducer of naïve pluripotency; however, unlike NANOG, it is not required to maintain the undifferentiated status of human naïve pluripotent cells. By retaining expression, sequence and partial functional conservation with its ancestral copy, NANOGP1 exemplifies how gene duplication and subfunctionalisation can contribute to transcription factor activity in human pluripotency and development.
  • Maslowski, M., Meyer, A. S., & Bosker, H. R. (2017). Whether long-term tracking of speech rate affects perception depends on who is talking. In Proceedings of Interspeech 2017 (pp. 586-590). doi:10.21437/Interspeech.2017-1517.

    Abstract

    Speech rate is known to modulate perception of temporally ambiguous speech sounds. For instance, a vowel may be perceived as short when the immediate speech context is slow, but as long when the context is fast. Yet, effects of long-term tracking of speech rate are largely unexplored. Two experiments tested whether long-term tracking of rate influences perception of the temporal Dutch vowel contrast /ɑ/-/a:/. In Experiment 1, one low-rate group listened to 'neutral' rate speech from talker A and to slow speech from talker B. Another high-rate group was exposed to the same neutral speech from A, but to fast speech from B. Between-group comparison of the 'neutral' trials revealed that the low-rate group reported a higher proportion of /a:/ in A's 'neutral' speech, indicating that A sounded faster when B was slow. Experiment 2 tested whether one's own speech rate also contributes to effects of long-term tracking of rate. Here, talker B's speech was replaced by playback of participants' own fast or slow speech. No evidence was found that one's own voice affected perception of talker A in larger speech contexts. These results carry implications for our understanding of the mechanisms involved in rate-dependent speech perception and of dialogue.
  • Massaro, D. W., & Perlman, M. (2017). Quantifying iconicity’s contribution during language acquisition: Implications for vocabulary learning. Frontiers in Communication, 2: 4. doi:10.3389/fcomm.2017.00004.

    Abstract

    Previous research found that iconicity—the motivated correspondence between word form and meaning—contributes to expressive vocabulary acquisition. We present two new experiments with two different databases and with novel analyses to give a detailed quantification of how iconicity contributes to vocabulary acquisition across development, including both receptive understanding and production. The results demonstrate that iconicity is more prevalent early in acquisition and diminishes with increasing age and with increasing vocabulary. In the first experiment, we found that the influence of iconicity on children’s production vocabulary decreased gradually with increasing age. These effects were independent of the observed influence of concreteness, difficulty of articulation, and parental input frequency. Importantly, we substantiated the independence of iconicity, concreteness, and systematicity—a statistical regularity between sounds and meanings. In the second experiment, we found that the average iconicity of both a child’s receptive vocabulary and expressive vocabulary diminished dramatically with increases in vocabulary size. These results indicate that iconic words tend to be learned early in the acquisition of both receptive vocabulary and expressive vocabulary. We recommend that iconicity be included as one of the many different influences on a child’s early vocabulary acquisition. Facing the logically insurmountable challenge to link the form of a novel word (e.g., “gavagai”) with its particular meaning (e.g., “rabbit”; Quine, 1960, 1990/1992), children manage to learn words with incredible ease. Interest in this process has permeated empirical and theoretical research in developmental psychology, psycholinguistics, and language studies more generally. 
Investigators have studied which words are learned and when they are learned (Fenson et al., 1994), biases in word learning (Markman, 1990, 1991); the perceptual, social, and linguistic properties of the words (Gentner, 1982; Waxman, 1999; Maguire et al., 2006; Vosoughi et al., 2010), the structure of the language being learned (Gentner and Boroditsky, 2001), and the influence of the child’s milieu on word learning (Hart and Risley, 1995; Roy et al., 2015). A growing number of studies also show that the iconicity of words might be a significant factor in word learning (Imai and Kita, 2014; Perniss and Vigliocco, 2014; Perry et al., 2015). Iconicity refers generally to a correspondence between the form of a signal (e.g., spoken word, sign, and written character) and its meaning. For example, the sign for tree is iconic in many signed languages: it resembles a branching tree waving above the ground in American Sign Language, outlines the shape of a tree in Danish Sign Language and forms a tree trunk in Chinese Sign Language. In contrast to signed languages, the words of spoken languages have traditionally been treated as arbitrary, with the assumption that the forms of most words bear no resemblance to their meaning (e.g., Hockett, 1960; Pinker and Bloom, 1990). However, there is now a large body of research showing that iconicity is prevalent in the lexicons of many spoken languages (Nuckolls, 1999; Dingemanse et al., 2015). Most languages have an inventory of iconic words for sounds—onomatopoeic words such as splash, slurp, and moo, which sound somewhat like the sound of the real-world event to which they refer. Rhodes (1994), for example, counts more than 100 of these words in English. Many languages also contain large inventories of ideophones—a distinctively iconic class of words that is used to express a variety of sensorimotor-rich meanings (Nuckolls, 1999; Voeltz and Kilian-Hatz, 2001; Dingemanse, 2012). 
For example, in Japanese, the word “koron”—with a voiceless [k] refers to a light object rolling once, the reduplicated “korokoro” to a light object rolling repeatedly, and “gorogoro”—with a voiced [g]—to a heavy object rolling repeatedly (Imai and Kita, 2014). And in Siwu, spoken in Ghana, ideophones include words like fwεfwε “springy, elastic” and saaa “cool sensation” (Dingemanse et al., 2015). Outside of onomatopoeia and ideophones, there is also evidence that adjectives and verbs—which also tend to convey sensorimotor imagery—are also relatively iconic (Nygaard et al., 2009; Perry et al., 2015). Another domain of iconic words involves some correspondence between the point of articulation of a word and its meaning. For example, there appears to be some prevalence across languages of nasal consonants in words for nose and bilabial consonants in words for lip (Urban, 2011). Spoken words can also have a correspondence between a word’s meaning and other aspects of its pronunciation. The word teeny, meaning small, is pronounced with a relatively small vocal tract, with high front vowels characterized by retracted lips and a high-frequency second formant (Ohala, 1994). Thus, teeny can be recognized as iconic of “small” (compared to the larger vocal tract configuration of the back, rounded vowel in huge), a pattern that is documented in the lexicons of a diversity of languages (Ultan, 1978; Blasi et al., 2016). Lewis and Frank (2016) have studied a more abstract form of iconicity that more meaningfully complex words tend to be longer. An evaluation of many diverse languages revealed that conceptually more complex meanings tend to have longer spoken forms. In their study, participants tended to assign a relatively long novel word to a conceptually more complex referent. Understanding that more complex meaning is usually represented by a longer word could aid a child’s parsing of a stream of spoken language and thus facilitate word learning. 
Some developmental psychologists have theorized that iconicity helps young children learn words by “bootstrapping” or “bridging” the association between a symbol and its referent (Imai and Kita, 2014; Perniss and Vigliocco, 2014). According to this idea, children begin to master word learning with the aid of iconic cues, which help to profile the connection between the form of a word and its meaning out in the world. The learning of verbs in particular may benefit from iconicity, as the referents of verbs are more abstract and challenging for young children to identify (Gentner, 1982; Snedeker and Gleitman, 2004). By helping children gain a firmer grasp of the concept of a symbol, iconicity might set the stage for the ensuing word-learning spurt of non-iconic words. The hypothesis that iconicity plays a role in word learning is supported by experimental studies showing that young children are better at learning words—especially verbs—when they are iconic (Imai et al., 2008; Kantartzis et al., 2011; Yoshida, 2012). In one study, for example, 3-year-old Japanese children were taught a set of novel verbs for actions. Some of the words the children learned were iconic (“sound-symbolic”), created on the basis of iconic patterns found in Japanese mimetics (e.g., the novel word nosunosu for a slow manner of walking; Imai et al., 2008). The results showed that children were better able to generalize action words across agents when the verb was iconic of the action compared to when it was not. A subsequent study also using novel verbs based on Japanese mimetics replicated the finding with 3-year-old English-speaking children (Kantartzis et al., 2011). However, it remains to be determined whether children trained in an iconic condition can generalize their learning to a non-iconic condition that would not otherwise be learned. Children as young as 14 months of age have been shown to benefit from iconicity in word learning (Imai et al., 2015). 
These children were better at learning novel words for spiky and rounded shapes when the words were iconic, corresponding to kiki and bouba sound symbolism (e.g., Köhler, 1947; Ramachandran and Hubbard, 2001). If iconic words are indeed easier to learn, there should be a preponderance of iconic words early in the learning of natural languages. There is evidence that this is the case in signed languages, which are widely recognized to contain a prevalence of iconic signs [Klima and Bellugi, 1979; e.g., as evident in Signing Savvy (2016)]. Although the role of iconicity in sign acquisition has been disputed [e.g., Orlansky and Bonvillian, 1984; see Thompson (2011) for discussion], the most thorough study to date found that signs of British Sign Language (BSL) that were learned earlier by children tended to be more iconic (Thompson et al., 2012). Thompson et al.’s measure of the age of acquisition of signs came from parental reports on a version of the MacArthur-Bates Communicative Development Inventory (MCDI; Fenson et al., 1994) adapted for BSL (Woolfe et al., 2010). The iconicity of signs was taken from norms based on BSL signers’ judgments on a scale of 1 (not at all iconic) to 7 [highly iconic; see Vinson et al. (2008) for norming details and BSL videos]. Thompson et al. (2012) found a positive correlation between iconicity judgments and words understood and produced. This relationship held up even after controlling for the contributions of imageability and familiarity. Surprisingly, however, the correlation was significantly stronger for older children (21- to 30-month-olds) than for younger children (11- to 20-month-olds). Thompson et al. suggested that the larger role of iconicity for the older children may result from their increasing cognitive abilities or their greater experience with meaningful form-meaning mappings.
This suggestion, however, does not fit with the bootstrapping hypothesis, in which iconicity should play a larger role earlier in vocabulary learning (Imai and Kita, 2014; Perniss and Vigliocco, 2014). Thus, although these findings support a role for iconicity in word learning, the larger influence for older children is at odds with bootstrapping accounts. There is also evidence in spoken languages that earlier-learned words tend to be more iconic. Perry et al. (2015) collected iconicity ratings for the roughly 600 English and Spanish words that are learned earliest by children, selected from their respective MCDIs. Native speakers on Amazon Mechanical Turk rated the iconicity of the words on a scale from −5 to 5, where 5 indicated that a word was highly iconic, −5 that it sounded like the opposite of its meaning, and 0 that it was completely arbitrary. Their instructions to raters are given in the Appendix because the same instructions were used for acquiring our iconicity ratings. Perry et al. (2015) found that the likelihood of a word being in children’s production vocabulary at 30 months, in both English and Spanish, was positively correlated with its iconicity rating, even when several other possible contributing factors were partialed out, including log word frequency, concreteness, and word length. The pattern in Spanish held for two collections of iconicity ratings, one with the verbs of the 600-word set presented in infinitive form and one with the verbs conjugated in the third person singular. In English, the correlation between age of acquisition and iconicity held whether the ratings were collected for words presented in written form only or in written form plus a spoken recording. It also held for ratings based on a more implicit measure of iconicity, in which participants rated how accurately a space alien could guess the meaning of a word from its sound alone.
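The partialing-out analysis described above can be illustrated with a first-order partial correlation: the correlation between iconicity and vocabulary scores once a covariate (such as log word frequency) is regressed out. The sketch below is purely illustrative, not Perry et al.'s (2015) actual code, and all data values are invented toy numbers.

```python
# Illustrative sketch (NOT Perry et al.'s actual analysis) of a first-order
# partial correlation between iconicity ratings and production vocabulary,
# with one covariate (log frequency) partialed out. All data are toy values.
from math import sqrt

def pearson(xs, ys):
    # Plain Pearson correlation coefficient.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def partial_corr(xs, ys, zs):
    # Correlation between x and y after removing the linear influence of z.
    rxy, rxz, ryz = pearson(xs, ys), pearson(xs, zs), pearson(ys, zs)
    return (rxy - rxz * ryz) / sqrt((1 - rxz ** 2) * (1 - ryz ** 2))

# Hypothetical data: iconicity ratings (-5..5), proportion of children
# producing the word at 30 months, and log frequency as the covariate.
iconicity = [4.2, 3.1, 0.5, -0.3, 1.8, 2.6]
production = [0.9, 0.8, 0.4, 0.3, 0.6, 0.7]
log_freq = [2.1, 3.4, 2.8, 1.9, 3.0, 2.5]

print(f"zero-order r = {pearson(iconicity, production):.2f}")
print(f"partial r (frequency partialed out) = "
      f"{partial_corr(iconicity, production, log_freq):.2f}")
```

With more covariates (concreteness, word length), the same idea generalizes to regressing both variables on all covariates and correlating the residuals.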
The pattern in English also held when Perry et al. (2015) factored out the systematicity of words [taken from Monaghan et al. (2014)]. Systematicity is measured as a correlation between form similarity and meaning similarity—that is, the degree to which words with similar meanings have similar forms. Monaghan et al. computed systematicity for a large number of English words and found a negative correlation with age of acquisition from 2 to 13+ years of age: more systematic words are learned earlier. Monaghan et al. (2014) and Christiansen and Chater (2016) observe that consistent sound-meaning patterns may facilitate early vocabulary acquisition, but that the child must soon master the arbitrary form-meaning relationships necessitated by increases in vocabulary size. In theory, systematicity, sometimes called “relative iconicity,” is independent of iconicity. For example, the English cluster gl– occurs systematically in several words related to “vision” and “light,” such as glitter, glimmer, and glisten (Bergen, 2004), but the segments bear no obvious resemblance to this meaning. Monaghan et al. (2014) question whether spoken languages afford sufficient degrees of articulatory freedom for words to be iconic but not systematic. As evidence, they give the example of onomatopoeic words for the calls of small animals (e.g., peep and cheep) versus calls of big animals (roar and grrr), which systematically reflect the size of the animal. Although Perry et al. (2015) found a positive effect of iconicity at 30 months, they did not evaluate its influence across the first years of a child’s life. To address this question, we conduct a more detailed examination of the time course of iconicity in word learning across the first 4 years of expressive vocabulary acquisition. In addition, we examine the role of iconicity in the acquisition of receptive vocabulary as well as productive vocabulary.
There is some evidence that although receptive vocabulary and productive vocabulary are correlated with one another, a given variable may not influence these two expressions of vocabulary equally. Massaro and Rowe (2015), for example, showed that difficulty of articulation had a strong effect on word production but not on word comprehension. Thus, it is possible that the influence of iconicity on vocabulary development differs between production and comprehension. In particular, a larger influence on comprehension would follow from the bootstrapping hypothesis’s emphasis on iconicity serving to perceptually cue children to the connection between the sound of a word and its meaning.
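The systematicity measure described above—a correlation between pairwise form similarity and pairwise meaning similarity across a lexicon—can be sketched in a few lines. This is a minimal illustration, not Monaghan et al.'s (2014) implementation: the mini-lexicon and its meaning vectors are invented, form similarity is approximated with normalized edit distance, and meaning similarity with cosine similarity.

```python
# Illustrative sketch (NOT Monaghan et al.'s implementation) of systematicity
# as the correlation between pairwise form and meaning similarity.
from itertools import combinations
from math import sqrt

def edit_distance(a, b):
    # Standard Levenshtein distance via dynamic programming.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1, curr[j - 1] + 1,
                            prev[j - 1] + (ca != cb)))
        prev = curr
    return prev[-1]

def form_similarity(a, b):
    # 1 = identical forms, 0 = maximally different.
    return 1 - edit_distance(a, b) / max(len(a), len(b))

def cosine(u, v):
    dot = sum(x * y for x, y in zip(u, v))
    return dot / (sqrt(sum(x * x for x in u)) * sqrt(sum(y * y for y in v)))

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical mini-lexicon: word form -> toy meaning vector.
lexicon = {
    "glimmer": (0.9, 0.1, 0.0),
    "glitter": (0.8, 0.2, 0.1),
    "walk":    (0.0, 0.9, 0.2),
    "run":     (0.1, 0.8, 0.3),
}

form_sims, meaning_sims = [], []
for (w1, m1), (w2, m2) in combinations(lexicon.items(), 2):
    form_sims.append(form_similarity(w1, w2))
    meaning_sims.append(cosine(m1, m2))

print(f"systematicity = {pearson(form_sims, meaning_sims):.2f}")
```

A positive value indicates that words with similar meanings tend to have similar forms; by construction, the measure says nothing about whether any form resembles its referent, which is why systematicity and iconicity can dissociate.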
  • Mazzini, S., Holler, J., & Drijvers, L. (2023). Studying naturalistic human communication using dual-EEG and audio-visual recordings. STAR Protocols, 4(3): 102370. doi:10.1016/j.xpro.2023.102370.

    Abstract

    We present a protocol to study naturalistic human communication using dual-EEG and audio-visual recordings. We describe preparatory steps for data collection, including setup preparation, experiment design, and piloting. We then describe the data collection process in detail, which consists of participant recruitment, experiment room preparation, and data collection. We also outline the kinds of research questions that can be addressed with the current protocol, including several analysis possibilities, from conversational to advanced time-frequency analyses.
    For complete details on the use and execution of this protocol, please refer to Drijvers and Holler (2022).
  • McConnell, K. (2023). Individual Differences in Holistic and Compositional Language Processing. Journal of Cognition, 6. doi:10.5334/joc.283.

    Abstract

    Individual differences in cognitive abilities are ubiquitous across the spectrum of proficient language users. Although speakers differ with regard to their memory capacity, ability for inhibiting distraction, and ability to shift between different processing levels, comprehension is generally successful. However, this does not mean it is identical across individuals; listeners and readers may rely on different processing strategies to exploit distributional information in the service of efficient understanding. In the following psycholinguistic reading experiment, we investigate potential sources of individual differences in the processing of co-occurring words. Participants read modifier-noun bigrams like absolute silence in a self-paced reading task. Backward transition probability (BTP) between the two lexemes was used to quantify the prominence of the bigram as a whole in comparison to the frequency of its parts. Of five individual difference measures (processing speed, verbal working memory, cognitive inhibition, global-local scope shifting, and personality), two proved to be significantly associated with the effect of BTP on reading times. Participants who could inhibit a distracting global environment in order to more efficiently retrieve a single part and those that preferred the local level in the shifting task showed greater effects of the co-occurrence probability of the parts. We conclude that some participants are more likely to retrieve bigrams via their parts and their co-occurrence statistics whereas others more readily retrieve the two words together as a single chunked unit.
  • McLaughlin, R. L., Schijven, D., Van Rheenen, W., Van Eijk, K. R., O’Brien, M., Project MinE GWAS Consortium, Schizophrenia Working Group of the Psychiatric Genomics Consortium, Kahn, R. S., Ophoff, R. A., Goris, A., Bradley, D. G., Al-Chalabi, A., van den Berg, L. H., Luykx, J. J., Hardiman, O., & Veldink, J. H. (2017). Genetic correlation between amyotrophic lateral sclerosis and schizophrenia. Nature Communications, 8: 14774. doi:10.1038/ncomms14774.

    Abstract

    We have previously shown higher-than-expected rates of schizophrenia in relatives of patients with amyotrophic lateral sclerosis (ALS), suggesting an aetiological relationship between the diseases. Here, we investigate the genetic relationship between ALS and schizophrenia using genome-wide association study data from over 100,000 unique individuals. Using linkage disequilibrium score regression, we estimate the genetic correlation between ALS and schizophrenia to be 14.3% (7.05–21.6; P = 1 × 10⁻⁴) with schizophrenia polygenic risk scores explaining up to 0.12% of the variance in ALS (P = 8.4 × 10⁻⁷). A modest increase in comorbidity of ALS and schizophrenia is expected given these findings (odds ratio 1.08–1.26) but this would require very large studies to observe epidemiologically. We identify five potential novel ALS-associated loci using conditional false discovery rate analysis. It is likely that shared neurobiological mechanisms between these two disorders will engender novel hypotheses in future preclinical and clinical studies.
  • McLean, B., Dunn, M., & Dingemanse, M. (2023). Two measures are better than one: Combining iconicity ratings and guessing experiments for a more nuanced picture of iconicity in the lexicon. Language and Cognition, 15(4), 719-739. doi:10.1017/langcog.2023.9.

    Abstract

    Iconicity in language is receiving increased attention from many fields, but our understanding of iconicity is only as good as the measures we use to quantify it. We collected iconicity measures for 304 Japanese words from English-speaking participants, using rating and guessing tasks. The words included ideophones (structurally marked depictive words) along with regular lexical items from similar semantic domains (e.g., fuwafuwa ‘fluffy’, yawarakai ‘soft’). The two measures correlated, speaking to their validity. However, ideophones received consistently higher iconicity ratings than other items, even when guessed at the same accuracies, suggesting the rating task is more sensitive to cues like structural markedness that frame words as iconic. These cues did not always guide participants to the meanings of ideophones in the guessing task, but they did make them more confident in their guesses, even when they were wrong. Consistently poor guessing results reflect the role different experiences play in shaping construals of iconicity. Using multiple measures in tandem allows us to explore the interplay between iconicity and these external factors. To facilitate this, we introduce a reproducible workflow for creating rating and guessing tasks from standardised wordlists, while also making improvements to the robustness, sensitivity and discriminability of previous approaches.
  • McQueen, J. M., Jesse, A., & Mitterer, H. (2023). Lexically mediated compensation for coarticulation still as elusive as a white christmash. Cognitive Science: a multidisciplinary journal, 47(9): e13342. doi:10.1111/cogs.13342.

    Abstract

    Luthra, Peraza-Santiago, Beeson, Saltzman, Crinnion, and Magnuson (2021) present data from the lexically mediated compensation for coarticulation paradigm that they claim provides conclusive evidence in favor of top-down processing in speech perception. We argue here that this evidence does not support that conclusion. The findings are open to alternative explanations, and we give data in support of one of them (that there is an acoustic confound in the materials). Lexically mediated compensation for coarticulation thus remains elusive, while prior data from the paradigm instead challenge the idea that there is top-down processing in online speech recognition.

    Additional information

    supplementary materials
  • Menks, W. M., Furger, R., Lenz, C., Fehlbaum, L. V., Stadler, C., & Raschle, N. M. (2017). Microstructural white matter alterations in the corpus callosum of girls with conduct disorder. Journal of the American Academy of Child & Adolescent Psychiatry, 56, 258-265. doi:10.1016/j.jaac.2016.12.006.

    Abstract

    Objective

    Diffusion tensor imaging (DTI) studies in adolescent conduct disorder (CD) have demonstrated white matter alterations of tracts connecting functionally distinct fronto-limbic regions, but only in boys or mixed-gender samples. So far, no study has investigated white matter integrity in girls with CD on a whole-brain level. Therefore, our aim was to investigate white matter alterations in adolescent girls with CD.
    Method

    We collected high-resolution DTI data from 24 girls with CD and 20 typically developing control girls using a 3T magnetic resonance imaging system. Fractional anisotropy (FA) and mean diffusivity (MD) were analyzed for whole-brain as well as a priori-defined regions of interest, while controlling for age and intelligence, using a voxel-based analysis and an age-appropriate customized template.
    Results

    Whole-brain findings revealed white matter alterations (i.e., increased FA) in girls with CD bilaterally within the body of the corpus callosum, expanding toward the right cingulum and left corona radiata. The FA and MD results in a priori-defined regions of interest were more widespread and included changes in the cingulum, corona radiata, fornix, and uncinate fasciculus. These results were not driven by age, intelligence, or attention-deficit/hyperactivity disorder comorbidity.
    Conclusion

    This report provides the first evidence of white matter alterations in female adolescents with CD as indicated through white matter reductions in callosal tracts. This finding enhances current knowledge about the neuropathological basis of female CD. An increased understanding of gender-specific neuronal characteristics in CD may influence diagnosis, early detection, and successful intervention strategies.
  • Meyer, A. S., & Gerakaki, S. (2017). The art of conversation: Why it’s harder than you might think. Contact Magazine, 43(2), 11-15. Retrieved from http://contact.teslontario.org/the-art-of-conversation-why-its-harder-than-you-might-think/.
  • Meyer, A. S. (2017). Structural priming is not a Royal Road to representations. Commentary on Branigan and Pickering "An experimental approach to linguistic representation". Behavioral and Brain Sciences, 40: e305. doi:10.1017/S0140525X1700053X.

    Abstract

    Branigan & Pickering (B&P) propose that the structural priming paradigm is a Royal Road to linguistic representations of any kind, unobstructed by influences of psychological processes. In my view, however, they are too optimistic about the versatility of the paradigm and, more importantly, its ability to provide direct evidence about the nature of stored linguistic representations.
  • Meyer, A. S. (2023). Timing in conversation. Journal of Cognition, 6(1), 1-17. doi:10.5334/joc.268.

    Abstract

    Turn-taking in everyday conversation is fast, with median latencies in corpora of conversational speech often reported to be under 300 ms. This seems like magic, given that experimental research on speech planning has shown that speakers need much more time to plan and produce even the shortest of utterances. This paper reviews how language scientists have combined linguistic analyses of conversations and experimental work to understand the skill of swift turn-taking and proposes a tentative solution to the riddle of fast turn-taking.
  • Mickan, A., McQueen, J. M., Brehm, L., & Lemhöfer, K. (2023). Individual differences in foreign language attrition: A 6-month longitudinal investigation after a study abroad. Language, Cognition and Neuroscience, 38(1), 11-39. doi:10.1080/23273798.2022.2074479.

    Abstract

    While recent laboratory studies suggest that the use of competing languages is a driving force in foreign language (FL) attrition (i.e. forgetting), research on “real” attriters has failed to demonstrate such a relationship. We addressed this issue in a large-scale longitudinal study, following German students throughout a study abroad in Spain and their first six months back in Germany. Monthly, percentage-based frequency of use measures enabled a fine-grained description of language use. L3 Spanish forgetting rates were indeed predicted by the quantity and quality of Spanish use, and correlated negatively with L1 German and positively with L2 English letter fluency. Attrition rates were furthermore influenced by prior Spanish proficiency, but not by motivation to maintain Spanish or non-verbal long-term memory capacity. Overall, this study highlights the importance of language use for FL retention and sheds light on the complex interplay between language use and other determinants of attrition.
  • Mishra, C., Offrede, T., Fuchs, S., Mooshammer, C., & Skantze, G. (2023). Does a robot’s gaze aversion affect human gaze aversion? Frontiers in Robotics and AI, 10: 1127626. doi:10.3389/frobt.2023.1127626.

    Abstract

    Gaze cues serve an important role in facilitating human conversations and are generally considered to be one of the most important non-verbal cues. Gaze cues are used to manage turn-taking, coordinate joint attention, regulate intimacy, and signal cognitive effort. In particular, it is well established that gaze aversion is used in conversations to avoid prolonged periods of mutual gaze. Given the numerous functions of gaze cues, there has been extensive work on modelling these cues in social robots. Researchers have also tried to identify the impact of robot gaze on human participants. However, the influence of robot gaze behavior on human gaze behavior has been less explored. We conducted a within-subjects user study (N = 33) to verify if a robot’s gaze aversion influenced human gaze aversion behavior. Our results show that participants tend to avert their gaze more when the robot keeps staring at them as compared to when the robot exhibits well-timed gaze aversions. We interpret our findings in terms of intimacy regulation: humans try to compensate for the robot’s lack of gaze aversion.
  • Mishra, C., Verdonschot, R. G., Hagoort, P., & Skantze, G. (2023). Real-time emotion generation in human-robot dialogue using large language models. Frontiers in Robotics and AI, 10: 1271610. doi:10.3389/frobt.2023.1271610.

    Abstract

    Affective behaviors enable social robots to not only establish better connections with humans but also serve as a tool for the robots to express their internal states. It has been well established that emotions are important to signal understanding in Human-Robot Interaction (HRI). This work aims to harness the power of Large Language Models (LLM) and proposes an approach to control the affective behavior of robots. By interpreting emotion appraisal as an Emotion Recognition in Conversation (ERC) task, we used GPT-3.5 to predict the emotion of a robot’s turn in real-time, using the dialogue history of the ongoing conversation. The robot signaled the predicted emotion using facial expressions. The model was evaluated in a within-subjects user study (N = 47) where the model-driven emotion generation was compared against conditions where the robot did not display any emotions and where it displayed incongruent emotions. The participants interacted with the robot by playing a card sorting game that was specifically designed to evoke emotions. The results indicated that the emotions were reliably generated by the LLM and that the participants were able to perceive the robot’s emotions. A robot expressing congruent, model-driven facial emotion expressions was perceived as significantly more human-like and emotionally appropriate, and elicited a more positive impression. Participants also scored significantly better in the card sorting game when the robot displayed congruent facial expressions. From a technical perspective, the study shows that LLMs can be used to control the affective behavior of robots reliably in real-time. Additionally, our results could be used in devising novel human-robot interactions, making robots more effective in roles where emotional interaction is important, such as therapy, companionship, or customer service.
  • Moers, C., Meyer, A. S., & Janse, E. (2017). Effects of word frequency and transitional probability on word reading durations of younger and older speakers. Language and Speech, 60(2), 289-317. doi:10.1177/0023830916649215.

    Abstract

    High-frequency units are usually processed faster than low-frequency units in language comprehension and language production. Frequency effects have been shown for words as well as word combinations. Word co-occurrence effects can be operationalized in terms of transitional probability (TP). TPs reflect how probable a word is, conditioned by its right or left neighbouring word. This corpus study investigates whether three different age groups–younger children (8–12 years), adolescents (12–18 years) and older (62–95 years) Dutch speakers–show frequency and TP context effects on spoken word durations in reading aloud, and whether age groups differ in the size of these effects. Results show consistent effects of TP on word durations for all age groups. Thus, TP seems to influence the processing of words in context, beyond the well-established effect of word frequency, across the entire age range. However, the study also indicates that age groups differ in the size of TP effects, with older adults having smaller TP effects than adolescent readers. Our results show that probabilistic reduction effects in reading aloud may at least partly stem from contextual facilitation that leads to faster reading times in skilled readers, as well as in young language learners.
  • Moers, C. (2017). The neighbors will tell you what to expect: Effects of aging and predictability on language processing. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Moisik, S. R., & Dediu, D. (2017). Anatomical biasing and clicks: Evidence from biomechanical modeling. Journal of Language Evolution, 2(1), 37-51. doi:10.1093/jole/lzx004.

    Abstract

    It has been observed by several researchers that the Khoisan palate tends to lack a prominent alveolar ridge. A biomechanical model of click production was created to examine if these sounds might be subject to an anatomical bias associated with alveolar ridge size. Results suggest the bias is plausible, taking the form of decreased articulatory effort and improved volume change characteristics; however, further modeling and experimental research is required to solidify the claim.

    Additional information

    lzx004_Supp.zip
  • Moisik, S. R., & Gick, B. (2017). The quantal larynx: The stable regions of laryngeal biomechanics and implications for speech production. Journal of Speech, Language, and Hearing Research, 60, 540-560. doi:10.1044/2016_JSLHR-S-16-0019.

    Abstract

    Purpose: Recent proposals suggest that (a) the high dimensionality of speech motor control may be reduced via modular neuromuscular organization that takes advantage of intrinsic biomechanical regions of stability and (b) computational modeling provides a means to study whether and how such modularization works. In this study, the focus is on the larynx, a structure that is fundamental to speech production because of its role in phonation and numerous articulatory functions. Method: A 3-dimensional model of the larynx was created using the ArtiSynth platform (http://www.artisynth.org). This model was used to simulate laryngeal articulatory states, including inspiration, glottal fricative, modal prephonation, plain glottal stop, vocal–ventricular stop, and aryepiglotto–epiglottal stop and fricative. Results: Speech-relevant laryngeal biomechanics is rich with “quantal” or highly stable regions within muscle activation space. Conclusions: Quantal laryngeal biomechanics complement a modular view of speech control and have implications for the articulatory–biomechanical grounding of numerous phonetic and phonological phenomena.
  • Monaghan, P. (2017). Canalization of language structure from environmental constraints: A computational model of word learning from multiple cues. Topics in Cognitive Science, 9(1), 21-34. doi:10.1111/tops.12239.

    Abstract

    There is substantial variation in language experience, yet there is surprising similarity in the language structure acquired. Constraints on language structure may be external modulators that result in this canalization of language structure, or else they may derive from the broader, communicative environment in which language is acquired. In this paper, the latter perspective is tested for its adequacy in explaining robustness of language learning to environmental variation. A computational model of word learning from cross‐situational, multimodal information was constructed and tested. Key to the model's robustness was the presence of multiple, individually unreliable information sources to support learning. This “degeneracy” in the language system has a detrimental effect on learning, compared to a noise‐free environment, but has a critically important effect on acquisition of a canalized system that is resistant to environmental noise in communication.
  • Monaghan, P., & Rowland, C. F. (2017). Combining language corpora with experimental and computational approaches for language acquisition research. Language Learning, 67(S1), 14-39. doi:10.1111/lang.12221.

    Abstract

    Historically, first language acquisition research was a painstaking process of observation, requiring the laborious hand coding of children's linguistic productions, followed by the generation of abstract theoretical proposals for how the developmental process unfolds. Recently, the ability to collect large-scale corpora of children's language exposure has revolutionized the field. New techniques enable more precise measurements of children's actual language input, and these corpora constrain computational and cognitive theories of language development, which can then generate predictions about learning behavior. We describe several instances where corpus, computational, and experimental work have been productively combined to uncover the first language acquisition process and the richness of multimodal properties of the environment, highlighting how these methods can be extended to address related issues in second language research. Finally, we outline some of the difficulties that can be encountered when applying multimethod approaches and show how these difficulties can be obviated.
  • Monaghan, P., Brand, J., Frost, R. L. A., & Taylor, G. (2017). Multiple variable cues in the environment promote accurate and robust word learning. In G. Gunzelman, A. Howes, T. Tenbrink, & E. Davelaar (Eds.), Proceedings of the 39th Annual Conference of the Cognitive Science Society (CogSci 2017) (pp. 817-822). Retrieved from https://mindmodeling.org/cogsci2017/papers/0164/index.html.

    Abstract

    Learning how words refer to aspects of the environment is a complex task, but one that is supported by numerous cues within the environment which constrain the possibilities for matching words to their intended referents. In this paper we tested the predictions of a computational model of multiple cue integration for word learning, that predicted variation in the presence of cues provides an optimal learning situation. In a cross-situational learning task with adult participants, we varied the reliability of presence of distributional, prosodic, and gestural cues. We found that the best learning occurred when cues were often present, but not always. The effect of variability increased the salience of individual cues for the learner, but resulted in robust learning that was not vulnerable to individual cues’ presence or absence. Thus, variability of multiple cues in the language-learning environment provided the optimal circumstances for word learning.
  • Monaghan, P., Chang, Y.-N., Welbourne, S., & Brysbaert, M. (2017). Exploring the relations between word frequency, language exposure, and bilingualism in a computational model of reading. Journal of Memory and Language, 93, 1-27. doi:10.1016/j.jml.2016.08.003.

    Abstract

    Individuals show differences in the extent to which psycholinguistic variables predict their responses for lexical processing tasks. A key variable accounting for much variance in lexical processing is frequency, but the size of the frequency effect has been demonstrated to reduce as a consequence of the individual’s vocabulary size. Using a connectionist computational implementation of the triangle model on a large set of English words, where orthographic, phonological, and semantic representations interact during processing, we show that the model demonstrates a reduced frequency effect as a consequence of amount of exposure to the language, a variable that was also a cause of greater vocabulary size in the model. The model was also trained to learn a second language, Dutch, and replicated behavioural observations that increased proficiency in a second language resulted in reduced frequency effects for that language but increased frequency effects in the first language. The model provides a first step to demonstrating causal relations between psycholinguistic variables in a model of individual differences in lexical processing, and the effect of bilingualism on interacting variables within the language processing system.
  • Monaghan, P., Donnelly, S., Alcock, K., Bidgood, A., Cain, K., Durrant, S., Frost, R. L. A., Jago, L. S., Peter, M. S., Pine, J. M., Turnbull, H., & Rowland, C. F. (2023). Learning to generalise but not segment an artificial language at 17 months predicts children’s language skills 3 years later. Cognitive Psychology, 147: 101607. doi:10.1016/j.cogpsych.2023.101607.

    Abstract

    We investigated whether learning an artificial language at 17 months was predictive of children’s natural language vocabulary and grammar skills at 54 months. Children at 17 months listened to an artificial language containing non-adjacent dependencies, and were then tested on their learning to segment and to generalise the structure of the language. At 54 months, children were then tested on a range of standardised natural language tasks that assessed receptive and expressive vocabulary and grammar. A structural equation model demonstrated that learning the artificial language generalisation at 17 months predicted language abilities – a composite of vocabulary and grammar skills – at 54 months, whereas artificial language segmentation at 17 months did not predict language abilities at this age. Artificial language learning tasks – especially those that probe grammar learning – provide a valuable tool for uncovering the mechanisms driving children’s early language development.

    Additional information

    supplementary data
  • Mongelli, V., Dehaene, S., Vinckier, F., Peretz, I., Bartolomeo, P., & Cohen, L. (2017). Music and words in the visual cortex: The impact of musical expertise. Cortex, 86, 260-274. doi:10.1016/j.cortex.2016.05.016.

    Abstract

    How does the human visual system accommodate expertise for two simultaneously acquired symbolic systems? We used fMRI to compare activations induced in the visual cortex by musical notation, written words and other classes of objects, in professional musicians and in musically naïve controls. First, irrespective of expertise, selective activations for music were posterior and lateral to activations for words in the left occipitotemporal cortex. This indicates that symbols characterized by different visual features engage distinct cortical areas. Second, musical expertise increased the volume of activations for music and led to an anterolateral displacement of word-related activations. In musicians, there was also a dramatic increase of the brain-scale networks connected to the music-selective visual areas. Those findings reveal that acquiring a double visual expertise involves an expansion of category-selective areas, the development of novel long-distance functional connectivity, and possibly some competition between categories for the colonization of cortical space.
  • Montero-Melis, G., & Bylund, E. (2017). Getting the ball rolling: the cross-linguistic conceptualization of caused motion. Language and Cognition, 9(3), 446–472. doi:10.1017/langcog.2016.22.

    Abstract

    Does the way we talk about events correspond to how we conceptualize them? Three experiments (N = 135) examined how Spanish and Swedish native speakers judge event similarity in the domain of caused motion (‘He rolled the tyre into the barn’). Spanish and Swedish motion descriptions regularly encode path (‘into’), but differ in how systematically they include manner information (‘roll’). We designed a similarity arrangement task which allowed participants to give varying weights to different dimensions when gauging event similarity. The three experiments progressively reduced the likelihood that speakers were using language to solve the task. We found that, as long as the use of language was possible (Experiments 1 and 2), Swedish speakers were more likely than Spanish speakers to base their similarity arrangements on object manner (rolling/sliding). However, when recruitment of language was hindered through verbal interference, cross-linguistic differences disappeared (Experiment 3). A compound analysis of all experiments further showed that (i) cross-linguistic differences were played out against a backdrop of commonly represented event components, and (ii) describing vs. not describing the events did not augment cross-linguistic differences, but instead had similar effects across languages. We interpret these findings as suggesting a dynamic role of language in event conceptualization.
