Publications

  • Bujok, R., Meyer, A. S., & Bosker, H. R. (2024). Audiovisual perception of lexical stress: Beat gestures and articulatory cues. Language and Speech. Advance online publication. doi:10.1177/00238309241258162.

    Abstract

    Human communication is inherently multimodal. Not only auditory speech but also visual cues can be used to understand another talker. Most studies of audiovisual speech perception have focused on the perception of speech segments (i.e., speech sounds). However, less is known about the influence of visual information on the perception of suprasegmental aspects of speech like lexical stress. In two experiments, we investigated the influence of different visual cues (e.g., facial articulatory cues and beat gestures) on the audiovisual perception of lexical stress. We presented auditory lexical stress continua of disyllabic Dutch stress pairs together with videos of a speaker producing stress on the first or second syllable (e.g., articulating VOORnaam or voorNAAM). Moreover, we combined and fully crossed the face of the speaker producing lexical stress on either syllable with a gesturing body producing a beat gesture on either the first or second syllable. Results showed that people successfully used visual articulatory cues to stress in muted videos. However, in audiovisual conditions, we were not able to find an effect of visual articulatory cues. In contrast, we found that the temporal alignment of beat gestures with speech robustly influenced participants' perception of lexical stress. These results highlight the importance of considering suprasegmental aspects of language in multimodal contexts.
  • Bulut, T., Hung, Y., Tzeng, O., & Wu, D. (2017). Neural correlates of processing sentences and compound words in Chinese. PLOS ONE, 12(12): e0188526. doi:10.1371/journal.pone.0188526.
  • Bulut, T. (2023). Domain‐general and domain‐specific functional networks of Broca's area underlying language processing. Brain and Behavior, 13(7): e3046. doi:10.1002/brb3.3046.

    Abstract

    Introduction
    Despite abundant research on the role of Broca's area in language processing, there is still no consensus on the language specificity of this region and its connectivity network.

    Methods
    The present study employed the meta-analytic connectivity modeling procedure to identify and compare domain-specific (language-specific) and domain-general (shared between language and other domains) functional connectivity patterns of three subdivisions within the broadly defined Broca's area: pars opercularis (IFGop), pars triangularis (IFGtri), and pars orbitalis (IFGorb) of the left inferior frontal gyrus.

    Results
    The findings revealed a left-lateralized frontotemporal network for all regions of interest underlying domain-specific linguistic functions. The domain-general network, however, spanned frontoparietal regions that overlap with the multiple-demand network, as well as subcortical regions including the thalamus and the basal ganglia.

    Conclusions
    The findings suggest that language specificity of Broca's area emerges within a left-lateralized frontotemporal network, and that domain-general resources are garnered from frontoparietal and subcortical networks when required by task demands.

    Additional information

    Supporting Information
    Data availability
  • Bulut, T., & Hagoort, P. (2024). Contributions of the left and right thalami to language: A meta-analytic approach. Brain Structure & Function. Advance online publication. doi:10.1007/s00429-024-02795-3.

    Abstract

    Background
    Despite a pervasive cortico-centric view in cognitive neuroscience, subcortical structures including the thalamus have been shown to be increasingly involved in higher cognitive functions. Previous structural and functional imaging studies demonstrated cortico-thalamo-cortical loops which may support various cognitive functions including language. However, large-scale functional connectivity of the thalamus during language tasks has not been examined before.

    Methods
    The present study employed meta-analytic connectivity modeling to identify language-related coactivation patterns of the left and right thalami. The left and right thalami were used as regions of interest to search the BrainMap functional database for neuroimaging experiments with healthy participants reporting language-related activations in each region of interest. Activation likelihood estimation analyses were then carried out on the foci extracted from the identified studies to estimate functional convergence for each thalamus. A functional decoding analysis based on the same database was conducted to characterize thalamic contributions to different language functions.

    Results
    The results revealed bilateral frontotemporal and bilateral subcortical (basal ganglia) coactivation patterns for both the left and right thalami, and also right cerebellar coactivations for the left thalamus, during language processing. In light of previous empirical studies and theoretical frameworks, the present connectivity and functional decoding findings suggest that cortico-subcortical-cerebellar-cortical loops modulate and fine-tune information transfer within the bilateral frontotemporal cortices during language processing, especially during production and semantic operations, but also other language (e.g., syntax, phonology) and cognitive operations (e.g., attention, cognitive control).

    Conclusion
    The current findings show that the language-relevant network extends beyond the classical left perisylvian cortices and spans bilateral cortical, bilateral subcortical (bilateral thalamus, bilateral basal ganglia) and right cerebellar regions.

    Additional information

    supplementary information
  • Bulut, T., & Temiz, G. (2024). Cortical organization of action and object naming in Turkish: A transcranial magnetic stimulation study. Psikoloji Çalışmaları / Studies in Psychology, 44(2), 235-254. doi:10.26650/SP2023-1279982.

    Abstract

    It is controversial whether the linguistic distinction between nouns and verbs is reflected in the cortical organization of the lexicon. Neuropsychological studies of aphasia and neuroimaging studies have associated the left prefrontal cortex, particularly Broca’s area, with verbs/actions, and the left posterior temporal cortex, particularly Wernicke’s area, with nouns/objects. However, more recent research has revealed that evidence for this distinction is inconsistent. Against this background, the present study employed low-frequency repetitive transcranial magnetic stimulation (rTMS) to investigate the dissociation of action and object naming in Broca’s and Wernicke’s areas in Turkish. Thirty-six healthy adult participants took part in the study. In two experiments, low-frequency (1 Hz) inhibitory rTMS was administered at 100% of motor threshold for 10 minutes to suppress the activity of the left prefrontal cortex spanning Broca’s area or the left posterior temporal cortex spanning Wernicke’s area. A picture naming task involving objects and actions was employed before and after the stimulation sessions to examine any pre- to post-stimulation changes in naming latencies. Linear mixed models that included various psycholinguistic covariates, including frequency, visual and conceptual complexity, age of acquisition, name agreement and word length, were fitted to the data. The findings showed that conceptual complexity, age of acquisition of the target word and name agreement had a significant effect on naming latencies, which was consistent across both experiments. Critically, the findings implicated Broca’s area, but not Wernicke’s area, in the distinction between naming objects and actions. Suppression of Broca’s area led to a significant and robust increase in naming latencies (a slowdown) for objects and a marginally significant, but not robust, reduction in naming latencies (a speedup) for actions. The findings suggest that actions and objects in Turkish can be dissociated in Broca’s area.
  • Burchardt, L., Van de Sande, Y., Kehy, M., Gamba, M., Ravignani, A., & Pouw, W. (2024). A toolkit for the dynamic study of air sacs in siamang and other elastic circular structures. PLOS Computational Biology, 20(6): e1012222. doi:10.1371/journal.pcbi.1012222.

    Abstract

    Biological structures are defined by rigid elements, such as bones, and elastic elements, like muscles and membranes. Computer vision advances have enabled automatic tracking of moving animal skeletal poses. Such developments provide insights into complex time-varying dynamics of biological motion. In contrast, the elastic soft tissues of organisms, like the nose of elephant seals or the buccal sac of frogs, are poorly studied, and no computer vision methods have been proposed to track them. This leaves major gaps in different areas of biology. In primatology, most critically, the function of air sacs is widely debated; many open questions on the role of air sacs in the evolution of animal communication, including human speech, remain unanswered. To support the dynamic study of soft-tissue structures, we present a toolkit for the automated tracking of semi-circular elastic structures in biological video data. The toolkit contains unsupervised computer vision tools (using the Hough transform) and supervised deep-learning methodology (adapting DeepLabCut) to track the inflation of laryngeal air sacs or other biological spherical objects (e.g., gular cavities). Confirming the value of elastic kinematic analysis, we show that air sac inflation correlates with acoustic markers that likely inform about body size. Finally, we present a pre-processed audiovisual-kinematic dataset of over 7 hours of closeup audiovisual recordings of siamang (Symphalangus syndactylus) singing. This toolkit (https://github.com/WimPouw/AirSacTracker) aims to revitalize the study of non-skeletal morphological structures across multiple species.
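    For illustration, a minimal sketch of the kind of unsupervised Hough-transform tracking the abstract describes, assuming OpenCV in Python; the file name and detector parameters are hypothetical and not taken from the AirSacTracker toolkit:

      import cv2
      import numpy as np

      cap = cv2.VideoCapture("siamang_closeup.mp4")  # hypothetical input video
      radii = []                                     # per-frame estimates of sac radius
      while True:
          ok, frame = cap.read()
          if not ok:
              break
          gray = cv2.medianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), 5)
          # Detect the most salient circle; parameter values are illustrative only.
          circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=200,
                                     param1=100, param2=40, minRadius=20, maxRadius=250)
          if circles is not None:
              x, y, r = np.round(circles[0, 0]).astype(int)
              radii.append(r)
          else:
              radii.append(np.nan)  # no circular structure detected in this frame
      cap.release()

    The resulting radius time series could then be aligned with the audio track to relate inflation to acoustic markers, as in the study.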
  • Burenhult, N. (2004). Landscape terms and toponyms in Jahai: A field report. Lund Working Papers, 51, 17-29.
  • Burenhult, N., Hill, C., Huber, J., Van Putten, S., Rybka, K., & San Roque, L. (2017). Forests: The cross-linguistic perspective. Geographica Helvetica, 72(4), 455-464. doi:10.5194/gh-72-455-2017.

    Abstract

    Do all humans perceive, think, and talk about tree cover ("forests") in more or less the same way? International forestry programs frequently seem to operate on the assumption that they do. However, recent advances in the language sciences show that languages vary greatly as to how the landscape domain is lexicalized and grammaticalized. Different languages segment and label the large-scale environment and its features according to astonishingly different semantic principles, often in tandem with highly culture-specific practices and ideologies. Presumed basic concepts like mountain, valley, and river cannot in fact be straightforwardly translated across languages. In this paper we describe, compare, and evaluate some of the semantic diversity observed in relation to forests. We do so on the basis of first-hand linguistic field data from a global sample of indigenous categorization systems as they are manifested in the following languages: Avatime (Ghana), Duna (Papua New Guinea), Jahai (Malay Peninsula), Lokono (the Guianas), Makalero (East Timor), and Umpila/Kuuku Ya'u (Cape York Peninsula). We show that basic linguistic categories relating to tree cover vary considerably in their principles of semantic encoding across languages, and that forest is a challenging category from the point of view of intercultural translatability. This has consequences for current global policies and programs aimed at standardizing forest definitions and measurements. It calls for greater attention to categorial diversity in designing and implementing such agendas, and for receptiveness to and understanding of local indigenous classification systems in communicating those agendas on the ground.
  • Cabrelli, J., Chaouch-Orozco, A., González Alonso, J., Pereira Soares, S. M., Puig-Mayenco, E., & Rothman, J. (2023). Introduction - Multilingualism: Language, brain, and cognition. In J. Cabrelli, A. Chaouch-Orozco, J. González Alonso, S. M. Pereira Soares, E. Puig-Mayenco, & J. Rothman (Eds.), The Cambridge handbook of third language acquisition (pp. 1-20). Cambridge: Cambridge University Press. doi:10.1017/9781108957823.001.

    Abstract

    This chapter provides an introduction to the handbook. It succinctly overviews the key questions in the field of L3/Ln acquisition and summarizes the scope of all the chapters included. The chapter ends by raising some outstanding questions that the field needs to address.
  • Callaghan, E., Holland, C., & Kessler, K. (2017). Age-Related Changes in the Ability to Switch between Temporal and Spatial Attention. Frontiers in Aging Neuroscience, 9: 28. doi:10.3389/fnagi.2017.00028.

    Abstract

    Background
    Identifying age-related changes in cognition that contribute towards reduced driving performance is important for the development of interventions to improve older adults' driving and prolong the time that they can continue to drive. While driving, one is often required to switch from attending to events changing in time to distributing attention spatially. Although there is extensive research into both spatial attention and temporal attention and how these change with age, the literature on switching between these modalities of attention is limited within any age group.

    Methods
    Age groups (21-30, 40-49, 50-59, 60-69 and 70+ years) were compared on their ability to switch between detecting a target in a rapid serial visual presentation (RSVP) stream and detecting a target in a visual search display. To manipulate the cost of switching, the target in the RSVP stream was either the first item in the stream (Target 1st), towards the end of the stream (Target Mid), or absent from the stream (Distractor Only). Visual search response times and accuracy were recorded. Target 1st trials behaved as no-switch trials, as attending to the remaining stream was not necessary. Target Mid and Distractor Only trials behaved as switch trials, as attending to the stream to the end was required.

    Results
    Visual search response times (RTs) were longer on "Target Mid" and "Distractor Only" trials in comparison to "Target 1st" trials, reflecting switch-costs. Larger switch-costs were found in both the 40-49 and 60-69 years groups in comparison to the 21-30 years group when switching from the Target Mid condition.

    Discussion
    Findings warrant further exploration as to whether there are age-related changes in the ability to switch between these modalities of attention while driving. If older adults display poor performance when switching between temporal and spatial attention while driving, then the development of an intervention to preserve and improve this ability would be beneficial.
  • Carlsson, K., Petersson, K. M., Lundqvist, D., Karlsson, A., Ingvar, M., & Öhman, A. (2004). Fear and the amygdala: manipulation of awareness generates differential cerebral responses to phobic and fear-relevant (but nonfeared) stimuli. Emotion, 4(4), 340-353. doi:10.1037/1528-3542.4.4.340.

    Abstract

    Rapid response to danger holds an evolutionary advantage. In this positron emission tomography study, phobics were exposed to masked visual stimuli with timings that either allowed awareness or not of either phobic, fear-relevant (e.g., spiders to snake phobics), or neutral images. When the timing did not permit awareness, the amygdala responded to both phobic and fear-relevant stimuli. With time for more elaborate processing, phobic stimuli resulted in an addition of an affective processing network to the amygdala activity, whereas no activity was found in response to fear-relevant stimuli. Also, right prefrontal areas appeared deactivated, comparing aware phobic and fear-relevant conditions. Thus, a shift from top-down control to an affectively driven system optimized for speed was observed in phobic relative to fear-relevant aware processing.
  • Carota, F., Kriegeskorte, N., Nili, H., & Pulvermüller, F. (2017). Representational Similarity Mapping of Distributional Semantics in Left Inferior Frontal, Middle Temporal, and Motor Cortex. Cerebral Cortex, 27(1), 294-309. doi:10.1093/cercor/bhw379.

    Abstract

    Language comprehension engages a distributed network of frontotemporal, parietal, and sensorimotor regions, but it is still unclear how the meaning of words and their semantic relationships are represented and processed within these regions, and to what degree lexico-semantic representations differ between regions and semantic types. We used fMRI and representational similarity analysis to relate word-elicited multivoxel patterns to semantic similarity between action and object words. In left inferior frontal (BA 44-45-47), left posterior middle temporal and left precentral cortex, the similarity of brain response patterns reflected semantic similarity among action-related verbs, as well as across lexical classes-between action verbs and tool-related nouns and, to a degree, between action verbs and food nouns, but not between action verbs and animal nouns. Instead, posterior inferior temporal cortex exhibited a reverse response pattern, which reflected the semantic similarity among object-related nouns, but not action-related words. These results show that semantic similarity is encoded by a range of cortical areas, including multimodal association (e.g., anterior inferior frontal, posterior middle temporal) and modality-preferential (premotor) cortex and that the representational geometries in these regions are partly dependent on semantic type, with semantic similarity among action-related words crossing lexical-semantic category boundaries.
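    As a schematic illustration of representational similarity analysis (RSA), a Python sketch with random stand-in data; this is not the study's pipeline, and the array sizes are arbitrary:

      import numpy as np
      from scipy.spatial.distance import pdist
      from scipy.stats import spearmanr

      rng = np.random.default_rng(0)
      patterns = rng.standard_normal((20, 500))  # 20 words x 500 voxels (stand-in fMRI patterns)
      model = rng.standard_normal((20, 50))      # 20 words x 50 semantic features (stand-in model)

      # First-order analysis: representational dissimilarity matrices (condensed form),
      # one minus the Pearson correlation between each pair of word patterns.
      brain_rdm = pdist(patterns, metric="correlation")
      model_rdm = pdist(model, metric="correlation")

      # Second-order analysis: rank-correlate the brain and model RDMs.
      rho, p = spearmanr(brain_rdm, model_rdm)
      print(f"RSA correlation: rho={rho:.3f}, p={p:.3f}")

    In a region-of-interest or searchlight variant, the brain RDM is recomputed per region, which is how region-specific representational geometries like those reported here can be compared.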
  • Carota, F., Nili, H., Kriegeskorte, N., & Pulvermüller, F. (2023). Experientially-grounded and distributional semantic vectors uncover dissociable representations of semantic categories. Language, Cognition and Neuroscience. Advance online publication. doi:10.1080/23273798.2023.2232481.

    Abstract

    Neuronal populations code similar concepts by similar activity patterns across the human brain's semantic networks. However, it is unclear to what extent such meaning-to-symbol mapping reflects distributional statistics, or experiential information grounded in sensorimotor and emotional knowledge. We asked whether integrating distributional and experiential data better distinguished conceptual categories than each method taken separately. We examined the similarity structure of fMRI patterns elicited by visually presented action- and object-related words using representational similarity analysis (RSA). We found that the distributional and experiential/integrative models respectively mapped the high-dimensional semantic space in left inferior frontal, anterior temporal, and in left precentral, posterior inferior/middle temporal cortex. Furthermore, results from model comparisons uncovered category-specific similarity patterns, as both distributional and experiential models matched the similarity patterns for action concepts in left fronto-temporal cortex, whilst the experiential/integrative (but not distributional) models matched the similarity patterns for object concepts in left fusiform and angular gyrus.
  • Carota, F., Schoffelen, J.-M., Oostenveld, R., & Indefrey, P. (2023). Parallel or sequential? Decoding conceptual and phonological/phonetic information from MEG signals during language production. Cognitive Neuropsychology, 40(5-6), 298-317. doi:10.1080/02643294.2023.2283239.

    Abstract

    Speaking requires the temporally coordinated planning of core linguistic information, from conceptual meaning to articulation. Recent neurophysiological results suggested that these operations involve a cascade of neural events with successive onset times, whilst competing evidence suggests early parallel neural activation. To test these hypotheses, we examined the sources of neuromagnetic activity recorded from 34 participants overtly naming 134 images from 4 object categories (animals, tools, foods and clothes). Within each category, word length and phonological neighbourhood density were co-varied to target phonological/phonetic processes. Multivariate pattern analysis (MVPA) searchlights in source space decoded object categories in occipitotemporal and middle temporal cortex, and phonological/phonetic variables in left inferior frontal (BA 44) and motor cortex early on. The findings suggest early activation of multiple variables due to intercorrelated properties and interactivity of processing, thus raising important questions about the representational properties of target words during the preparatory time enabling overt speaking.
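    For readers unfamiliar with MVPA decoding, a minimal cross-validated classification sketch in Python with random stand-in data; the feature counts are arbitrary and this is not the authors' source-space searchlight pipeline:

      import numpy as np
      from sklearn.model_selection import cross_val_score
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import LinearSVC

      rng = np.random.default_rng(0)
      X = rng.standard_normal((134, 40))  # 134 trials x 40 features (e.g., one searchlight)
      y = rng.integers(0, 4, size=134)    # 4 object categories (animals, tools, foods, clothes)

      clf = make_pipeline(StandardScaler(), LinearSVC())
      scores = cross_val_score(clf, X, y, cv=5)  # 5-fold cross-validated accuracy
      print(f"decoding accuracy: {scores.mean():.2f} (chance = 0.25)")

    Above-chance accuracy in a searchlight centred on a given source location is what licenses the claim that the local activity pattern carries category or phonological information at that point in time.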
  • Carrion Castillo, A., Maassen, B., Franke, B., Heister, A., Naber, M., Van der Leij, A., Francks, C., & Fisher, S. E. (2017). Association analysis of dyslexia candidate genes in a Dutch longitudinal sample. European Journal of Human Genetics, 25(4), 452-460. doi:10.1038/ejhg.2016.194.

    Abstract

    Dyslexia is a common specific learning disability with a substantive genetic component. Several candidate genes have been proposed to be implicated in dyslexia susceptibility, such as DYX1C1, ROBO1, KIAA0319, and DCDC2. Associations with variants in these genes have also been reported with a variety of psychometric measures tapping into the underlying processes that might be impaired in dyslexic people. In this study, we first conducted a literature review to select single nucleotide polymorphisms (SNPs) in dyslexia candidate genes that had been repeatedly implicated across studies. We then assessed the SNPs for association in the richly phenotyped longitudinal data set from the Dutch Dyslexia Program. We tested for association with several quantitative traits, including word and nonword reading fluency, rapid naming, phoneme deletion, and nonword repetition. In this, we took advantage of the longitudinal nature of the sample to examine if associations were stable across four educational time-points (from 7 to 12 years). Two SNPs in the KIAA0319 gene were nominally associated with rapid naming, and these associations were stable across different ages. Genetic association analysis with complex cognitive traits can be enriched through the use of longitudinal information on trait development.
  • Casillas, M., & Frank, M. C. (2017). The development of children's ability to track and predict turn structure in conversation. Journal of Memory and Language, 92, 234-253. doi:10.1016/j.jml.2016.06.013.

    Abstract

    Children begin developing turn-taking skills in infancy but take several years to fluidly integrate their growing knowledge of language into their turn-taking behavior. In two eye-tracking experiments, we measured children’s anticipatory gaze to upcoming responders while controlling linguistic cues to turn structure. In Experiment 1, we showed English and non-English conversations to English-speaking adults and children. In Experiment 2, we phonetically controlled lexicosyntactic and prosodic cues in English-only speech. Children spontaneously made anticipatory gaze switches by age two and continued improving through age six. In both experiments, children and adults made more anticipatory switches after hearing questions. Consistent with prior findings on adult turn prediction, prosodic information alone did not increase children’s anticipatory gaze shifts. But, unlike prior work with adults, lexical information alone was not sufficient either—children’s performance was best overall with lexicosyntax and prosody together. Our findings support an account in which turn tracking and turn prediction emerge in infancy and then gradually become integrated with children’s online linguistic processing.
  • Casillas, M., Foushee, R., Méndez Girón, J., Polian, G., & Brown, P. (2024). Little evidence for a noun bias in Tseltal spontaneous speech. First Language. Advance online publication. doi:10.1177/01427237231216571.

    Abstract

    This study examines whether children acquiring Tseltal (Mayan) demonstrate a noun bias – an overrepresentation of nouns in their early vocabularies. Nouns, specifically concrete and animate nouns, are argued to universally predominate in children’s early vocabularies because their referents are naturally available as bounded concepts to which linguistic labels can be mapped. This early advantage for noun learning has been documented using multiple methods and across a diverse collection of language populations. However, past evidence bearing on a noun bias in Tseltal learners has been mixed. Tseltal grammatical features and child–caregiver interactional patterns dampen the salience of nouns and heighten the salience of verbs, leading to the prediction of a diminished noun bias and perhaps even an early predominance of verbs. We here analyze the use of noun and verb stems in children’s spontaneous speech from egocentric daylong recordings of 29 Tseltal learners between 0;9 and 4;4. We find weak to no evidence for a noun bias using two separate analytical approaches on the same data; one analysis yields a preliminary suggestion of a flipped outcome (i.e. a verb bias). We discuss the implications of these findings for broader theories of learning bias in early lexical development.
  • Castro-Caldas, A., Petersson, K. M., Reis, A., Stone-Elander, S., & Ingvar, M. (1998). The illiterate brain: Learning to read and write during childhood influences the functional organization of the adult brain. Brain, 121, 1053-1063. doi:10.1093/brain/121.6.1053.

    Abstract

    Learning a specific skill during childhood may partly determine the functional organization of the adult brain. This hypothesis led us to study oral language processing in illiterate subjects who, for social reasons, had never entered school and had no knowledge of reading or writing. In a brain activation study using PET and statistical parametric mapping, we compared word and pseudoword repetition in literate and illiterate subjects. Our study confirms behavioural evidence of different phonological processing in illiterate subjects. During repetition of real words, the two groups performed similarly and activated similar areas of the brain. In contrast, illiterate subjects had more difficulty repeating pseudowords correctly and did not activate the same neural structures as literates. These results are consistent with the hypothesis that learning the written form of language (orthography) interacts with the function of oral language. Our results indicate that learning to read and write during childhood influences the functional organization of the adult human brain.
  • Catani, M., Robertsson, N., Beyh, A., Huynh, V., de Santiago Requejo, F., Howells, H., Barrett, R. L., Aiello, M., Cavaliere, C., Dyrby, T. B., Krug, K., Ptito, M., D'Arceuil, H., Forkel, S. J., & Dell'Acqua, F. (2017). Short parietal lobe connections of the human and monkey brain. Cortex, 97, 339-357. doi:10.1016/j.cortex.2017.10.022.

    Abstract

    The parietal lobe has a unique place in the human brain. Anatomically, it is at the crossroad between the frontal, occipital, and temporal lobes, thus providing a middle ground for multimodal sensory integration. Functionally, it supports higher cognitive functions that are characteristic of the human species, such as mathematical cognition, semantic and pragmatic aspects of language, and abstract thinking. Despite its importance, a comprehensive comparison of human and simian intraparietal networks is missing.

    In this study, we used diffusion imaging tractography to reconstruct the major intralobar parietal tracts in twenty-one datasets acquired in vivo from healthy human subjects and eleven ex vivo datasets from five vervet and six macaque monkeys. Three regions of interest (postcentral gyrus, superior parietal lobule and inferior parietal lobule) were used to identify the tracts. Surface projections were reconstructed for both species and results compared to identify similarities or differences in tract anatomy (i.e., trajectories and cortical projections). In addition, post-mortem dissections were performed in a human brain.

    The largest tract identified in both human and monkey brains is a vertical pathway between the superior and inferior parietal lobules. This tract can be divided into an anterior (supramarginal gyrus) and a posterior (angular gyrus) component in both humans and monkey brains. The second prominent intraparietal tract connects the postcentral gyrus to both supramarginal and angular gyri of the inferior parietal lobule in humans but only to the supramarginal gyrus in the monkey brain. The third tract connects the postcentral gyrus to the anterior region of the superior parietal lobule and is more prominent in monkeys compared to humans. Finally, short U-shaped fibres in the medial and lateral aspects of the parietal lobe were identified in both species. A tract connecting the medial parietal cortex to the lateral inferior parietal cortex was observed in the monkey brain only.

    Our findings suggest a consistent pattern of intralobar parietal connections between humans and monkeys with some differences for those areas that have cytoarchitectonically distinct features in humans. The overall pattern of intraparietal connectivity supports the special role of the inferior parietal lobule in cognitive functions characteristic of humans.
  • Çetinçelik, M., Rowland, C. F., & Snijders, T. M. (2023). Ten-month-old infants’ neural tracking of naturalistic speech is not facilitated by the speaker’s eye gaze. Developmental Cognitive Neuroscience, 64: 101297. doi:10.1016/j.dcn.2023.101297.

    Abstract

    Eye gaze is a powerful ostensive cue in infant-caregiver interactions, with demonstrable effects on language acquisition. While the link between gaze following and later vocabulary is well-established, the effects of eye gaze on other aspects of language, such as speech processing, are less clear. In this EEG study, we examined the effects of the speaker’s eye gaze on ten-month-old infants’ neural tracking of naturalistic audiovisual speech, a marker for successful speech processing. Infants watched videos of a speaker telling stories, addressing the infant with direct or averted eye gaze. We assessed infants’ speech-brain coherence at stress (1–1.75 Hz) and syllable (2.5–3.5 Hz) rates, tested for differences in attention by comparing looking times and EEG theta power in the two conditions, and investigated whether neural tracking predicts later vocabulary. Our results showed that infants’ brains tracked the speech rhythm both at the stress and syllable rates, and that infants’ neural tracking at the syllable rate predicted later vocabulary. However, speech-brain coherence did not significantly differ between direct and averted gaze conditions and infants did not show greater attention to direct gaze. Overall, our results suggest significant neural tracking at ten months, related to vocabulary development, but not modulated by speaker’s gaze.
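    A minimal sketch of how speech-brain coherence in such frequency bands can be computed, assuming scipy and random stand-in signals; this is not the study's analysis code:

      import numpy as np
      from scipy.signal import coherence, hilbert

      fs = 500  # Hz; hypothetical common sampling rate for EEG and speech
      rng = np.random.default_rng(0)
      speech = rng.standard_normal(60 * fs)  # stand-in for 60 s of audio
      eeg = rng.standard_normal(60 * fs)     # stand-in for one EEG channel

      # Broadband amplitude envelope of the speech signal.
      envelope = np.abs(hilbert(speech))

      # Magnitude-squared coherence between envelope and EEG (0.25 Hz resolution).
      f, cxy = coherence(envelope, eeg, fs=fs, nperseg=4 * fs)

      # Average coherence within the stress and syllable bands used in the study.
      stress = cxy[(f >= 1.0) & (f <= 1.75)].mean()
      syllable = cxy[(f >= 2.5) & (f <= 3.5)].mean()
      print(f"stress-rate SBC: {stress:.3f}, syllable-rate SBC: {syllable:.3f}")

    Significance is then typically assessed against surrogate data, for example coherence computed after pairing the EEG with mismatched stretches of speech.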

    Additional information

    supplementary material
  • Çetinçelik, M., Rowland, C. F., & Snijders, T. M. (2024). Does the speaker’s eye gaze facilitate infants’ word segmentation from continuous speech? An ERP study. Developmental Science, 27(2): e13436. doi:10.1111/desc.13436.

    Abstract

    The environment in which infants learn language is multimodal and rich with social cues. Yet, the effects of such cues, such as eye contact, on early speech perception have not been closely examined. This study assessed the role of ostensive speech, signalled through the speaker's eye gaze direction, on infants’ word segmentation abilities. A familiarisation-then-test paradigm was used while electroencephalography (EEG) was recorded. Ten-month-old Dutch-learning infants were familiarised with audio-visual stories in which a speaker recited four sentences with one repeated target word. The speaker addressed them either with direct or with averted gaze while speaking. In the test phase following each story, infants heard familiar and novel words presented via audio-only. Infants’ familiarity with the words was assessed using event-related potentials (ERPs). As predicted, infants showed a negative-going ERP familiarity effect to the isolated familiarised words relative to the novel words over the left-frontal region of interest during the test phase. While the word familiarity effect did not differ as a function of the speaker's gaze over the left-frontal region of interest, there was also a (not predicted) positive-going early ERP familiarity effect over right fronto-central and central electrodes in the direct gaze condition only. This study provides electrophysiological evidence that infants can segment words from audio-visual speech, regardless of the ostensiveness of the speaker's communication. However, the speaker's gaze direction seems to influence the processing of familiar words.
  • Çetinçelik, M., Jordan‐Barros, A., Rowland, C. F., & Snijders, T. M. (2024). The effect of visual speech cues on neural tracking of speech in 10‐month‐old infants. European Journal of Neuroscience. Advance online publication. doi:10.1111/ejn.16492.

    Abstract

    While infants' sensitivity to visual speech cues and the benefit of these cues have been well-established by behavioural studies, there is little evidence on the effect of visual speech cues on infants' neural processing of continuous auditory speech. In this study, we investigated whether visual speech cues, such as the movements of the lips, jaw, and larynx, facilitate infants' neural speech tracking. Ten-month-old Dutch-learning infants watched videos of a speaker reciting passages in infant-directed speech while electroencephalography (EEG) was recorded. In the videos, either the full face of the speaker was displayed or the speaker's mouth and jaw were masked with a block, obstructing the visual speech cues. To assess neural tracking, speech-brain coherence (SBC) was calculated, focusing particularly on the stress and syllabic rates (1–1.75 and 2.5–3.5 Hz respectively in our stimuli). First, overall, SBC was compared to surrogate data, and then, differences in SBC in the two conditions were tested at the frequencies of interest. Our results indicated that infants show significant tracking at both stress and syllabic rates. However, no differences were identified between the two conditions, meaning that infants' neural tracking was not modulated further by the presence of visual speech cues. Furthermore, we demonstrated that infants' neural tracking of low-frequency information is related to their subsequent vocabulary development at 18 months. Overall, this study provides evidence that infants' neural tracking of speech is not necessarily impaired when visual speech cues are not fully visible and that neural tracking may be a potential mechanism in successful language acquisition.

    Additional information

    supplementary materials
  • Chalfoun, A., Rossi, G., & Stivers, T. (2024). The magic word? Face-work and the functions of 'please' in everyday requests. Social Psychology Quarterly. Advance online publication. doi:10.1177/01902725241245141.

    Abstract

    Expressions of politeness such as 'please' are prominent elements of interactional conduct that are explicitly targeted in early socialization and are subject to cultural expectations around socially desirable behavior. Yet their specific interactional functions remain poorly understood. Using conversation analysis supplemented with systematic coding, this study investigates when and where interactants use 'please' in everyday requests. We find that 'please' is rare, occurring in only 7 percent of request attempts. Interactants use 'please' to manage face-threats when a request is ill fitted to its immediate interactional context. Within this, we identify two environments in which 'please' prototypically occurs. First, 'please' is used when the requestee has demonstrated unwillingness to comply. Second, 'please' is used when the request is intrusive due to its incompatibility with the requestee’s engagement in a competing action trajectory. Our findings advance research on politeness and extend Goffman’s theory of face-work, with particular salience for scholarship on request behavior.
  • Chang, F., Tatsumi, T., Hiranuma, Y., & Bannard, C. (2023). Visual heuristics for verb production: Testing a deep‐learning model with experiments in Japanese. Cognitive Science, 47(8): e13324. doi:10.1111/cogs.13324.

    Abstract

    Tense/aspect morphology on verbs is often thought to depend on event features like telicity, but it is not known how speakers identify these features in visual scenes. To examine this question, we asked Japanese speakers to describe computer-generated animations of simple actions with variation in visual features related to telicity. Experiments with adults and children found that they could use goal information in the animations to select appropriate past and progressive verb forms. They also produced a large number of different verb forms. To explain these findings, a deep-learning model of verb production from visual input was created that could produce a human-like distribution of verb forms. It was able to use visual cues to select appropriate tense/aspect morphology. The model predicted that video duration would be related to verb complexity, and past tense production would increase when it received the endpoint as input. These predictions were confirmed in a third study with Japanese adults. This work suggests that verb production could be tightly linked to visual heuristics that support the understanding of events.
  • Chen, A., Gussenhoven, C., & Rietveld, T. (2004). Language specificity in perception of paralinguistic intonational meaning. Language and Speech, 47(4), 311-349.

    Abstract

    This study examines the perception of paralinguistic intonational meanings deriving from Ohala’s Frequency Code (Experiment 1) and Gussenhoven’s Effort Code (Experiment 2) in British English and Dutch. Native speakers of British English and Dutch listened to a number of stimuli in their native language and judged each stimulus on four semantic scales deriving from these two codes: SELF-CONFIDENT versus NOT SELF-CONFIDENT, FRIENDLY versus NOT FRIENDLY (Frequency Code); SURPRISED versus NOT SURPRISED, and EMPHATIC versus NOT EMPHATIC (Effort Code). The stimuli, which were lexically equivalent across the two languages, differed in pitch contour, pitch register and pitch span in Experiment 1, and in pitch register, peak height, peak alignment and end pitch in Experiment 2. Contrary to the traditional view that the paralinguistic usage of intonation is similar across languages, it was found that British English and Dutch listeners differed considerably in the perception of “confident,” “friendly,” “emphatic,” and “surprised.” The present findings support a theory of paralinguistic meaning based on the universality of biological codes, which however acknowledges a language-specific component in the implementation of these codes.
  • Chen, X. S., Reader, R. H., Hoischen, A., Veltman, J. A., Simpson, N. H., Francks, C., Newbury, D. F., & Fisher, S. E. (2017). Next-generation DNA sequencing identifies novel gene variants and pathways involved in specific language impairment. Scientific Reports, 7: 46105. doi:10.1038/srep46105.

    Abstract

    A significant proportion of children have unexplained problems acquiring proficient linguistic skills despite adequate intelligence and opportunity. Developmental language disorders are highly heritable with substantial societal impact. Molecular studies have begun to identify candidate loci, but much of the underlying genetic architecture remains undetermined. We performed whole-exome sequencing of 43 unrelated probands affected by severe specific language impairment, followed by independent validations with Sanger sequencing, and analyses of segregation patterns in parents and siblings, to shed new light on aetiology. By first focusing on a pre-defined set of known candidates from the literature, we identified potentially pathogenic variants in genes already implicated in diverse language-related syndromes, including ERC1, GRIN2A, and SRPX2. Complementary analyses suggested novel putative candidates carrying validated variants which were predicted to have functional effects, such as OXR1, SCN9A and KMT2D. We also searched for potential “multiple-hit” cases; one proband carried a rare AUTS2 variant in combination with a rare inherited haplotype affecting STARD9, while another carried a novel nonsynonymous variant in SEMA6D together with a rare stop-gain in SYNPR. On broadening scope to all rare and novel variants throughout the exomes, we identified biological themes that were enriched for such variants, including microtubule transport and cytoskeletal regulation.
  • Chen, A., Çetinçelik, M., Roncaglia-Denissen, M. P., & Sadakata, M. (2023). Native language, L2 experience, and pitch processing in music. Linguistic Approaches to Bilingualism, 13(2), 218-237. doi:10.1075/lab.20030.che.

    Abstract

    The current study investigated how the role of pitch in one’s native language and L2 experience influenced musical melodic processing by testing Turkish and Mandarin Chinese advanced and beginning learners of English as an L2. Pitch has a lower functional load and shows a simpler pattern in Turkish than in Chinese, as the former contrasts only the presence and absence of pitch elevation, while the latter lexically contrasts four different pitch contours. Using the Musical Ear Test as the tool, we found that the Chinese listeners outperformed the Turkish listeners, and the advanced L2 learners outperformed the beginning learners. The Turkish listeners were further tested on their discrimination of bisyllabic Chinese lexical tones, and again an L2 advantage was observed. No significant difference was found for working memory between the beginning and advanced L2 learners. These results suggest that richness of the tonal inventory of the native language is essential for triggering a music processing advantage, and on top of the tone language advantage, the L2 experience yields a further enhancement. Yet, unlike the tone language advantage that seems to relate to pitch expertise, learning an L2 seems to improve sound discrimination in general, and such improvement exhibits itself in non-native lexical tone discrimination.
  • Cho, T. (2004). Prosodically conditioned strengthening and vowel-to-vowel coarticulation in English. Journal of Phonetics, 32(2), 141-176. doi:10.1016/S0095-4470(03)00043-3.

    Abstract

    The goal of this study is to examine how the degree of vowel-to-vowel coarticulation varies as a function of prosodic factors such as nuclear-pitch accent (accented vs. unaccented), level of prosodic boundary (Prosodic Word vs. Intermediate Phrase vs. Intonational Phrase), and position-in-prosodic-domain (initial vs. final). It is hypothesized that vowels in prosodically stronger locations (e.g., in accented syllables and at a higher prosodic boundary) are not only coarticulated less with their neighboring vowels, but they also exert a stronger influence on their neighbors. Measurements of tongue position for English /a i/ over time were obtained with Carstens electromagnetic articulography. Results showed that vowels in prosodically stronger locations are coarticulated less with neighboring vowels, but do not exert a stronger influence on the articulation of neighboring vowels. An examination of the relationship between coarticulation and duration revealed that (a) accent-induced coarticulatory variation cannot be attributed to a duration factor and (b) some of the data with respect to boundary effects may be accounted for by the duration factor. This suggests that to the extent that prosodically conditioned coarticulatory variation is duration-independent, there is no absolute causal relationship from duration to coarticulation. It is proposed that prosodically conditioned V-to-V coarticulatory reduction is another type of strengthening that occurs in prosodically strong locations. The prosodically driven coarticulatory patterning is taken to be part of the phonetic signatures of the hierarchically nested structure of prosody.
  • Cho, S.-J., Brown-Schmidt, S., Clough, S., & Duff, M. C. (2024). Comparing Functional Trend and Learning among Groups in Intensive Binary Longitudinal Eye-Tracking Data using By-Variable Smooth Functions of GAMM. Psychometrika. Advance online publication. doi:10.1007/s11336-024-09986-1.

    Abstract

    This paper presents a model specification for group comparisons regarding a functional trend over time within a trial and learning across a series of trials in intensive binary longitudinal eye-tracking data. The functional trend and learning effects are modeled using by-variable smooth functions. This model specification is formulated as a generalized additive mixed model, which allowed for the use of the freely available mgcv package in R (Wood, 2023; https://cran.r-project.org/web/packages/mgcv/mgcv.pdf). The model specification was applied to intensive binary longitudinal eye-tracking data, where the questions of interest concern differences between individuals with and without brain injury in their real-time language comprehension and how this affects their learning over time. The results of the simulation study show that the model parameters are recovered well and that the by-variable smooth functions are adequately predicted under the same conditions as those found in the application.
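    Schematically, and under assumptions about notation (this is a generic sketch, not the paper's exact specification), such a model for a binary fixation outcome y_ijt of participant i from group g(i) on trial j at within-trial time t could be written as

      \operatorname{logit}\Pr(y_{ijt} = 1) = \beta_0 + s_{g(i)}(t) + f_{g(i)}(j) + b_i, \qquad b_i \sim \mathcal{N}(0, \sigma_b^2),

    where s_g is a by-variable smooth capturing the group-specific functional trend over time within a trial, f_g is a by-variable smooth capturing group-specific learning across trials, and b_i is a participant random intercept; group comparisons are then read off the estimated differences between the group-specific smooths.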
  • Choi, J., Cutler, A., & Broersma, M. (2017). Early development of abstract language knowledge: Evidence from perception-production transfer of birth-language memory. Royal Society Open Science, 4: 160660. doi:10.1098/rsos.160660.

    Abstract

    Children adopted early in life into another linguistic community typically forget their birth language but retain, unaware, relevant linguistic knowledge that may facilitate (re)learning of birth-language patterns. Understanding the nature of this knowledge can shed light on how language is acquired. Here, international adoptees from Korea with Dutch as their current language, and matched Dutch-native controls, provided speech production data on a Korean consonantal distinction unlike any Dutch distinctions, at the outset and end of an intensive perceptual training. The productions, elicited in a repetition task, were identified and rated by Korean listeners. Adoptees' production scores improved significantly more across the training period than control participants' scores, and, for adoptees only, relative production success correlated significantly with the rate of learning in perception (which had, as predicted, also surpassed that of the controls). Of the adoptee group, half had been adopted at 17 months or older (when talking would have begun), while half had been prelinguistic (under six months). The former group, with production experience, showed no advantage over the group without. Thus the adoptees' retained knowledge of Korean transferred from perception to production and appears to be abstract in nature rather than dependent on the amount of experience.
  • Choi, J., Broersma, M., & Cutler, A. (2017). Early phonology revealed by international adoptees' birth language retention. Proceedings of the National Academy of Sciences of the United States of America, 114(28), 7307-7312. doi:10.1073/pnas.1706405114.

    Abstract

    Until at least 6 mo of age, infants show good discrimination for familiar phonetic contrasts (i.e., those heard in the environmental language) and contrasts that are unfamiliar. Adult-like discrimination (significantly worse for nonnative than for native contrasts) appears only later, by 9–10 mo. This has been interpreted as indicating that infants have no knowledge of phonology until vocabulary development begins, after 6 mo of age. Recently, however, word recognition has been observed before age 6 mo, apparently decoupling the vocabulary and phonology acquisition processes. Here we show that phonological acquisition is also in progress before 6 mo of age. The evidence comes from retention of birth-language knowledge in international adoptees. In the largest ever such study, we recruited 29 adult Dutch speakers who had been adopted from Korea when young and had no conscious knowledge of Korean language at all. Half were adopted at age 3–5 mo (before native-specific discrimination develops) and half at 17 mo or older (after word learning has begun). In a short intensive training program, we observe that adoptees (compared with 29 matched controls) more rapidly learn tripartite Korean consonant distinctions without counterparts in their later-acquired Dutch, suggesting that the adoptees retained phonological knowledge about the Korean distinction. The advantage is equivalent for the younger-adopted and the older-adopted groups, and both groups not only acquire the tripartite distinction for the trained consonants but also generalize it to untrained consonants. Although infants younger than 6 mo can still discriminate unfamiliar phonetic distinctions, this finding indicates that native-language phonological knowledge is nonetheless being acquired at that age.
  • Choi, S., & Bowerman, M. (1991). Learning to express motion events in English and Korean: The influence of language-specific lexicalization patterns. Cognition, 41, 83-121. doi:10.1016/0010-0277(91)90033-Z.

    Abstract

    English and Korean differ in how they lexicalize the components of motion events. English characteristically conflates Motion with Manner, Cause, or Deixis, and expresses Path separately. Korean, in contrast, conflates Motion with Path and elements of Figure and Ground in transitive clauses for caused Motion, but conflates Motion with Deixis and spells out Path and Manner separately in intransitive clauses for spontaneous motion. Children learning English and Korean show sensitivity to language-specific patterns in the way they talk about motion from as early as 17–20 months. For example, learners of English quickly generalize their earliest spatial words — Path particles like up, down, and in — to both spontaneous and caused changes of location and, for up and down, to posture changes, while learners of Korean keep words for spontaneous and caused motion strictly separate and use different words for vertical changes of location and posture changes. These findings challenge the widespread view that children initially map spatial words directly to nonlinguistic spatial concepts, and suggest that they are influenced by the semantic organization of their language virtually from the beginning. We discuss how input and cognition may interact in the early phases of learning to talk about space.
  • Cholin, J., Schiller, N. O., & Levelt, W. J. M. (2004). The preparation of syllables in speech production. Journal of Memory and Language, 50(1), 47-61. doi:10.1016/j.jml.2003.08.003.

    Abstract

    Models of speech production assume that syllables play a functional role in the process of word-form encoding in speech production. In this study, we investigate this claim and specifically provide evidence about the level at which syllables come into play. We report two studies using an odd-man-out variant of the implicit priming paradigm to examine the role of the syllable during the process of word formation. Our results show that this modified version of the implicit priming paradigm can trace the emergence of syllabic structure during spoken word generation. Comparing these results to prior syllable priming studies, we conclude that syllables emerge at the interface between phonological and phonetic encoding. The results are discussed in terms of the WEAVER++ model of lexical access.
  • Chwilla, D., Hagoort, P., & Brown, C. M. (1998). The mechanism underlying backward priming in a lexical decision task: Spreading activation versus semantic matching. Quarterly Journal of Experimental Psychology, 51A(3), 531-560. doi:10.1080/713755773.

    Abstract

    Koriat (1981) demonstrated that an association from the target to a preceding prime, in the absence of an association from the prime to the target, facilitates lexical decision and referred to this effect as "backward priming". Backward priming is of relevance, because it can provide information about the mechanism underlying semantic priming effects. Following Neely (1991), we distinguish three mechanisms of priming: spreading activation, expectancy, and semantic matching/integration. The goal was to determine which of these mechanisms causes backward priming, by assessing effects of backward priming on a language-relevant ERP component, the N400, and reaction time (RT). Based on previous work, we propose that the N400 priming effect reflects expectancy and semantic matching/integration, but in contrast with RT does not reflect spreading activation. Experiment 1 shows a backward priming effect that is qualitatively similar for the N400 and RT in a lexical decision task. This effect was not modulated by an ISI manipulation. Experiment 2 clarifies that the N400 backward priming effect reflects genuine changes in N400 amplitude and cannot be ascribed to other factors. We will argue that these backward priming effects cannot be due to expectancy but are best accounted for in terms of semantic matching/integration.
  • Ciulkinyte, A., Mountford, H. S., Fontanillas, P., 23andMe Research Team, Bates, T. C., Martin, N. G., Fisher, S. E., & Luciano, M. (2024). Genetic neurodevelopmental clustering and dyslexia. Molecular Psychiatry. Advance online publication. doi:10.1038/s41380-024-02649-8.

    Abstract

    Dyslexia is a learning difficulty with neurodevelopmental origins, manifesting as reduced accuracy and speed in reading and spelling. It is substantially heritable and frequently co-occurs with other neurodevelopmental conditions, particularly attention deficit-hyperactivity disorder (ADHD). Here, we investigate the genetic structure underlying dyslexia and a range of psychiatric traits using results from genome-wide association studies of dyslexia, ADHD, autism, anorexia nervosa, anxiety, bipolar disorder, major depressive disorder, obsessive compulsive disorder, schizophrenia, and Tourette syndrome. Genomic Structural Equation Modelling (GenomicSEM) showed heightened support for a model consisting of five correlated latent genomic factors described as: F1) compulsive disorders (including obsessive-compulsive disorder, anorexia nervosa, Tourette syndrome), F2) psychotic disorders (including bipolar disorder, schizophrenia), F3) internalising disorders (including anxiety disorder, major depressive disorder), F4) neurodevelopmental traits (including autism, ADHD), and F5) attention and learning difficulties (including ADHD, dyslexia). ADHD loaded more strongly on the attention and learning difficulties latent factor (F5) than on the neurodevelopmental traits latent factor (F4). The attention and learning difficulties latent factor (F5) was positively correlated with the internalising disorders (.40), neurodevelopmental traits (.25) and psychotic disorders (.17) latent factors, and negatively correlated with the compulsive disorders (–.16) latent factor. These factor correlations are mirrored in the genetic correlations observed between the attention and learning difficulties latent factor and other cognitive, psychological and wellbeing traits. We further investigated genetic variants underlying both dyslexia and ADHD, which implicated 49 loci (40 not previously found in GWAS of the individual traits) mapping to 174 genes (121 not found in GWAS of individual traits) as potential pleiotropic variants. Our study confirms the closer genetic relationship of dyslexia to ADHD than to other psychiatric traits and uncovers novel pleiotropic variants affecting both traits. In future, analyses including additional co-occurring traits such as dyscalculia and dyspraxia will allow a clearer definition of the attention and learning difficulties latent factor, yielding further insights into factor structure and pleiotropic effects.
  • Claus, A. (2004). Access management system. Language Archive Newsletter, 1(2), 5.
  • Clough, S., Morrow, E., Mutlu, B., Turkstra, L., & Duff, M. C. (2023). Emotion recognition of faces and emoji in individuals with moderate-severe traumatic brain injury. Brain Injury, 37(7), 596-610. doi:10.1080/02699052.2023.2181401.

    Abstract

    Background. Facial emotion recognition deficits are common after moderate-severe traumatic brain injury (TBI) and linked to poor social outcomes. We examine whether emotion recognition deficits extend to facial expressions depicted by emoji.
    Methods. Fifty-one individuals with moderate-severe TBI (25 female) and fifty-one neurotypical peers (26 female) viewed photos of human faces and emoji. Participants selected the best-fitting label from a set of basic emotions (anger, disgust, fear, sadness, neutral, surprise, happy) or social emotions (embarrassed, remorseful, anxious, neutral, flirting, confident, proud).
    Results. We analyzed the likelihood of correctly labeling an emotion by group (neurotypical, TBI), stimulus condition (basic faces, basic emoji, social emoji), sex (female, male), and their interactions. Participants with TBI did not significantly differ from neurotypical peers in overall emotion labeling accuracy. Both groups had poorer labeling accuracy for emoji compared to faces. Participants with TBI (but not neurotypical peers) had poorer accuracy for labeling social emotions depicted by emoji compared to basic emotions depicted by emoji. There were no effects of participant sex.
    Discussion. Because emotion representation is more ambiguous in emoji than in human faces, studying emoji use and perception in TBI is an important consideration for understanding functional communication and social participation after brain injury.
  • Clough, S., Padilla, V.-G., Brown-Schmidt, S., & Duff, M. C. (2023). Intact speech-gesture integration in narrative recall by adults with moderate-severe traumatic brain injury. Neuropsychologia, 189: 108665. doi:10.1016/j.neuropsychologia.2023.108665.

    Abstract

    Purpose

    Real-world communication is situated in rich multimodal contexts, containing speech and gesture. Speakers often convey unique information in gesture that is not present in the speech signal (e.g., saying “He searched for a new recipe” while making a typing gesture). We examine the narrative retellings of participants with and without moderate-severe traumatic brain injury across three timepoints over two online Zoom sessions to investigate whether people with TBI can integrate information from co-occurring speech and gesture and if information from gesture persists across delays.

    Methods

    Sixty participants with TBI and sixty non-injured peers watched videos of a narrator telling four short stories. On key details, the narrator produced complementary gestures that conveyed unique information. Participants retold the stories at three timepoints: immediately after, 20 minutes later, and one week later. We examined the words participants used when retelling these key details, coding them as a Speech Match (e.g., “He searched for a new recipe”), a Gesture Match (e.g., “He searched for a new recipe online”), or Other (e.g., “He looked for a new recipe”). We also examined whether participants produced representative gestures themselves when retelling these details.

    Results

    Despite recalling fewer story details, participants with TBI were as likely as non-injured peers to report information from gesture in their narrative retellings. All participants were more likely to report information from gesture and to produce representative gestures themselves one week later compared with immediately after hearing the story.

    Conclusion

    We demonstrated that speech-gesture integration is intact after TBI in narrative retellings. This finding has exciting implications for the utility of gesture to support comprehension and memory after TBI and expands our understanding of naturalistic multimodal language processing in this population.
  • Clough, S., Tanguay, A. F. N., Mutlu, B., Turkstra, L., & Duff, M. C. (2023). How do individuals with and without traumatic brain injury interpret emoji? Similarities and differences in perceived valence, arousal, and emotion representation. Journal of Nonverbal Behavior, 47, 489-511. doi:10.1007/s10919-023-00433-w.

    Abstract

    Impaired facial affect recognition is common after traumatic brain injury (TBI) and linked to poor social outcomes. We explored whether perception of emotions depicted by emoji is also impaired after TBI. Fifty participants with TBI and 50 non-injured peers generated free-text labels to describe emotions depicted by emoji and rated their levels of valence and arousal on nine-point rating scales. We compared how the two groups’ valence and arousal ratings were clustered and examined agreement in the words participants used to describe emoji. Hierarchical clustering of affect ratings produced four emoji clusters in the non-injured group and three emoji clusters in the TBI group. Whereas the non-injured group had a strongly positive and a moderately positive cluster, the TBI group had a single positive valence cluster, undifferentiated by arousal. Despite differences in cluster numbers, hierarchical structures of the two groups’ emoji ratings were significantly correlated. Most emoji had high agreement in the words participants with and without TBI used to describe them. Participants with TBI perceived emoji similarly to non-injured peers, used similar words to describe emoji, and rated emoji similarly on the valence dimension. Individuals with TBI showed small differences in perceived arousal for a minority of emoji. Overall, results suggest that basic recognition processes do not explain challenges in computer-mediated communication reported by adults with TBI. Examining perception of emoji in context by people with TBI is an essential next step for advancing our understanding of functional communication in computer-mediated contexts after brain injury.

    Additional information

    supplementary information
  • Coco, M. I., Araujo, S., & Petersson, K. M. (2017). Disentangling stimulus plausibility and contextual congruency: Electro-physiological evidence for differential cognitive dynamics. Neuropsychologia, 96, 150-163. doi:10.1016/j.neuropsychologia.2016.12.008.

    Abstract

    Expectancy mechanisms are routinely used by the cognitive system in stimulus processing and in anticipation of appropriate responses. Electrophysiology research has documented negative shifts of brain activity when expectancies are violated within a local stimulus context (e.g., reading an implausible word in a sentence) or more globally between consecutive stimuli (e.g., a narrative of images with an incongruent end). In this EEG study, we examine the interaction between expectancies operating at the level of stimulus plausibility and at the more global level of contextual congruency to provide evidence for, or against, a dissociation of the underlying processing mechanisms. We asked participants to verify the congruency of pairs of cross-modal stimuli (a sentence and a scene), which varied in plausibility. ANOVAs on ERP amplitudes in selected windows of interest show that congruency violation has longer-lasting (from 100 to 500 ms) and more widespread effects than plausibility violation (from 200 to 400 ms). We also observed critical interactions between these factors, whereby incongruent and implausible pairs elicited stronger negative shifts than their congruent counterparts, both early on (100–200 ms) and between 400 and 500 ms. Our results suggest that the integration mechanisms are sensitive to both global and local effects of expectancy in a modality-independent manner. Overall, we provide novel insights into the interdependence of expectancy during meaning integration of cross-modal stimuli in a verification task.
  • Collins, J. (2017). Real and spurious correlations involving tonal languages. In N. J. Enfield (Ed.), Dependencies in language: On the causal ontology of linguistic systems (pp. 129-139). Berlin: Language Science Press.
  • Connine, C. M., Clifton, Jr., C., & Cutler, A. (1987). Effects of lexical stress on phonetic categorization. Phonetica, 44, 133-146.
  • Coopmans, C. W., Struiksma, M. E., Coopmans, P. H. A., & Chen, A. (2023). Processing of grammatical agreement in the face of variation in lexical stress: A mismatch negativity study. Language and Speech, 66(1), 202-213. doi:10.1177/00238309221098116.

    Abstract

    Previous electroencephalography studies have yielded evidence for automatic processing of syntax and lexical stress. However, these studies looked at both effects in isolation, limiting their generalizability to everyday language comprehension. In the current study, we investigated automatic processing of grammatical agreement in the face of variation in lexical stress. Using an oddball paradigm, we measured the Mismatch Negativity (MMN) in Dutch-speaking participants while they listened to Dutch subject–verb sequences (linguistic context) or acoustically similar sequences in which the subject was replaced by filtered noise (nonlinguistic context). The verb forms differed in the inflectional suffix, rendering the subject–verb sequences grammatically correct or incorrect, and leading to a difference in the stress pattern of the verb forms. We found that the MMNs were modulated in both the linguistic and nonlinguistic conditions, suggesting that the processing load induced by variation in lexical stress can hinder early automatic processing of grammatical agreement. However, as the morphological differences between the verb forms correlated with differences in the number of syllables, an interpretation in terms of the prosodic structure of the sequences cannot be ruled out. Future research is needed to determine which of these factors (i.e., lexical stress, syllabic structure) most strongly modulates early syntactic processing.

    Additional information

    supplementary material
  • Coopmans, C. W., Mai, A., Slaats, S., Weissbart, H., & Martin, A. E. (2023). What oscillations can do for syntax depends on your theory of structure building. Nature Reviews Neuroscience, 24, 723. doi:10.1038/s41583-023-00734-5.
  • Coopmans, C. W., Kaushik, K., & Martin, A. E. (2023). Hierarchical structure in language and action: A formal comparison. Psychological Review, 130(4), 935-952. doi:10.1037/rev0000429.

    Abstract

    Since the cognitive revolution, language and action have been compared as cognitive systems, with cross-domain convergent views recently gaining renewed interest in biology, neuroscience, and cognitive science. Language and action are both combinatorial systems whose mode of combination has been argued to be hierarchical, combining elements into constituents of increasingly larger size. This structural similarity has led to the suggestion that they rely on shared cognitive and neural resources. In this article, we compare the conceptual and formal properties of hierarchy in language and action using set theory. We show that the strong compositionality of language requires a particular formalism, a magma, to describe the algebraic structure corresponding to the set of hierarchical structures underlying sentences. When this formalism is applied to actions, it appears to be both too strong and too weak. To overcome these limitations, which are related to the weak compositionality and sequential nature of action structures, we formalize the algebraic structure corresponding to the set of actions as a trace monoid. We aim to capture the different system properties of language and action in terms of the distinction between hierarchical sets and hierarchical sequences and discuss the implications for the way both systems could be represented in the brain.
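    The algebraic contrast at the heart of this comparison can be made concrete. The following is a minimal sketch in standard notation (the symbols are illustrative shorthand, not the paper's own formalism):

```latex
% A magma: a set M closed under a binary operation, with no further laws.
% Because the operation is not associative, (a . b) . c and a . (b . c)
% are distinct objects, which is what lets a magma encode strictly
% hierarchical (set-like) structure.
\[
(M, \cdot), \qquad \cdot : M \times M \to M
\]

% A trace monoid: the free monoid over an alphabet \Sigma, quotiented by
% the congruence that lets independent actions (pairs in I) commute.
% The result is a hierarchical sequence, weaker than full hierarchy.
\[
\mathbb{M}(\Sigma, I) = \Sigma^{*} / \langle\, ab = ba \mid (a,b) \in I \,\rangle
\]
```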
  • Coopmans, C. W., Mai, A., & Martin, A. E. (2024). “Not” in the brain and behavior. PLOS Biology, 22: e3002656. doi:10.1371/journal.pbio.3002656.
  • Cornelis, S. S., IntHout, J., Runhart, E. H., Grunewald, O., Lin, S., Corradi, Z., Khan, M., Hitti-Malin, R. J., Whelan, L., Farrar, G. J., Sharon, D., Van den Born, L. I., Arno, G., Simcoe, M., Michaelides, M., Webster, A. R., Roosing, S., Mahroo, O. A., Dhaenens, C.-M., Cremers, F. P. M., & ABCA4 Study Group (2024). Representation of women among individuals with mild variants in ABCA4-associated retinopathy: A meta-analysis. JAMA Ophthalmology, 142(5), 463-471. doi:10.1001/jamaophthalmol.2024.0660.

    Abstract

    Importance
    Previous studies indicated that female sex might be a modifier in Stargardt disease, which is an ABCA4-associated retinopathy (ABCA4-AR).

    Objective
    To investigate whether women are overrepresented among individuals with ABCA4-associated retinopathy who are carrying at least 1 mild allele or carrying nonmild alleles.

    Data Sources
    Literature data, data from 2 European centers, and a new study. Data from a Radboudumc database and from the Rotterdam Eye Hospital were used for exploratory hypothesis testing.

    Study Selection
    Studies investigating the sex ratio in individuals with ABCA4-AR and data from centers that collected ABCA4 variant and sex data. The literature search was performed on February 1, 2023; data from the centers were from before 2023.

    Data Extraction and Synthesis
    Random-effects meta-analyses were conducted to test whether the proportions of women among individuals with ABCA4-associated retinopathy with mild and nonmild variants differed from 0.5, including subgroup analyses for mild alleles. Sensitivity analyses were performed excluding data with possibly incomplete variant identification. χ² tests were conducted to compare the proportions of women in adult-onset autosomal non–ABCA4-associated retinopathy and adult-onset ABCA4-associated retinopathy, and to investigate whether women with suspected ABCA4-associated retinopathy are more likely to obtain a genetic diagnosis. Data analyses were performed from March to October 2023.

    Main Outcomes and Measures
    Proportion of women per ABCA4-associated retinopathy group. The exploratory testing included sex ratio comparisons for individuals with ABCA4-associated retinopathy vs those with other autosomal retinopathies and for individuals with ABCA4-associated retinopathy who underwent genetic testing vs those who did not.

    Results
    Women were significantly overrepresented in the mild variant group (proportion, 0.59; 95% CI, 0.56-0.62; P < .001) but not in the nonmild variant group (proportion, 0.50; 95% CI, 0.46-0.54; P = .89). Sensitivity analyses confirmed these results. Subgroup analyses on mild variants showed differences in the proportions of women. Furthermore, in the Radboudumc database, the proportion of adult women among individuals with ABCA4-associated retinopathy (652/1154 = 0.56) was 0.10 (95% CI, 0.05-0.15) higher than among individuals with other retinopathies (280/602 = 0.47).

    Conclusions and Relevance
    This meta-analysis supports the likelihood that sex is a modifier in developing ABCA4-associated retinopathy for individuals with a mild ABCA4 allele. This finding may be relevant for prognosis predictions and recurrence risks for individuals with ABCA4-associated retinopathy. Future studies should further investigate whether the overrepresentation of women is caused by differences in the disease mechanism, by differences in health care–seeking behavior, or by health care discrimination between women and men with ABCA4-AR.
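    For readers unfamiliar with the method named above, the following is a minimal sketch (not the authors' code) of a random-effects meta-analysis of proportions: study-level proportions of women are pooled on the logit scale with DerSimonian-Laird weights and tested against 0.5. The per-study counts are invented for illustration.

```python
# Sketch of a DerSimonian-Laird random-effects meta-analysis of proportions.
import numpy as np
from scipy import stats

# Hypothetical per-study counts: number of women and total sample size.
women = np.array([120, 85, 240, 60])
total = np.array([200, 150, 400, 110])

p = women / total
y = np.log(p / (1 - p))              # logit-transformed proportions
v = 1 / women + 1 / (total - women)  # approximate within-study variances

# DerSimonian-Laird estimate of the between-study variance tau^2
w = 1 / v
y_fixed = np.sum(w * y) / np.sum(w)
Q = np.sum(w * (y - y_fixed) ** 2)
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - (len(y) - 1)) / c)

# Random-effects pooled estimate and test of H0: proportion = 0.5 (logit = 0)
w_star = 1 / (v + tau2)
y_pooled = np.sum(w_star * y) / np.sum(w_star)
se = np.sqrt(1 / np.sum(w_star))
pooled = 1 / (1 + np.exp(-y_pooled))
ci = 1 / (1 + np.exp(-(y_pooled + np.array([-1.96, 1.96]) * se)))
p_value = 2 * stats.norm.sf(abs(y_pooled / se))
print(f"pooled proportion {pooled:.3f}, 95% CI {ci[0]:.3f}-{ci[1]:.3f}, p = {p_value:.4f}")
```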
  • Corps, R. E., Liao, M., & Pickering, M. J. (2023). Evidence for two stages of prediction in non-native speakers: A visual-world eye-tracking study. Bilingualism: Language and Cognition, 26(1), 231-243. doi:10.1017/S1366728922000499.

    Abstract

    Comprehenders predict what a speaker is likely to say when listening to non-native (L2) and native (L1) utterances. But what are the characteristics of L2 prediction, and how does it relate to L1 prediction? We addressed this question in a visual-world eye-tracking experiment, which tested when L2 English comprehenders integrated perspective into their predictions. Male and female participants listened to male and female speakers producing sentences (e.g., I would like to wear the nice…) about stereotypically masculine (target: tie; distractor: drill) and feminine (target: dress; distractor: hairdryer) objects. Participants predicted associatively, fixating objects semantically associated with critical verbs (here, the tie and the dress). They also predicted stereotypically consistent objects (e.g., the tie rather than the dress, given the male speaker). Consistent predictions were made later than associative predictions, and were delayed for L2 speakers relative to L1 speakers. These findings suggest prediction involves both automatic and non-automatic stages.
  • Corps, R. E. (2023). What do we know about the mechanisms of response planning in dialog? In Psychology of Learning and Motivation (pp. 41-81). doi:10.1016/bs.plm.2023.02.002.

    Abstract

    During dialog, interlocutors take turns at speaking with little gap or overlap between their contributions. But language production in monolog is comparatively slow. Theories of dialog tend to agree that interlocutors manage these timing demands by planning a response early, before the current speaker reaches the end of their turn. In the first half of this chapter, I review experimental research supporting these theories. But this research also suggests that planning a response early, while simultaneously comprehending, is difficult. Does response planning need to be this difficult during dialog? In other words, is early planning always necessary? In the second half of this chapter, I discuss research that suggests the answer to this question is no. In particular, corpora of natural conversation demonstrate that speakers do not directly respond to the immediately preceding utterance of their partner—instead, they continue an utterance they produced earlier. This parallel talk likely occurs because speakers are highly incremental and plan only part of their utterance before speaking, leading to pauses, hesitations, and disfluencies. As a result, speakers do not need to engage in extensive advance planning. Thus, laboratory studies do not provide a full picture of language production in dialog, and further research using naturalistic tasks is needed.
  • Corps, R. E., & Meyer, A. S. (2023). Word frequency has similar effects in picture naming and gender decision: A failure to replicate Jescheniak and Levelt (1994). Acta Psychologica, 241: 104073. doi:10.1016/j.actpsy.2023.104073.

    Abstract

    Word frequency plays a key role in theories of lexical access, which assume that the word frequency effect (WFE, faster access to high-frequency than low-frequency words) occurs as a result of differences in the representation and processing of the words. In a seminal paper, Jescheniak and Levelt (1994) proposed that the WFE arises during the retrieval of word forms, rather than the retrieval of their syntactic representations (their lemmas) or articulatory commands. An important part of Jescheniak and Levelt's argument was that they found a stable WFE in a picture naming task, which requires complete lexical access, but not in a gender decision task, which only requires access to the words' lemmas and not their word forms. We report two attempts to replicate this pattern, one with new materials, and one with Jescheniak and Levelt's original pictures. In both studies we found a strong WFE when the pictures were shown for the first time, but much weaker effects on their second and third presentation. Importantly, these patterns were seen in both the picture naming and the gender decision tasks, suggesting that either word frequency does not exclusively affect word form retrieval, or that the gender decision task does not exclusively tap lemma access.

    Additional information

    raw data and analysis scripts
  • Corps, R. E., Yang, F., & Pickering, M. (2023). Evidence against egocentric prediction during language comprehension. Royal Society Open Science, 10(12): 231252. doi:10.1098/rsos.231252.

    Abstract

    Although previous research has demonstrated that language comprehension can be egocentric, there is little evidence for egocentricity during prediction. In particular, comprehenders do not appear to predict egocentrically when the context makes it clear what the speaker is likely to refer to. But do comprehenders predict egocentrically when the context does not make it clear? We tested this hypothesis using a visual-world eye-tracking paradigm, in which participants heard sentences containing the gender-neutral pronoun They (e.g. They would like to wear…) while viewing four objects (e.g. tie, dress, drill, hairdryer). Two of these objects were plausible targets of the verb (tie and dress), and one was stereotypically compatible with the participant's gender (tie if the participant was male; dress if the participant was female). Participants rapidly fixated targets more than distractors, but there was no evidence that participants ever predicted egocentrically, fixating objects stereotypically compatible with their own gender. These findings suggest that participants do not fall back on their own egocentric perspective when predicting, even when they know that context does not make it clear what the speaker is likely to refer to.
  • Corps, R. E., & Pickering, M. (2024). Response planning during question-answering: Does deciding what to say involve deciding how to say it? Psychonomic Bulletin & Review, 31, 839-848. doi:10.3758/s13423-023-02382-3.

    Abstract

    To answer a question, speakers must determine their response and formulate it in words. But do they decide on a response before formulation, or do they formulate different potential answers before selecting one? We addressed this issue in a verbal question-answering experiment. Participants answered questions more quickly when they had one potential answer (e.g., Which tourist attraction in Paris is very tall?) than when they had multiple potential answers (e.g., What is the name of a Shakespeare play?). Participants also answered more quickly when the potential answers were on average short rather than long, regardless of whether there was only one or multiple potential answers. Thus, participants were not affected by the linguistic complexity of unselected but plausible answers. These findings suggest that participants select a single answer before formulation.
  • Corps, R. E., & Pickering, M. (2024). The role of answer content and length when preparing answers to questions. Scientific Reports, 14: 17110. doi:10.1038/s41598-024-68253-6.

    Abstract

    Research suggests that interlocutors manage the timing demands of conversation by preparing what they want to say early. In three experiments, we used a verbal question-answering task to investigate what aspects of their response speakers prepare early. In all three experiments, participants answered more quickly when the critical content (here, barks) necessary for answer preparation occurred early (e.g., Which animal barks and is also a common household pet?) rather than late (e.g., Which animal is a common household pet and also barks?). In the individual experiments, we found no convincing evidence that participants were slower to produce longer answers, consisting of multiple words, than shorter answers, consisting of a single word. There was also no interaction between these two factors. A combined analysis of the first two experiments confirmed this lack of interaction, and demonstrated that participants were faster to answer questions when the critical content was available early rather than late and when the answer was short rather than long. These findings provide tentative evidence for an account in which interlocutors prepare the content of their answer as soon as they can, but sometimes do not prepare its length (and thus form) until they are ready to speak.

    Additional information

    supplementary tables
  • Corps, R. E., & Meyer, A. S. (2024). The influence of familiarisation and item repetition on the name agreement effect in picture naming. Quarterly Journal of Experimental Psychology. Advance online publication. doi:10.1177/17470218241274661.

    Abstract

    Name agreement (NA) refers to the degree to which speakers agree on a picture’s name. A robust finding is that speakers are faster to name pictures with high agreement (HA) than those with low agreement (LA). This NA effect is thought to occur because LA pictures strongly activate several names, and so speakers need time to select one. HA pictures, in contrast, strongly activate a single name, and so there is no need to select one name out of several alternatives. Recent models of lexical access suggest that the structure of the mental lexicon changes with experience. Thus, speakers should consider a range of names when naming LA pictures, but the extent to which they consider each of these names should change with experience. We tested these hypotheses in two picture-naming experiments. In Experiment 1, participants were faster to name HA than LA pictures when they named each picture once. Importantly, they were faster to produce modal names (provided by most participants) than alternative names for LA pictures, consistent with the view that speakers activate multiple names for LA pictures. In Experiment 2, participants were familiarised with the modal name before the experiment and named each picture three times. Although there was still an NA effect when participants named the pictures the first time, it was reduced in comparison to Experiment 1 and was further reduced with each picture repetition. Thus, familiarisation and repetition reduced the NA effect but did not eliminate it, suggesting that speakers activate a range of plausible names.
  • Corradi, Z., Khan, M., Hitti-Malin, R., Mishra, K., Whelan, L., Cornelis, S. S., ABCA4-Study Group, Hoyng, C. B., Kämpjärvi, K., Klaver, C. C. W., Liskova, P., Stohr, H., Weber, B. H. F., Banfi, S., Farrar, G. J., Sharon, D., Zernant, J., Allikmets, R., Dhaenens, C.-M., & Cremers, F. P. M. (2023). Targeted sequencing and in vitro splice assays shed light on ABCA4-associated retinopathies missing heritability. Human Genetics and Genomics Advances, 4(4): 100237. doi:10.1016/j.xhgg.2023.100237.

    Abstract

    The ABCA4 gene is the most frequently mutated Mendelian retinopathy-associated gene. Biallelic variants lead to a variety of phenotypes; however, for thousands of cases the underlying variants remain unknown. Here, we aim to shed further light on the missing heritability of ABCA4-associated retinopathy by analyzing a large cohort of macular dystrophy probands. A total of 858 probands were collected from 26 centers, of whom 722 carried no or one pathogenic ABCA4 variant while 136 cases carried two ABCA4 alleles, one of which was a frequent mild variant, suggesting that deep-intronic variants (DIVs) or other cis-modifiers might have been missed. After single molecule molecular inversion probes (smMIPs)-based sequencing of the complete 128-kb ABCA4 locus, the effect of putative splice variants was assessed in vitro by midigene splice assays in HEK293T cells. The breakpoints of copy number variants (CNVs) were determined by junction PCR and Sanger sequencing. ABCA4 sequence analysis solved 207/520 (39.8%) naïve or unsolved cases and 70/202 (34.7%) monoallelic cases, while additional causal variants were identified in 54/136 (39.7%) of probands carrying two variants. Seven novel DIVs and six novel non-canonical splice site variants were detected in a total of 35 alleles and characterized, including the c.6283-321C>G variant leading to a complex splicing defect. Additionally, four novel CNVs were identified and characterized in five alleles. These results confirm that smMIPs-based sequencing of the complete ABCA4 gene provides a cost-effective method to genetically solve retinopathy cases and that several rare structural and splice-altering defects remain undiscovered in STGD1 cases.
  • Cortázar-Chinarro, M., Lattenkamp, E. Z., Meyer-Lucht, Y., Luquet, E., Laurila, A., & Höglund, J. (2017). Drift, selection, or migration? Processes affecting genetic differentiation and variation along a latitudinal gradient in an amphibian. BMC Evolutionary Biology, 17: 189. doi:10.1186/s12862-017-1022-z.

    Abstract

    Past events like fluctuations in population size and post-glacial colonization processes may influence the relative importance of genetic drift, migration and selection when determining the present-day patterns of genetic variation. We disentangle how drift, selection and migration shape neutral and adaptive genetic variation in 12 moor frog populations along a 1700 km latitudinal gradient. We studied genetic differentiation and variation at an MHC exon II locus and a set of 18 microsatellites.
    Results

    Using outlier analyses, we identified the MHC II exon 2 (corresponding to the β-2 domain) locus and one microsatellite locus (RCO8640) to be subject to diversifying selection, while five microsatellite loci showed signals of stabilizing selection among populations. STRUCTURE and DAPC analyses on the neutral microsatellites assigned populations to a northern and a southern cluster, reflecting two different post-glacial colonization routes found in previous studies. Genetic variation overall was lower in the northern cluster. The signature of selection on MHC exon II was weaker in the northern cluster, possibly as a consequence of smaller and more fragmented populations.
    Conclusion

    Our results show that historical demographic processes combined with selection and drift have led to a complex pattern of differentiation along the gradient where some loci are more divergent among populations than predicted from drift expectations due to diversifying selection, while other loci are more uniform among populations due to stabilizing selection. Importantly, both overall and MHC genetic variation are lower at northern latitudes. Due to lower evolutionary potential, the low genetic variation in northern populations may increase the risk of extinction when confronted with emerging pathogens and climate change.
  • Costa, A., Cutler, A., & Sebastian-Galles, N. (1998). Effects of phoneme repertoire on phoneme decision. Perception and Psychophysics, 60, 1022-1031.

    Abstract

    In three experiments, listeners detected vowel or consonant targets in lists of CV syllables constructed from five vowels and five consonants. Responses were faster in a predictable context (e.g., listening for a vowel target in a list of syllables all beginning with the same consonant) than in an unpredictable context (e.g., listening for a vowel target in a list of syllables beginning with different consonants). In Experiment 1, the listeners’ native language was Dutch, in which vowel and consonant repertoires are similar in size. The difference between predictable and unpredictable contexts was comparable for vowel and consonant targets. In Experiments 2 and 3, the listeners’ native language was Spanish, which has four times as many consonants as vowels; here effects of an unpredictable consonant context on vowel detection were significantly greater than effects of an unpredictable vowel context on consonant detection. This finding suggests that listeners’ processing of phonemes takes into account the constitution of their language’s phonemic repertoire and the implications that this has for contextual variability.
  • Coventry, K. R., Gudde, H. B., Diessel, H., Collier, J., Guijarro-Fuentes, P., Vulchanova, M., Vulchanov, V., Todisco, E., Reile, M., Breunesse, M., Plado, H., Bohnemeyer, J., Bsili, R., Caldano, M., Dekova, R., Donelson, K., Forker, D., Park, Y., Pathak, L. S., Peeters, D., Pizzuto, G., Serhan, B., Apse, L., Hesse, F., Hoang, L., Hoang, P., Igari, Y., Kapiley, K., Haupt-Khutsishvili, T., Kolding, S., Priiki, K., Mačiukaitytė, I., Mohite, V., Nahkola, T., Tsoi, S. Y., Williams, S., Yasuda, S., Cangelosi, A., Duñabeitia, J. A., Mishra, R. K., Rocca, R., Šķilters, J., Wallentin, M., Žilinskaitė-Šinkūnienė, E., & Incel, O. D. (2023). Spatial communication systems across languages reflect universal action constraints. Nature Human Behaviour, 7, 2099-2110. doi:10.1038/s41562-023-01697-4.

    Abstract

    The extent to which languages share properties reflecting the non-linguistic constraints of the speakers who speak them is key to the debate regarding the relationship between language and cognition. A critical case is spatial communication, where it has been argued that semantic universals should exist, if anywhere. Here, using an experimental paradigm able to separate variation within a language from variation between languages, we tested the use of spatial demonstratives—the most fundamental and frequent spatial terms across languages. In n = 874 speakers across 29 languages, we show that speakers of all tested languages use spatial demonstratives as a function of being able to reach or act on an object being referred to. In some languages, the position of the addressee is also relevant in selecting between demonstrative forms. Commonalities and differences across languages in spatial communication can be understood in terms of universal constraints on action shaping spatial language and cognition.
  • Cox, C., Bergmann, C., Fowler, E., Keren-Portnoy, T., Roepstorff, A., Bryant, G., & Fusaroli, R. (2023). A systematic review and Bayesian meta-analysis of the acoustic features of infant-directed speech. Nature Human Behaviour, 7, 114-133. doi:10.1038/s41562-022-01452-1.

    Abstract

    When speaking to infants, adults often produce speech that differs systematically from that directed to other adults. In order to quantify the acoustic properties of this speech style across a wide variety of languages and cultures, we extracted results from empirical studies on the acoustic features of infant-directed speech (IDS). We analyzed data from 88 unique studies (734 effect sizes) on the following five acoustic parameters that have been systematically examined in the literature: i) fundamental frequency (fo), ii) fo variability, iii) vowel space area, iv) articulation rate, and v) vowel duration. Moderator analyses were conducted in hierarchical Bayesian robust regression models in order to examine how these features change with infant age and differ across languages, experimental tasks and recording environments. The moderator analyses indicated that fo, articulation rate, and vowel duration became more similar to adult-directed speech (ADS) over time, whereas fo variability and vowel space area exhibited stability throughout development. These results point the way for future research to disentangle different accounts of the functions and learnability of IDS by conducting theory-driven comparisons among different languages and using computational models to formulate testable predictions.

    Additional information

    supplementary information
  • Crago, M. B., & Allen, S. E. M. (1998). Acquiring Inuktitut. In O. L. Taylor, & L. Leonard (Eds.), Language Acquisition Across North America: Cross-Cultural And Cross-Linguistic Perspectives (pp. 245-279). San Diego, CA, USA: Singular Publishing Group, Inc.
  • Crago, M. B., Chen, C., Genesee, F., & Allen, S. E. M. (1998). Power and deference. Journal for a Just and Caring Education, 4(1), 78-95.
  • Creemers, A. (2023). Morphological processing in spoken-word recognition. In D. Crepaldi (Ed.), Linguistic morphology in the mind and brain (pp. 50-64). New York: Routledge.

    Abstract

    Most psycholinguistic studies on morphological processing have examined the role of morphological structure in the visual modality. This chapter discusses morphological processing in the auditory modality, which is an area of research that has only recently received more attention. It first discusses why results in the visual modality cannot straightforwardly be applied to the processing of spoken words, stressing the importance of acknowledging potential modality effects. It then gives a brief overview of the existing research on the role of morphology in the auditory modality, for which an increasing number of studies report that listeners show sensitivity to morphological structure. Finally, the chapter highlights insights gained by looking at morphological processing not only in reading, but also in listening, and it discusses directions for future research.
  • Yu, Y., Cui, H., Haas, S. S., New, F., Sanford, N., Yu, K., Zhan, D., Yang, G., Gao, J., Wei, D., Qiu, J., Banaj, N., Boomsma, D. I., Breier, A., Brodaty, H., Buckner, R. L., Buitelaar, J. K., Cannon, D. M., Caseras, X., Clark, V. P., Conrod, P. J., Crivello, F., Crone, E. A., Dannlowski, U., Davey, C. G., De Haan, L., De Zubicaray, G. I., Di Giorgio, A., Fisch, L., Fisher, S. E., Franke, B., Glahn, D. C., Grotegerd, D., Gruber, O., Gur, R. E., Gur, R. C., Hahn, T., Harrison, B. J., Hatton, S., Hickie, I. B., Hulshoff Pol, H. E., Jamieson, A. J., Jernigan, T. L., Jiang, J., Kalnin, A. J., Kang, S., Kochan, N. A., Kraus, A., Lagopoulos, J., Lazaro, L., McDonald, B. C., McDonald, C., McMahon, K. L., Mwangi, B., Piras, F., Rodriguez-Cruces, R., Royer, J., Sachdev, P. S., Satterthwaite, T. D., Saykin, A. J., Schumann, G., Sevaggi, P., Smoller, J. W., Soares, J. C., Spalletta, G., Tamnes, C. K., Trollor, J. N., Van't Ent, D., Vecchio, D., Walter, H., Wang, Y., Weber, B., Wen, W., Wierenga, L. M., Williams, S. C. R., Wu, M., Zunta-Soares, G. B., Bernhardt, B., Thompson, P., Frangou, S., Ge, R., & ENIGMA-Lifespan Working Group (2024). Brain-age prediction: Systematic evaluation of site effects, and sample age range and size. Human Brain Mapping, 45(10): e26768. doi:10.1002/hbm.26768.

    Abstract

    Structural neuroimaging data have been used to compute an estimate of the biological age of the brain (brain-age) which has been associated with other biologically and behaviorally meaningful measures of brain development and aging. The ongoing research interest in brain-age has highlighted the need for robust and publicly available brain-age models pre-trained on data from large samples of healthy individuals. To address this need we have previously released a developmental brain-age model. Here we expand this work to develop, empirically validate, and disseminate a pre-trained brain-age model to cover most of the human lifespan. To achieve this, we selected the best-performing model after systematically examining the impact of seven site harmonization strategies, age range, and sample size on brain-age prediction in a discovery sample of brain morphometric measures from 35,683 healthy individuals (age range: 5–90 years; 53.59% female). The pre-trained models were tested for cross-dataset generalizability in an independent sample comprising 2101 healthy individuals (age range: 8–80 years; 55.35% female) and for longitudinal consistency in a further sample comprising 377 healthy individuals (age range: 9–25 years; 49.87% female). This empirical examination yielded the following findings: (1) the accuracy of age prediction from morphometry data was higher when no site harmonization was applied; (2) dividing the discovery sample into two age-bins (5–40 and 40–90 years) provided a better balance between model accuracy and explained age variance than other alternatives; (3) model accuracy for brain-age prediction plateaued at a sample size exceeding 1600 participants. These findings have been incorporated into CentileBrain (https://centilebrain.org/#/brainAGE2), an open-science, web-based platform for individualized neuroimaging metrics.
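    As a rough illustration of what a brain-age model of this kind computes, the toy sketch below regresses chronological age on morphometric features and derives a brain-age gap for held-out individuals. It is not the CentileBrain pipeline; the feature count and the simulated data are invented.

```python
# Toy brain-age model: ridge regression from simulated morphometry to age.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
n, n_features = 2000, 150  # hypothetical sample and feature counts

age = rng.uniform(5, 90, n)
# Simulated morphometry: each feature drifts linearly with age, plus noise.
X = np.outer(age, rng.normal(0, 0.05, n_features)) + rng.normal(0, 1, (n, n_features))

X_train, X_test, age_train, age_test = train_test_split(X, age, random_state=0)
model = Ridge(alpha=1.0).fit(X_train, age_train)

predicted_age = model.predict(X_test)
brain_age_gap = predicted_age - age_test  # positive gap = older-looking brain
print(f"MAE: {mean_absolute_error(age_test, predicted_age):.2f} years")
```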
  • Cutler, A. (2004). Segmentation of spoken language by normal adult listeners. In R. Kent (Ed.), MIT encyclopedia of communication sciences and disorders (pp. 392-395). Cambridge, MA: MIT Press.
  • Cutler, A., Weber, A., Smits, R., & Cooper, N. (2004). Patterns of English phoneme confusions by native and non-native listeners. Journal of the Acoustical Society of America, 116(6), 3668-3678. doi:10.1121/1.1810292.

    Abstract

    Native American English and non-native (Dutch) listeners identified either the consonant or the vowel in all possible American English CV and VC syllables. The syllables were embedded in multispeaker babble at three signal-to-noise ratios (0, 8, and 16 dB). The phoneme identification performance of the non-native listeners was less accurate than that of the native listeners. All listeners were adversely affected by noise. With these isolated syllables, initial segments were harder to identify than final segments. Crucially, the effects of language background and noise did not interact; the performance asymmetry between the native and non-native groups was not significantly different across signal-to-noise ratios. It is concluded that the frequently reported disproportionate difficulty of non-native listening under disadvantageous conditions is not due to a disproportionate increase in phoneme misidentifications.
  • Cutler, A. (2004). On spoken-word recognition in a second language. Newsletter, American Association of Teachers of Slavic and East European Languages, 47, 15.
  • Cutler, A., & Henton, C. G. (2004). There's many a slip 'twixt the cup and the lip. In H. Quené, & V. Van Heuven (Eds.), On speech and Language: Studies for Sieb G. Nooteboom (pp. 37-45). Utrecht: Netherlands Graduate School of Linguistics.

    Abstract

    The retiring academic may look back upon, inter alia, years of conference attendance. Speech error researchers are uniquely fortunate because they can collect data in any situation involving communication; accordingly, the retiring speech error researcher will have collected data at those conferences. We here address the issue of whether error data collected in situations involving conviviality (such as at conferences) is representative of error data in general. Our approach involved a comparison, across three levels of linguistic processing, between a specially constructed Conviviality Sample and the largest existing source of speech error data, the newly available Fromkin Speech Error Database. The results indicate that there are grounds for regarding the data in the Conviviality Sample as a better than average reflection of the true population of all errors committed. These findings encourage us to recommend further data collection in collaboration with like-minded colleagues.
  • Cutler, A. (2004). Twee regels voor academische vorming. In H. Procee (Ed.), Bij die wereld wil ik horen! Zesendertig columns en drie essays over de vorming tot academicus. (pp. 42-45). Amsterdam: Boom.
  • Cutler, A., Norris, D., & Williams, J. (1987). A note on the role of phonological expectations in speech segmentation. Journal of Memory and Language, 26, 480-487. doi:10.1016/0749-596X(87)90103-3.

    Abstract

    Word-initial CVC syllables are detected faster in words beginning consonant-vowel-consonant-vowel (CVCV-) than in words beginning consonant-vowel-consonant-consonant (CVCC-). This effect was reported independently by M. Taft and G. Hambly (1985, Journal of Memory and Language, 24, 320–335) and by A. Cutler, J. Mehler, D. Norris, and J. Segui (1986, Journal of Memory and Language, 25, 385–400). Taft and Hambly explained the effect in terms of lexical factors. This explanation cannot account for Cutler et al.'s results, in which the effect also appeared with nonwords and foreign words. Cutler et al. suggested that CVCV-sequences might simply be easier to perceive than CVCC-sequences. The present study confirms this suggestion, and explains it as a reflection of listener expectations constructed on the basis of distributional characteristics of the language.
  • Cutler, A., Mister, E., Norris, D., & Sebastián-Gallés, N. (2004). La perception de la parole en espagnol: Un cas particulier? In L. Ferrand, & J. Grainger (Eds.), Psycholinguistique cognitive: Essais en l'honneur de Juan Segui (pp. 57-74). Brussels: De Boeck.
  • Cutler, A. (1991). Linguistic rhythm and speech segmentation. In J. Sundberg, L. Nord, & R. Carlson (Eds.), Music, language, speech and brain (pp. 157-166). London: Macmillan.
  • Cutler, A., Mehler, J., Norris, D., & Segui, J. (1987). Phoneme identification and the lexicon. Cognitive Psychology, 19, 141-177. doi:10.1016/0010-0285(87)90010-7.
  • Cutler, A. (1991). Proceed with caution. New Scientist, (1799), 53-54.
  • Cutler, A. (1998). Prosodic structure and word recognition. In A. D. Friederici (Ed.), Language comprehension: A biological perspective (pp. 41-70). Heidelberg: Springer.
  • Cutler, A. (1987). Speaking for listening. In A. Allport, D. MacKay, W. Prinz, & E. Scheerer (Eds.), Language perception and production: Relationships between listening, speaking, reading and writing (pp. 23-40). London: Academic Press.

    Abstract

    Speech production is constrained at all levels by the demands of speech perception. The speaker's primary aim is successful communication, and to this end semantic, syntactic and lexical choices are directed by the needs of the listener. Even at the articulatory level, some aspects of production appear to be perceptually constrained, for example the blocking of phonological distortions under certain conditions. An apparent exception to this pattern is word boundary information, which ought to be extremely useful to listeners, but which is not reliably coded in speech. It is argued that the solution to this apparent problem lies in rethinking the concept of the boundary of the lexical access unit. Speech rhythm provides clear information about the location of stressed syllables, and listeners do make use of this information. If stressed syllables can serve as the determinants of word lexical access codes, then once again speakers are providing precisely the necessary form of speech information to facilitate perception.
  • Cutler, A., Butterfield, S., & Williams, J. (1987). The perceptual integrity of syllabic onsets. Journal of Memory and Language, 26, 406-418. doi:10.1016/0749-596X(87)90099-4.
  • Cutler, A., & Carter, D. (1987). The predominance of strong initial syllables in the English vocabulary. Computer Speech and Language, 2, 133-142. doi:10.1016/0885-2308(87)90004-0.

    Abstract

    Studies of human speech processing have provided evidence for a segmentation strategy in the perception of continuous speech, whereby a word boundary is postulated, and a lexical access procedure initiated, at each metrically strong syllable. The likely success of this strategy was here estimated against the characteristics of the English vocabulary. Two computerized dictionaries were found to list approximately three times as many words beginning with strong syllables (i.e. syllables containing a full vowel) as beginning with weak syllables (i.e. syllables containing a reduced vowel). Consideration of frequency of lexical word occurrence reveals that words beginning with strong syllables occur on average more often than words beginning with weak syllables. Together, these findings motivate an estimate for everyday speech recognition that approximately 85% of lexical words (i.e. excluding function words) will begin with strong syllables. This estimate was tested against a corpus of 190 000 words of spontaneous British English conversation. In this corpus, 90% of lexical words were found to begin with strong syllables. This suggests that a strategy of postulating word boundaries at the onset of strong syllables would have a high success rate in that few actual lexical word onsets would be missed.
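    The step from a threefold type advantage to an 85% token estimate follows from frequency weighting, which can be sketched as a back-calculation (the mean-frequency ratio below is chosen to reproduce the reported estimate and is not a figure from the paper):

```latex
\[
P(\text{strong-initial token})
  = \frac{N_{s}\,\bar{f}_{s}}{N_{s}\,\bar{f}_{s} + N_{w}\,\bar{f}_{w}}
  = \frac{3\,\bar{f}_{s}}{3\,\bar{f}_{s} + \bar{f}_{w}},
\]
% with N_s / N_w = 3 strong-initial to weak-initial word types; a mean
% token-frequency advantage of roughly \bar{f}_s \approx 1.9 \bar{f}_w
% then yields 5.7 / 6.7 \approx 0.85, i.e., the estimated 85%.
```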
  • Cutler, A. (1987). The task of the speaker and the task of the hearer [Commentary/Sperber & Wilson: Relevance]. Behavioral and Brain Sciences, 10, 715-716.
  • Cutler, A., & Butterfield, S. (1991). Word boundary cues in clear speech: A supplementary report. Speech Communication, 10, 335-353. doi:10.1016/0167-6393(91)90002-B.

    Abstract

    One of a listener's major tasks in understanding continuous speech is segmenting the speech signal into separate words. When listening conditions are difficult, speakers can help listeners by deliberately speaking more clearly. In four experiments, we examined how word boundaries are produced in deliberately clear speech. In an earlier report we showed that speakers do indeed mark word boundaries in clear speech, by pausing at the boundary and lengthening pre-boundary syllables; moreover, these effects are applied particularly to boundaries preceding weak syllables. In English, listeners use segmentation procedures which make word boundaries before strong syllables easier to perceive; thus marking word boundaries before weak syllables in clear speech will make clear precisely those boundaries which are otherwise hard to perceive. The present report presents supplementary data, namely prosodic analyses of the syllable following a critical word boundary. More lengthening and greater increases in intensity were applied in clear speech to weak syllables than to strong. Mean F0 was also increased to a greater extent on weak syllables than on strong. Pitch movement, however, increased to a greater extent on strong syllables than on weak. The effects were, however, very small in comparison to the durational effects we observed earlier for syllables preceding the boundary and for pauses at the boundary.
  • Dahan, D., & Tanenhaus, M. K. (2004). Continuous mapping from sound to meaning in spoken-language comprehension: Immediate effects of verb-based thematic constraints. Journal of Experimental Psychology: Learning, Memory, and Cognition, 30(2), 498-513. doi:10.1037/0278-7393.30.2.498.

    Abstract

    The authors used 2 “visual-world” eye-tracking experiments to examine lexical access using Dutch constructions in which the verb did or did not place semantic constraints on its subsequent subject noun phrase. In Experiment 1, fixations to the picture of a cohort competitor (overlapping with the onset of the referent’s name, the subject) did not differ from fixations to a distractor in the constraining-verb condition. In Experiment 2, cross-splicing introduced phonetic information that temporarily biased the input toward the cohort competitor. Fixations to the cohort competitor temporarily increased in both the neutral and constraining conditions. These results favor models in which mapping from the input onto meaning is continuous over models in which contextual effects follow access of an initial form-based competitor set.
  • Dai, B., McQueen, J. M., Hagoort, P., & Kösem, A. (2017). Pure linguistic interference during comprehension of competing speech signals. The Journal of the Acoustical Society of America, 141, EL249-EL254. doi:10.1121/1.4977590.

    Abstract

    Speech-in-speech perception can be challenging because the processing of competing acoustic and linguistic information leads to informational masking. Here, a method is proposed to isolate the linguistic component of informational masking while keeping the distractor's acoustic information unchanged. Participants performed a dichotic listening cocktail-party task before and after training on 4-band noise-vocoded sentences that became intelligible through the training. Distracting noise-vocoded speech interfered more with target speech comprehension after training (i.e., when intelligible) than before training (i.e., when unintelligible) at −3 dB SNR. These findings confirm that linguistic and acoustic information have distinct masking effects during speech-in-speech comprehension.
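    Noise vocoding, the manipulation used to create the training materials here, is straightforward to sketch: split the signal into frequency bands, extract each band's amplitude envelope, and use the envelope to modulate band-limited noise. The sketch below is an illustration rather than the study's stimulus-preparation code; the band edges and filter order are assumptions.

```python
# Minimal 4-band noise vocoder sketch.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(signal, sr, n_bands=4, f_lo=100.0, f_hi=8000.0):
    """Replace the fine structure of `signal` with noise, band by band."""
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)  # log-spaced band edges
    noise = np.random.default_rng(0).standard_normal(len(signal))
    out = np.zeros(len(signal))
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="band", fs=sr, output="sos")
        band = sosfiltfilt(sos, signal)
        envelope = np.abs(hilbert(band))   # amplitude envelope of the band
        carrier = sosfiltfilt(sos, noise)  # band-limited noise carrier
        out += envelope * carrier
    return out / np.max(np.abs(out))       # normalize to avoid clipping
```

    Applied to a recorded sentence sampled at rate `sr`, this yields the kind of 4-band vocoded speech that is unintelligible before, but intelligible after, training.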
  • Dalla Bella, S., Farrugia, N., Benoit, C.-E., Bégel, V., Verga, L., Harding, E., & Kotz, S. A. (2017). BAASTA: Battery for the Assessment of Auditory Sensorimotor and Timing Abilities. Behavior Research Methods, 49(3), 1128-1145. doi:10.3758/s13428-016-0773-6.

    Abstract

    The Battery for the Assessment of Auditory Sensorimotor and Timing Abilities (BAASTA) is a new tool for the systematic assessment of perceptual and sensorimotor timing skills. It spans a broad range of timing skills aimed at differentiating individual timing profiles. BAASTA consists of sensitive time perception and production tasks. Perceptual tasks include duration discrimination, anisochrony detection (with tones and music), and a version of the Beat Alignment Task. Perceptual thresholds for duration discrimination and anisochrony detection are estimated with a maximum likelihood procedure (MLP) algorithm. Production tasks use finger tapping and include unpaced and paced tapping (with tones and music), synchronization-continuation, and adaptive tapping to a sequence with a tempo change. BAASTA was tested in a proof-of-concept study with 20 non-musicians (Experiment 1). To validate the results of the MLP procedure, which is less widespread than standard staircase methods, three perceptual tasks of the battery (duration discrimination and anisochrony detection with tones and with music) were further tested in a second group of non-musicians using 2-down/1-up and 3-down/1-up staircase paradigms (n = 24) (Experiment 2). The results show that the timing profiles provided by BAASTA make it possible to detect cases of timing/rhythm disorders. In addition, perceptual thresholds yielded by the MLP algorithm, although generally comparable to the results provided by standard staircase methods, tend to be slightly lower. In sum, BAASTA provides a comprehensive battery for testing perceptual and sensorimotor timing skills and detecting timing/rhythm deficits.
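    For contrast with the MLP procedure, the 2-down/1-up staircase used in Experiment 2 can be simulated in a few lines. This is a simplified illustration rather than the BAASTA implementation; the listener model, step size, and stopping rule are invented.

```python
# Simulated 2-down/1-up staircase; the tracked level converges on ~70.7% correct.
import random

def run_staircase(true_threshold, start=50.0, step=4.0, n_reversals=10):
    """Estimate an anisochrony-detection threshold (in ms).

    A trial counts as correct when the current level exceeds the listener's
    true threshold plus Gaussian response noise (a crude stand-in for a
    real psychometric function).
    """
    level, streak, direction = start, 0, None
    reversals = []
    while len(reversals) < n_reversals:
        correct = level + random.gauss(0, 2.0) > true_threshold
        if correct:
            streak += 1
            if streak == 2:             # two correct in a row -> make it harder
                streak = 0
                if direction == +1:     # direction flip counts as a reversal
                    reversals.append(level)
                direction = -1
                level = max(1.0, level - step)
        else:                           # any error -> make it easier
            streak = 0
            if direction == -1:
                reversals.append(level)
            direction = +1
            level += step
    # Threshold estimate: mean of the last six reversal levels.
    return sum(reversals[-6:]) / len(reversals[-6:])

print(f"estimated threshold: {run_staircase(true_threshold=20.0):.1f} ms")
```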
  • Dalla Bella, S., Janaqi, S., Benoit, C.-E., Farrugia, N., Bégel, V., Verga, L., Harding, E. E., & Kotz, S. A. (2024). Unravelling individual rhythmic abilities using machine learning. Scientific Reports, 14(1): 1135. doi:10.1038/s41598-024-51257-7.

    Abstract

    Humans can easily extract the rhythm of a complex sound, like music, and move to its regular beat, like in dance. These abilities are modulated by musical training and vary significantly in untrained individuals. The causes of this variability are multidimensional and typically hard to grasp in single tasks. To date we lack a comprehensive model capturing the rhythmic fingerprints of both musicians and non-musicians. Here we harnessed machine learning to extract a parsimonious model of rhythmic abilities, based on behavioral testing (with perceptual and motor tasks) of individuals with and without formal musical training (n = 79). We demonstrate that variability in rhythmic abilities and their link with formal and informal music experience can be successfully captured by profiles including a minimal set of behavioral measures. These findings highlight that machine learning techniques can be employed successfully to distill profiles of rhythmic abilities, and ultimately shed light on individual variability and its relationship with both formal musical training and informal musical experiences.

    Additional information

    supplementary materials
  • Dediu, D. (2017). From biology to language change and diversity. In N. J. Enfield (Ed.), Dependencies in language: On the causal ontology of linguistic systems (pp. 39-52). Berlin: Language Science Press.
  • Dediu, D., Janssen, R., & Moisik, S. R. (2017). Language is not isolated from its wider environment: Vocal tract influences on the evolution of speech and language. Language and Communication, 54, 9-20. doi:10.1016/j.langcom.2016.10.002.

    Abstract

    Language is not a purely cultural phenomenon somehow isolated from its wider environment, and we may only understand its origins and evolution by seriously considering its embedding in this environment as well as its multimodal nature. By environment here we understand other aspects of culture (such as communication technology, attitudes towards language contact, etc.), of the physical environment (ultraviolet light incidence, air humidity, etc.), and of the biological infrastructure for language and speech. We are specifically concerned in this paper with the latter, in the form of the biases, constraints and affordances that the anatomy and physiology of the vocal tract create on speech and language. In a nutshell, our argument is that (a) there is an under-appreciated amount of inter-individual variation in vocal tract (VT) anatomy and physiology, (b) variation that is non-randomly distributed across populations, and that (c) results in systematic differences in phonetics and phonology between languages. Relevant differences in VT anatomy include the overall shape of the hard palate, the shape of the alveolar ridge, the relationship between the lower and upper jaw, to mention just a few, and our data offer a new way to systematically explore such differences and their potential impact on speech. These differences generate very small biases that nevertheless can be amplified by the repeated use and transmission of language, affecting language diachrony and resulting in cross-linguistic synchronic differences. Moreover, the same type of biases and processes might have played an essential role in the emergence and evolution of language, and might allow us a glimpse into the speech and language of extinct humans by, for example, reconstructing the anatomy of parts of their vocal tract from the fossil record and extrapolating the biases we find in present-day humans.
  • Defina, R., Allen, S. E. M., Davidson, L., Hellwig, B., Kelly, B. F., & Kidd, E. (2023). Sketch Acquisition Manual (SAM), Part I: The sketch corpus. Language Documentation and Conservation Special Publication, 28, 5-38. Retrieved from https://hdl.handle.net/10125/74719.

    Abstract

    This paper presents the first part of a guide for documenting and describing child language, child-directed language and socialization patterns in diverse languages and cultures. The guide is intended for anyone interested in working across child language and language documentation, including, for example, field linguists and language documenters, community language workers, child language researchers or graduate students. We assume some basic familiarity with language documentation principles and methods, and, based on this, provide step-by-step suggestions for collecting, analyzing and presenting child data. This first part of the guide focuses on constructing a sketch corpus that consists of minimally five hours of annotated and archived data and which documents communicative practices of children between the ages of 2 and 4.
  • Defina, R., Allen, S. E. M., Davidson, L., Hellwig, B., Kelly, B. F., & Kidd, E. (2023). Sketch Acquisition Manual (SAM), Part II: The acquisition sketch. Language Documentation and Conservation Special Publication, 28, 39-86. Retrieved from https://hdl.handle.net/10125/74720.

    Abstract

    This paper presents the second part of a guide for documenting and describing child language, child-directed language and socialization patterns in diverse languages and cultures. The guide is intended for anyone interested in working across child language and language documentation, including, for example, field linguists and language documenters, community language workers, child language researchers or graduate students. We assume some basic familiarity with language documentation principles and methods, and, based on this, provide step-by-step suggestions for collecting, analyzing and presenting child data. This second part of the guide focuses on developing a child language acquisition sketch. It takes the sketch corpus as its basis (which was introduced in the first part of this guide), and presents a model for analyzing and describing the corpus data.
  • Defina, R., Dingemanse, M., & Van Putten, S. (2024). Linguistic fieldwork as team science. In E. Aboh (Ed.), Predication in African Languages (pp. 20-42). Amsterdam: John Benjamins. doi:10.1075/slcs.235.01def.

    Abstract

    Linguistic fieldwork is increasingly moving on from the traditional model of the lone fieldworker with a notebook to collaborative projects with key roles for native speakers and other experts, involving different kinds of stimulus-based elicitation methods as well as extensive video documentation. Several cohorts of colleagues and students have been influenced by this inclusive and interdisciplinary view of linguistic fieldwork. We describe the challenges and benefits of doing multi-methods collaborative fieldwork. As linguistics inevitably moves in the direction of multiple methods, interdisciplinarity and team science, now is the time to reflect critically on how best to contribute to a cumulative science of language.
  • Den Os, E., & Boves, L. (2004). Natural multimodal interaction for design applications. In P. Cunningham (Ed.), Adoption and the knowledge economy (pp. 1403-1410). Amsterdam: IOS Press.
  • Deriziotis, P., & Fisher, S. E. (2017). Speech and Language: Translating the Genome. Trends in Genetics, 33(9), 642-656. doi:10.1016/j.tig.2017.07.002.

    Abstract

    Investigation of the biological basis of human speech and language is being transformed by developments in molecular technologies, including high-throughput genotyping and next-generation sequencing of whole genomes. These advances are shedding new light on the genetic architecture underlying language-related disorders (speech apraxia, specific language impairment, developmental dyslexia) as well as that contributing to variation in relevant skills in the general population. We discuss how state-of-the-art methods are uncovering a range of genetic mechanisms, from rare mutations of large effect to common polymorphisms that increase risk in a subtle way, while converging on neurogenetic pathways that are shared between distinct disorders. We consider the future of the field, highlighting the unusual challenges and opportunities associated with studying genomics of language-related traits.
  • Devaraju, K., Miskinyte, G., Hansen, M. G., Monni, E., Tornero, D., Woods, N. B., Bengzon, J., Ahlenius, H., Lindvall, O., & Kokaia, Z. (2017). Direct conversion of human fibroblasts to functional excitatory cortical neurons integrating into human neural networks. Stem Cell Research & Therapy, 8: 207. doi:10.1186/s13287-017-0658-3.

    Abstract

    Background: Human fibroblasts can be directly converted to several subtypes of neurons, but cortical projection neurons have not been generated. Methods: Here we screened for transcription factor combinations that could potentially convert human fibroblasts to functional excitatory cortical neurons. The induced cortical (iCtx) cells were analyzed for cortical neuronal identity using immunocytochemistry, single-cell quantitative polymerase chain reaction (qPCR), electrophysiology, and their ability to integrate into human neural networks in vitro and ex vivo using electrophysiology and rabies virus tracing. Results: We show that a combination of three transcription factors, BRN2, MYT1L, and FEZF2, has the ability to directly convert human fibroblasts to functional excitatory cortical neurons. The conversion efficiency was increased to about 16% by treatment with small molecules and microRNAs. The iCtx cells exhibited electrophysiological properties of functional neurons, had pyramidal-like cell morphology, and expressed key cortical projection neuronal markers. Single-cell analysis of iCtx cells revealed a complex gene expression profile, with a subpopulation displaying a molecular signature closely resembling that of human fetal primary cortical neurons. The iCtx cells received synaptic inputs from co-cultured human fetal primary cortical neurons, contained spines, and expressed the postsynaptic excitatory scaffold protein PSD95. When transplanted ex vivo to organotypic cultures of adult human cerebral cortex, the iCtx cells exhibited morphological and electrophysiological properties of mature neurons, integrated structurally into the cortical tissue, and received synaptic inputs from adult human neurons. Conclusions: Our findings indicate that functional excitatory cortical neurons, generated here for the first time by direct conversion of human somatic cells, have the capacity for synaptic integration into adult human cortex.
  • Dideriksen, C., Christiansen, M. H., Tylén, K., Dingemanse, M., & Fusaroli, R. (2023). Quantifying the interplay of conversational devices in building mutual understanding. Journal of Experimental Psychology: General, 152(3), 864-889. doi:10.1037/xge0001301.

    Abstract

    Humans readily engage in idle chat and heated discussions and negotiate tough joint decisions without ever having to think twice about how to keep the conversation grounded in mutual understanding. However, current attempts at identifying and assessing the conversational devices that make this possible are fragmented across disciplines and investigate single devices within single contexts. We present a comprehensive conceptual framework to investigate conversational devices, their relations, and how they adjust to contextual demands. In two corpus studies, we systematically test the role of three conversational devices: backchannels, repair, and linguistic entrainment. Contrasting affiliative and task-oriented conversations within participants, we find that conversational devices adaptively adjust to the increased need for precision in the latter: We show that low-precision devices such as backchannels are more frequent in affiliative conversations, whereas more costly but higher-precision mechanisms, such as specific repairs, are more frequent in task-oriented conversations. Further, task-oriented conversations involve higher complementarity of contributions in terms of content and perspective: lower semantic entrainment and less frequent (but richer) lexical and syntactic entrainment. Finally, we show that the observed variations in the use of conversational devices are potentially adaptive: pairs of interlocutors that show stronger linguistic complementarity perform better across the two tasks. By combining motivated comparisons of several conversational contexts with theoretically informed computational analyses of empirical data, the present work lays the foundations for a comprehensive conceptual framework for understanding the use of conversational devices in dialogue.
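
    One of the devices quantified above, lexical entrainment, can be operationalized in many ways; the toy measure below (word-type overlap between two speakers' turns) is our own illustrative assumption, not the metric used in the study.

```python
# Toy operationalization of lexical entrainment as the Jaccard overlap of
# word types across two speakers' turns; illustrative only.
def lexical_entrainment(turns_a, turns_b):
    types_a = {w.lower() for turn in turns_a for w in turn.split()}
    types_b = {w.lower() for turn in turns_b for w in turn.split()}
    return len(types_a & types_b) / len(types_a | types_b)

task_oriented = (["move the red block left"],
                 ["the red block to the left got it"])
affiliative = (["how was your weekend"],
               ["pretty good saw a movie"])
print(lexical_entrainment(*task_oriented))  # higher overlap
print(lexical_entrainment(*affiliative))    # lower overlap
```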
  • Dideriksen, C., Christiansen, M. H., Dingemanse, M., Højmark‐Bertelsen, M., Johansson, C., Tylén, K., & Fusaroli, R. (2023). Language‐specific constraints on conversation: Evidence from Danish and Norwegian. Cognitive Science, 47(11): e13387. doi:10.1111/cogs.13387.

    Abstract

    Establishing and maintaining mutual understanding in everyday conversations is crucial. To do so, people employ a variety of conversational devices, such as backchannels, repair, and linguistic entrainment. Here, we explore whether the use of conversational devices might be influenced by cross-linguistic differences in the speakers’ native language, comparing two matched languages—Danish and Norwegian—differing primarily in their sound structure, with Danish being more opaque, that is, less acoustically distinguished. Across systematically manipulated conversational contexts, we find that processes supporting mutual understanding in conversations vary with external constraints: across different contexts and, crucially, across languages. In accord with our predictions, linguistic entrainment was overall higher in Danish than in Norwegian, while backchannels and repairs presented a more nuanced pattern. These findings are compatible with the hypothesis that native speakers of Danish may compensate for its opaque sound structure by adopting a top-down strategy of building more conversational redundancy through entrainment, which also might reduce the need for repairs. These results suggest that linguistic differences might be met by systematic changes in language processing and use. This paves the way for further cross-linguistic investigations and critical assessment of the interplay between cultural and linguistic factors on the one hand and conversational dynamics on the other.
  • Dikshit, A. P., Mishra, C., Das, D., & Parashar, S. (2023). Frequency and temperature-dependence ZnO based fractional order capacitor using machine learning. Materials Chemistry and Physics, 307: 128097. doi:10.1016/j.matchemphys.2023.128097.

    Abstract

    This paper investigates the fractional-order behavior of ZnO ceramics at different frequencies. ZnO ceramic was prepared by the high-energy ball milling (HEBM) technique and sintered at 1300 °C to study its frequency response. The frequency response properties (impedance and phase angle) were examined with an impedance analyzer (100 Hz to 1 MHz). Constant phase angles (84°-88°) were obtained in the low temperature range (25-125 °C). The structural and morphological composition of the ZnO ceramic was investigated using X-ray diffraction and FESEM, and the Raman spectrum was studied to understand the different modes of ZnO ceramics. Machine learning (polynomial regression) models were trained on a dataset of 1280 experimental values to accurately predict the relationship of frequency and temperature with the impedance and phase values of the ZnO ceramic FOC. The predicted impedance values were in good agreement (R² ≈ 0.98, MSE ≈ 0.0711) with the experimental results. Impedance values were also predicted beyond the experimental frequency range (at 50 Hz and 2 MHz) for temperatures from 25 °C to 500 °C, and for low temperatures (10, 15, and 20 °C) within the frequency range (100 Hz to 1 MHz).

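    As a rough illustration of the regression approach described above, here is a minimal sketch with synthetic data standing in for the paper's 1280 measurements; the functional form of the synthetic impedance is invented for the example.

```python
# Sketch: polynomial regression of impedance on frequency and temperature,
# echoing the approach above. Data are synthetic stand-ins.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
freq = rng.uniform(1e2, 1e6, size=1280)      # 100 Hz to 1 MHz
temp = rng.uniform(25.0, 500.0, size=1280)   # 25 to 500 deg C
X = np.column_stack([np.log10(freq), temp])
# Invented capacitor-like response: |Z| falls with frequency, drifts with T.
z = (1.0 + 0.001 * temp) / freq**0.9

model = make_pipeline(PolynomialFeatures(degree=3), LinearRegression())
model.fit(X, np.log10(z))
print("R2:", r2_score(np.log10(z), model.predict(X)))

# Extrapolate beyond the measured band, as the study does (50 Hz, 2 MHz).
print(model.predict([[np.log10(50.0), 25.0], [np.log10(2e6), 25.0]]))
```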
  • Dikshit, A. P., Das, D., Samal, R. R., Parashar, K., Mishra, C., & Parashar, S. (2024). Optimization of (Ba1-xCax)(Ti0.9Sn0.1)O3 ceramics in X-band using Machine Learning. Journal of Alloys and Compounds, 982: 173797. doi:10.1016/j.jallcom.2024.173797.

    Abstract

    Developing efficient electromagnetic interference (EMI) shielding materials has become significantly important in recent times. This paper reports a series of (Ba1-xCax)(Ti0.9Sn0.1)O3 (BCTS) ceramics (x = 0, 0.01, 0.05, and 0.1) synthesized by the conventional method and studied for EMI shielding applications in the X-band (8-12.4 GHz). The EMI shielding properties and S-parameters (S11 and S12) of the BCTS ceramic pellets were measured across this frequency range using a Vector Network Analyser (VNA). The pellets with x = 0.05 showed a maximum total effective shielding of 46 dB, indicating good shielding behaviour for high-frequency applications. However, developing lead-free ceramics at different concentrations usually requires iterative experiments, resulting in longer development cycles and higher costs. To address this, we used a machine learning (ML) strategy to predict the EMI shielding at unmeasured concentrations and experimentally verified the concentration predicted to give the best shielding. The ML model predicted that BCTS ceramics with concentrations x = 0.06, 0.07, 0.08, and 0.09 would have higher shielding values. On experimental verification, a shielding value of 58 dB was obtained for x = 0.08, significantly higher than the best value obtained before applying the ML approach. Our results show the potential of ML to accelerate optimal material development, significantly reducing the need for repeated experimental measurements.
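
    The abstract does not name the regressor used, so the sketch below stands in with a random forest purely to show the predict-then-verify loop: fit on measured concentrations, scan unmeasured ones, and flag the most promising candidate for synthesis. All values except the published 46 dB point at x = 0.05 are invented.

```python
# Sketch of the predict-then-verify loop described above; the regressor
# choice and all values except the 46 dB point at x = 0.05 are invented.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

x_measured = np.array([[0.00], [0.01], [0.05], [0.10]])   # Ca concentration x
se_measured = np.array([30.0, 34.0, 46.0, 38.0])          # total shielding (dB)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(x_measured, se_measured)

# Scan candidate concentrations between the measured points.
candidates = np.array([[0.06], [0.07], [0.08], [0.09]])
predicted = model.predict(candidates)
best = candidates[int(np.argmax(predicted))][0]
print(f"synthesize and verify: x = {best:.2f}")
```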
  • Dimroth, C. (1998). Indiquer la portée en allemand L2: Une étude longitudinale de l'acquisition des particules de portée [Indicating scope in L2 German: A longitudinal study of the acquisition of scope particles]. AILE (Acquisition et Interaction en Langue étrangère), 11, 11-34.
  • Ding, R., Ten Oever, S., & Martin, A. E. (2024). Delta-band activity underlies referential meaning representation during pronoun resolution. Journal of Cognitive Neuroscience, 36(7), 1472-1492. doi:10.1162/jocn_a_02163.

    Abstract

    Human language offers a variety of ways to create meaning, one of which is referring to entities, objects, or events in the world. One such meaning maker is understanding to whom or to what a pronoun in a discourse refers. To understand a pronoun, the brain must access matching entities or concepts that have been encoded in memory from previous linguistic context. Models of language processing propose that internally stored linguistic concepts, accessed via exogenous cues such as the phonological input of a word, are represented as (a)synchronous activities across a population of neurons active at specific frequency bands. Converging evidence suggests that delta band activity (1–3 Hz) is involved in temporal and representational integration during sentence processing. Moreover, recent advances in the neurobiology of memory suggest that recollection engages neural dynamics similar to those which occurred during memory encoding. Integrating these two research lines, we here tested the hypothesis that the neural dynamic patterns, especially in the delta frequency range, underlying referential meaning representation would be reinstated during pronoun resolution. By leveraging neural decoding techniques (i.e., representational similarity analysis) on a magnetoencephalography data set acquired during a naturalistic story-listening task, we provide evidence that delta-band activity underlies referential meaning representation. Our findings suggest that, during spoken language comprehension, endogenous linguistic representations such as referential concepts may be proactively retrieved and represented via activation of their underlying dynamic neural patterns.
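
    To make the pattern-reinstatement logic concrete, here is a toy sketch: correlate the delta-band (1-3 Hz) spatiotemporal pattern evoked by an antecedent with the pattern at the later pronoun, against an unrelated control. All data are synthetic; a real analysis would use MEG epochs and a full representational similarity pipeline.

```python
# Toy sketch of delta-band pattern reinstatement; synthetic data only.
import numpy as np
from scipy.signal import butter, filtfilt
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
fs = 250                                          # sampling rate (Hz)
b, a = butter(4, [1.0, 3.0], btype="bandpass", fs=fs)

def delta_pattern(epoch):
    """Flattened delta-band (1-3 Hz) spatiotemporal pattern."""
    return filtfilt(b, a, epoch, axis=1).ravel()

referent = rng.normal(size=(102, fs))             # latent shared pattern
antecedent = referent + 0.5 * rng.normal(size=(102, fs))
pronoun = referent + 0.5 * rng.normal(size=(102, fs))
unrelated = rng.normal(size=(102, fs))

r_match, _ = pearsonr(delta_pattern(antecedent), delta_pattern(pronoun))
r_control, _ = pearsonr(delta_pattern(antecedent), delta_pattern(unrelated))
print(f"match r = {r_match:.2f}, control r = {r_control:.2f}")
```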
  • Dingemans, A. J. M., Hinne, M., Truijen, K. M. G., Goltstein, L., Van Reeuwijk, J., De Leeuw, N., Schuurs-Hoeijmakers, J., Pfundt, R., Diets, I. J., Den Hoed, J., De Boer, E., Coenen-Van der Spek, J., Jansen, S., Van Bon, B. W., Jonis, N., Ockeloen, C. W., Vulto-van Silfhout, A. T., Kleefstra, T., Koolen, D. A., Campeau, P. M., Palmer, E. E., Van Esch, H., Lyon, G. J., Alkuraya, F. S., Rauch, A., Marom, R., Baralle, D., Van der Sluijs, P. J., Santen, G. W. E., Kooy, R. F., Van Gerven, M. A. J., Vissers, L. E. L. M., & De Vries, B. B. A. (2023). PhenoScore quantifies phenotypic variation for rare genetic diseases by combining facial analysis with other clinical features using a machine-learning framework. Nature Genetics, 55, 1598-1607. doi:10.1038/s41588-023-01469-w.

    Abstract

    Several molecular and phenotypic algorithms exist that establish genotype–phenotype correlations, including facial recognition tools. However, no unified framework that investigates both facial data and other phenotypic data directly from individuals exists. We developed PhenoScore: an open-source, artificial intelligence-based phenomics framework, combining facial recognition technology with Human Phenotype Ontology data analysis to quantify phenotypic similarity. Here we show PhenoScore’s ability to recognize distinct phenotypic entities by establishing recognizable phenotypes for 37 of 40 investigated syndromes against clinical features observed in individuals with other neurodevelopmental disorders and show it is an improvement on existing approaches. PhenoScore provides predictions for individuals with variants of unknown significance and enables sophisticated genotype–phenotype studies by testing hypotheses on possible phenotypic (sub)groups. PhenoScore confirmed previously known phenotypic subgroups caused by variants in the same gene for SATB1, SETBP1 and DEAF1 and provides objective clinical evidence for two distinct ADNP-related phenotypes, already established functionally.

    Additional information

    supplementary information
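
    The published framework defines its own, more sophisticated pipeline; purely to illustrate the fusion idea (facial similarity combined with phenotype-term overlap), here is a hypothetical toy score with invented embeddings, HPO sets, and weighting.

```python
# Hypothetical fusion of facial-embedding similarity and HPO-term overlap;
# not the published PhenoScore pipeline.
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def jaccard(a, b):
    return len(a & b) / len(a | b) if (a | b) else 0.0

def fused_similarity(face_u, face_v, hpo_u, hpo_v, w_face=0.5):
    """Weighted blend of facial and phenotype similarity (toy version)."""
    return w_face * cosine(face_u, face_v) + (1 - w_face) * jaccard(hpo_u, hpo_v)

rng = np.random.default_rng(3)
patient_face = rng.normal(size=128)                     # invented embedding
syndrome_face = patient_face + 0.3 * rng.normal(size=128)
patient_hpo = {"HP:0001250", "HP:0000750", "HP:0001263"}  # example HPO terms
syndrome_hpo = {"HP:0001250", "HP:0001263", "HP:0000252"}

print(round(fused_similarity(patient_face, syndrome_face,
                             patient_hpo, syndrome_hpo), 2))
```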
  • Dingemanse, M., Liesenfeld, A., Rasenberg, M., Albert, S., Ameka, F. K., Birhane, A., Bolis, D., Cassell, J., Clift, R., Cuffari, E., De Jaegher, H., Dutilh Novaes, C., Enfield, N. J., Fusaroli, R., Gregoromichelaki, E., Hutchins, E., Konvalinka, I., Milton, D., Rączaszek-Leonardi, J., Reddy, V., Rossano, F., Schlangen, D., Seibt, J., Stokoe, E., Suchman, L. A., Vesper, C., Wheatley, T., & Wiltschko, M. (2023). Beyond single-mindedness: A figure-ground reversal for the cognitive sciences. Cognitive Science, 47(1): e13230. doi:10.1111/cogs.13230.

    Abstract

    A fundamental fact about human minds is that they are never truly alone: all minds are steeped in situated interaction. That social interaction matters is recognised by any experimentalist who seeks to exclude its influence by studying individuals in isolation. On this view, interaction complicates cognition. Here we explore the more radical stance that interaction co-constitutes cognition: that we benefit from looking beyond single minds towards cognition as a process involving interacting minds. All around the cognitive sciences, there are approaches that put interaction centre stage. Their diverse and pluralistic origins may obscure the fact that collectively, they harbour insights and methods that can respecify foundational assumptions and fuel novel interdisciplinary work. What might the cognitive sciences gain from stronger interactional foundations? This represents, we believe, one of the key questions for the future. Writing as a multidisciplinary collective assembled from across the classic cognitive science hexagon and beyond, we highlight the opportunity for a figure-ground reversal that puts interaction at the heart of cognition. The interactive stance is a way of seeing that deserves to be a key part of the conceptual toolkit of cognitive scientists.
  • Dingemanse, M. (2017). Brain-to-brain interfaces and the role of language in distributing agency. In N. J. Enfield, & P. Kockelman (Eds.), Distributed Agency (pp. 59-66). Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780190457204.003.0007.

    Abstract

    Brain-to-brain interfaces, in which brains are physically connected without the intervention of language, promise new ways of collaboration and communication between humans. I examine the narrow view of language implicit in current conceptions of brain-to-brain interfaces and put forward a constructive alternative, stressing the role of language in organising joint agency. Two features of language stand out as crucial: its selectivity, which provides people with much-needed filters between public words and private worlds; and its negotiability, which provides people with systematic opportunities for calibrating understanding and expressing consent and dissent. Without these checks and balances, brain-to-brain interfaces run the risk of reducing people to the level of amoebae in a slime mold; with them, they may mature to become useful extensions of human agency.
