Publications

  • Hahn, L. E., Benders, T., Fikkert, P., & Snijders, T. M. (2021). Infants’ implicit rhyme perception in child songs and its relationship with vocabulary. Frontiers in Psychology, 12: 680882. doi:10.3389/fpsyg.2021.680882.

    Abstract

    Rhyme perception is an important predictor of future literacy. Assessing rhyme abilities, however, commonly requires children to make explicit rhyme judgements on single words. Here we explored whether infants already implicitly process rhymes in natural rhyming contexts (child songs) and whether this response correlates with later vocabulary size. In a passive listening ERP study, 10.5-month-old Dutch infants were exposed to rhyming and non-rhyming child songs. Two types of rhyme effects were analysed: (1) ERPs elicited by the first rhyme occurring in each song (rhyme sensitivity) and (2) ERPs elicited by rhymes repeating after the first rhyme in each song (rhyme repetition). Only for the latter was a tentative negativity for rhymes found from 0 to 200 ms after the onset of the rhyme word. This rhyme repetition effect correlated with productive vocabulary at 18 months, but not with any other vocabulary measure (perception at 10.5 or 18 months). While awaiting future replication, the study indicates precursors of phonological awareness already during infancy, observed with ecologically valid linguistic stimuli.
  • Hahn, L. E., Benders, T., Snijders, T. M., & Fikkert, P. (2018). Infants' sensitivity to rhyme in songs. Infant Behavior and Development, 52, 130-139. doi:10.1016/j.infbeh.2018.07.002.

    Abstract

    Children’s songs often contain rhyming words at phrase endings. In this study, we investigated whether infants can already recognize this phonological pattern in songs. Earlier studies using lists of spoken words were equivocal on infants’ spontaneous processing of rhymes (Hayes, Slater, & Brown, 2000; Jusczyk, Goodman, & Baumann, 1999). Songs, however, constitute an ecologically valid rhyming stimulus, which could allow for spontaneous processing of this phonological pattern in infants. Novel children’s songs with rhyming and non-rhyming lyrics using pseudo-words were presented to 35 9-month-old Dutch infants using the Headturn Preference Procedure. Infants on average listened longer to the non-rhyming songs, with around half of the infants, however, exhibiting a preference for the rhyming songs. These results highlight that infants have the processing abilities to benefit from their natural rhyming input for the development of their phonological abilities.
  • Harmon, Z., & Kapatsinski, V. (2021). A theory of repetition and retrieval in language production. Psychological Review, 128, 1112-1144. doi:10.1037/rev0000305.

    Abstract

    Repetition appears to be part of error correction and action preparation in all domains that involve producing an action sequence. The present work contends that the ubiquity of repetition is due to its role in resolving a problem inherent to planning and retrieval of action sequences: the Problem of Retrieval. Repetitions occur when the production to perform next is not activated enough to be executed. Repetitions are helpful in this situation because the repeated action sequence activates the likely continuation. We model a corpus of natural speech using a recurrent network, with words as units of production. We show that repeated material makes upcoming words more predictable, especially when more than one word is repeated. Speakers are argued to produce multiword repetitions by using backward associations to reactivate recently produced words. The existence of multiword repetitions means that speakers must decide where to reinitiate execution from. We show that production restarts from words that have seldom occurred in a predictive preceding-word context and have often occurred utterance-initially. These results are explained by competition between preceding-context and top-down cues over the course of language learning. The proposed theory improves on structural accounts of repetition disfluencies, and integrates repetition disfluencies in language production with repetitions observed in other domains of skilled action.
  • Hartung, F., Wang, Y., Mak, M., Willems, R. M., & Chatterjee, A. (2021). Aesthetic appraisals of literary style and emotional intensity in narrative engagement are neurally dissociable. Communications Biology, 4: 1401. doi:10.1038/s42003-021-02926-0.

    Abstract

    Humans are deeply affected by stories, yet it is unclear how. In this study, we explored two aspects of aesthetic experiences during narrative engagement - literariness and narrative fluctuations in appraised emotional intensity. Independent ratings of literariness and emotional intensity of two literary stories were used to predict blood-oxygen-level-dependent signal changes in 52 listeners from an existing fMRI dataset. Literariness was associated with increased activation in brain areas linked to semantic integration (left angular gyrus, supramarginal gyrus, and precuneus), and decreased activation in bilateral middle temporal cortices, associated with semantic representations and word memory. Emotional intensity correlated with decreased activation in a bilateral frontoparietal network that is often associated with controlled attention. Our results confirm a neural dissociation in processing literary form and emotional content in stories and generate new questions about the function of and interaction between attention, social cognition, and semantic systems during literary engagement and aesthetic experiences.
  • Hasson, U., Egidi, G., Marelli, M., & Willems, R. M. (2018). Grounding the neurobiology of language in first principles: The necessity of non-language-centric explanations for language comprehension. Cognition, 180(1), 135-157. doi:10.1016/j.cognition.2018.06.018.

    Abstract

    Recent decades have ushered in tremendous progress in understanding the neural basis of language. Most of our current knowledge on language and the brain, however, is derived from lab-based experiments that are far removed from everyday language use, and that are inspired by questions originating in linguistic and psycholinguistic contexts. In this paper we argue that in order to make progress, the field needs to shift its focus to understanding the neurobiology of naturalistic language comprehension. We present here a new conceptual framework for understanding the neurobiological organization of language comprehension. This framework is non-language-centered in the computational/neurobiological constructs it identifies, and focuses strongly on context. Our core arguments address three general issues: (i) the difficulty in extending language-centric explanations to discourse; (ii) the necessity of taking context as a serious topic of study, modeling it formally and acknowledging the limitations on external validity when studying language comprehension outside context; and (iii) the tenuous status of the language network as an explanatory construct. We argue that adopting this framework means that neurobiological studies of language will be less focused on identifying correlations between brain activity patterns and mechanisms postulated by psycholinguistic theories. Instead, they will be less self-referential and increasingly more inclined towards integration of language with other cognitive systems, ultimately doing more justice to the neurobiological organization of language and how it supports language as it is used in everyday life.
  • Haun, D. B. M. (2003). What's so special about spatial cognition. De Psychonoom, 18, 3-4.
  • Havron, N., Raviv, L., & Arnon, I. (2018). Literate and preliterate children show different learning patterns in an artificial language learning task. Journal of Cultural Cognitive Science, 2, 21-33. doi:10.1007/s41809-018-0015-9.

    Abstract

    Literacy affects many aspects of cognitive and linguistic processing. Among them, it increases the salience of words as units of linguistic processing. Here, we explored the impact of literacy acquisition on children’s learning of an artificial language. Recent accounts of L1–L2 differences relate adults’ greater difficulty with language learning to their smaller reliance on multiword units. In particular, multiword units are claimed to be beneficial for learning opaque grammatical relations like grammatical gender. Since literacy impacts the reliance on words as units of processing, we ask if and how acquiring literacy may change children’s language-learning results. We looked at children’s success in learning novel noun labels relative to their success in learning article-noun gender agreement, before and after learning to read. We found that preliterate first graders were better at learning agreement (larger units) than at learning nouns (smaller units), and that the difference between the two trial types significantly decreased after these children acquired literacy. In contrast, literate third graders were as good in both trial types. These findings suggest that literacy affects not only language processing, but also leads to important differences in language learning. They support the idea that some of children’s advantage in language learning comes from their previous knowledge and experience with language—and specifically, their lack of experience with written texts.
  • Hayano, K. (2003). Self-presentation as a face-threatening act: A comparative study of self-oriented topic introduction in English and Japanese. Veritas, 24, 45-58.
  • Healthy Brain Study Consortium, Aarts, E., Akkerman, A., Altgassen, M., Bartels, R., Beckers, D., Bevelander, K., Bijleveld, E., Blaney Davidson, E., Boleij, A., Bralten, J., Cillessen, T., Claassen, J., Cools, R., Cornelissen, I., Dresler, M., Eijsvogels, T., Faber, M., Fernández, G., Figner, B., Fritsche, M., Füllbrunn, S., Gayet, S., Van Gelder, M. M. H. J., Van Gerven, M., Geurts, S., Greven, C. U., Groefsema, M., Haak, K., Hagoort, P., Hartman, Y., Van der Heijden, B., Hermans, E., Heuvelmans, V., Hintz, F., Den Hollander, J., Hulsman, A. M., Idesis, S., Jaeger, M., Janse, E., Janzing, J., Kessels, R. P. C., Karremans, J. C., De Kleijn, W., Klein, M., Klumpers, F., Kohn, N., Korzilius, H., Krahmer, B., De Lange, F., Van Leeuwen, J., Liu, H., Luijten, M., Manders, P., Manevska, K., Marques, J. P., Matthews, J., McQueen, J. M., Medendorp, P., Melis, R., Meyer, A. S., Oosterman, J., Overbeek, L., Peelen, M., Popma, J., Postma, G., Roelofs, K., Van Rossenberg, Y. G. T., Schaap, G., Scheepers, P., Selen, L., Starren, M., Swinkels, D. W., Tendolkar, I., Thijssen, D., Timmerman, H., Tutunji, R., Tuladhar, A., Veling, H., Verhagen, M., Verkroost, J., Vink, J., Vriezekolk, V., Vrijsen, J., Vyrastekova, J., Van der Wal, S., Willems, R. M., & Willemsen, A. (2021). Protocol of the Healthy Brain Study: An accessible resource for understanding the human brain and how it dynamically and individually operates in its bio-social context. PLoS One, 16(12): e0260952. doi:10.1371/journal.pone.0260952.

    Abstract

    The endeavor to understand the human brain has seen more progress in the last few decades than in the previous two millennia. Still, our understanding of how the human brain relates to behavior in the real world and how this link is modulated by biological, social, and environmental factors is limited. To address this, we designed the Healthy Brain Study (HBS), an interdisciplinary, longitudinal, cohort study based on multidimensional, dynamic assessments in both the laboratory and the real world. Here, we describe the rationale and design of the currently ongoing HBS. The HBS is examining a population-based sample of 1,000 healthy participants (age 30-39) who are thoroughly studied across an entire year. Data are collected through cognitive, affective, behavioral, and physiological testing, neuroimaging, bio-sampling, questionnaires, ecological momentary assessment, and real-world assessments using wearable devices. These data will become an accessible resource for the scientific community enabling the next step in understanding the human brain and how it dynamically and individually operates in its bio-social context. An access procedure to the collected data and bio-samples is in place and published on https://www.healthybrainstudy.nl/en/data-and-methods.

    https://www.trialregister.nl/trial/7955

  • Hebebrand, J., Peters, T., Schijven, D., Hebebrand, M., Grasemann, C., Winkler, T. W., Heid, I. M., Antel, J., Föcker, M., Tegeler, L., Brauner, L., Adan, R. A., Luykx, J. J., Correll, C. U., König, I. R., Hinney, A., & Libuda, L. (2018). The role of genetic variation of human metabolism for BMI, mental traits and mental disorders. Molecular Metabolism, 12, 1-11. doi:10.1016/j.molmet.2018.03.015.

    Abstract

    Objective
    The aim was to assess whether loci associated with metabolic traits also have a significant role in BMI and mental traits/disorders.
    Methods
    We first assessed the number of single nucleotide polymorphisms (SNPs) with genome-wide significance for human metabolism (NHGRI-EBI Catalog). These 516 SNPs (216 independent loci) were looked up in genome-wide association studies for association with body mass index (BMI) and the mental traits/disorders educational attainment, neuroticism, schizophrenia, well-being, anxiety, depressive symptoms, major depressive disorder, autism-spectrum disorder, attention-deficit/hyperactivity disorder, Alzheimer's disease, bipolar disorder, aggressive behavior, and internalizing problems. A strict significance threshold of p < 6.92 × 10⁻⁶ was based on the correction for 516 SNPs and all 14 phenotypes, a second less conservative threshold (p < 9.69 × 10⁻⁵) on the correction for the 516 SNPs only (a worked version of this correction is sketched after the abstract).
    Results
    19 SNPs located in nine independent loci revealed p-values < 6.92 × 10⁻⁶; the less strict criterion was met by 41 SNPs in 24 independent loci. BMI and schizophrenia showed the most pronounced genetic overlap with human metabolism with three loci each meeting the strict significance threshold. Overall, genetic variation associated with estimated glomerular filtration rate showed up frequently; single metabolite SNPs were associated with more than one phenotype. Replications in independent samples were obtained for BMI and educational attainment.
    Conclusions
    Approximately 5–10% of the regions involved in the regulation of blood/urine metabolite levels seem to also play a role in BMI and mental traits/disorders and related phenotypes. If validated in metabolomic studies of the respective phenotypes, the associated blood/urine metabolites may enable novel preventive and therapeutic strategies.
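    The two thresholds above follow from a simple Bonferroni-style correction. Assuming a family-wise alpha of 0.05 (an assumption; the abstract does not state the alpha level), the arithmetic can be checked in a couple of lines of Python:

    # Hedged sketch: reproduce the two significance thresholds assuming a
    # Bonferroni correction at alpha = 0.05 (the alpha value is an assumption).
    alpha = 0.05
    n_snps, n_phenotypes = 516, 14
    strict = alpha / (n_snps * n_phenotypes)   # ~6.92e-06: SNPs x phenotypes
    lenient = alpha / n_snps                   # ~9.69e-05: SNPs only
    print(f"{strict:.2e}  {lenient:.2e}")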
  • Heesen, R., Fröhlich, M., Sievers, C., Woensdregt, M., & Dingemanse, M. (2022). Coordinating social action: A primer for the cross-species investigation of communicative repair. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 377(1859): 20210110. doi:10.1098/rstb.2021.0110.

    Abstract

    Human joint action is inherently cooperative, manifested in the collaborative efforts of participants to minimize communicative trouble through interactive repair. Although interactive repair requires sophisticated cognitive abilities, it can be dissected into basic building blocks shared with non-human animal species. A review of the primate literature shows that interactionally contingent signal sequences are at least common among species of nonhuman great apes, suggesting a gradual evolution of repair. To pioneer a cross-species assessment of repair, this paper aims at (i) identifying necessary precursors of human interactive repair; (ii) proposing a coding framework for its comparative study in humans and non-human species; and (iii) using this framework to analyse examples of interactions of humans (adults/children) and non-human great apes. We hope this paper will serve as a primer for cross-species comparisons of communicative breakdowns and how they are repaired.
  • Heidlmayr, K., Ferragne, E., & Isel, F. (2021). Neuroplasticity in the phonological system: The PMN and the N400 as markers for the perception of non-native phonemic contrasts by late second language learners. Neuropsychologia, 156: 107831. doi:10.1016/j.neuropsychologia.2021.107831.

    Abstract

    Second language (L2) learners frequently encounter persistent difficulty in perceiving certain non-native sound contrasts, i.e., a phenomenon called “phonological deafness”. However, if extensive L2 experience leads to neuroplastic changes in the phonological system, then the capacity to discriminate non-native phonemic contrasts should progressively improve. Such perceptual changes should be attested by modifications at the neurophysiological level. We designed an EEG experiment in which the listeners’ perceptual capacities to discriminate second language phonemic contrasts influence the processing of lexical-semantic violations. Semantic congruency of critical words in a sentence context was driven by a phonemic contrast that was unique to the L2, English (e.g.,/ɪ/-/i:/, ship – sheep). Twenty-eight young adult native speakers of French with intermediate proficiency in English listened to sentences that contained either a semantically congruent or incongruent critical word (e.g., The anchor of the ship/*sheep was let down) while EEG was recorded. Three ERP effects were found to relate to increasing L2 proficiency: (1) a left frontal auditory N100 effect, (2) a smaller fronto-central phonological mismatch negativity (PMN) effect and (3) a semantic N400 effect. No effect of proficiency was found on oscillatory markers. The current findings suggest that neuronal plasticity in the human brain allows for the late acquisition of even hard-wired linguistic features such as the discrimination of phonemic contrasts in a second language. This is the first time that behavioral and neurophysiological evidence for the critical role of neural plasticity underlying L2 phonological processing and its interdependence with semantic processing has been provided. Our data strongly support the idea that pieces of information from different levels of linguistic processing (e.g., phonological, semantic) strongly interact and influence each other during online language processing.

  • Heilbron, M., Armeni, K., Schoffelen, J.-M., Hagoort, P., & De Lange, F. P. (2022). A hierarchy of linguistic predictions during natural language comprehension. Proceedings of the National Academy of Sciences of the United States of America, 119(32): e2201968119. doi:10.1073/pnas.2201968119.

    Abstract

    Understanding spoken language requires transforming ambiguous acoustic streams into a hierarchy of representations, from phonemes to meaning. It has been suggested that the brain uses prediction to guide the interpretation of incoming input. However, the role of prediction in language processing remains disputed, with disagreement about both the ubiquity and representational nature of predictions. Here, we address both issues by analyzing brain recordings of participants listening to audiobooks, and using a deep neural network (GPT-2) to precisely quantify contextual predictions. First, we establish that brain responses to words are modulated by ubiquitous predictions. Next, we disentangle model-based predictions into distinct dimensions, revealing dissociable neural signatures of predictions about syntactic category (parts of speech), phonemes, and semantics. Finally, we show that high-level (word) predictions inform low-level (phoneme) predictions, supporting hierarchical predictive processing. Together, these results underscore the ubiquity of prediction in language processing, showing that the brain spontaneously predicts upcoming language at multiple levels of abstraction.
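    The abstract above describes using GPT-2 to quantify contextual predictions. A minimal sketch of that step, assuming the standard Hugging Face transformers interface rather than the authors' actual pipeline (the example sentence is invented), reads each word's predictability off as surprisal, the negative log-probability of the word given its preceding context:

    # Hedged sketch: per-token surprisal from GPT-2 (not the authors' code).
    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tok = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

    ids = tok("The brain spontaneously predicts upcoming language",
              return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits              # shape (1, seq_len, vocab)

    # Surprisal of each token given its preceding context: -log p(token | context).
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    targets = ids[0, 1:]
    surprisal = -log_probs[torch.arange(targets.shape[0]), targets]
    for token, s in zip(tok.convert_ids_to_tokens(targets.tolist()), surprisal):
        print(f"{token:>12}  {s.item():5.2f} nats")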

  • Henry, M. J., Cook, P. F., de Reus, K., Nityananda, V., Rouse, A. A., & Kotz, S. A. (2021). An ecological approach to measuring synchronization abilities across the animal kingdom. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 376: 20200336. doi:10.1098/rstb.2020.0336.

    Abstract

    In this perspective paper, we focus on the study of synchronization abilities across the animal kingdom. We propose an ecological approach to studying nonhuman animal synchronization that begins from observations about when, how and why an animal might synchronize spontaneously with natural environmental rhythms. We discuss what we consider to be the most important, but thus far largely understudied, temporal, physical, perceptual and motivational constraints that must be taken into account when designing experiments to test synchronization in nonhuman animals. First and foremost, different species are likely to be sensitive to and therefore capable of synchronizing at different timescales. We also argue that it is fruitful to consider the latent flexibility of animal synchronization. Finally, we discuss the importance of an animal's motivational state for showcasing synchronization abilities. We demonstrate that the likelihood that an animal can successfully synchronize with an environmental rhythm is context-dependent and suggest that the list of species capable of synchronization is likely to grow when tested with ecologically honest, species-tuned experiments.
  • Heritage, J., & Stivers, T. (1999). Online commentary in acute medical visits: A method of shaping patient expectations. Social Science and Medicine, 49(11), 1501-1517. doi:10.1016/S0277-9536(99)00219-1.
  • Hersh, T. A., Dimond, A. L., Ruth, B. A., Lupica, N. V., Bruce, J. C., Kelley, J. M., King, B. L., & Lutton, B. V. (2018). A role for the CXCR4-CXCL12 axis in the little skate, Leucoraja erinacea. American Journal of Physiology-Regulatory, Integrative and Comparative Physiology, 315, R218-R229. doi:10.1152/ajpregu.00322.2017.

    Abstract

    The interaction between C-X-C chemokine receptor type 4 (CXCR4) and its cognate ligand C-X-C motif chemokine ligand 12 (CXCL12) plays a critical role in regulating hematopoietic stem cell activation and subsequent cellular mobilization. Extensive studies of these genes have been conducted in mammals, but much less is known about the expression and function of CXCR4 and CXCL12 in non-mammalian vertebrates. In the present study, we identify simultaneous expression of CXCR4 and CXCL12 orthologs in the epigonal organ (the primary hematopoietic tissue) of the little skate, Leucoraja erinacea. Genetic and phylogenetic analyses were functionally supported by significant mobilization of leukocytes following administration of Plerixafor, a CXCR4 antagonist and clinically important drug. Our results provide evidence that, as in humans, Plerixafor disrupts CXCR4/CXCL12 binding in the little skate, facilitating release of leukocytes into the bloodstream. Our study illustrates the value of the little skate as a model organism, particularly in studies of hematopoiesis and potentially for preclinical research on hematological and vascular disorders.

  • Hersh, T. A., Gero, S., Rendell, L., & Whitehead, H. (2021). Using identity calls to detect structure in acoustic datasets. Methods in Ecology and Evolution, 12(9), 1668-1678. doi:10.1111/2041-210X.13644.

    Abstract

    1. Acoustic analyses can be powerful tools for illuminating structure within and between populations, especially for cryptic or difficult to access taxa. Acoustic repertoires are often compared using aggregate similarity measures across all calls of a particular type, but specific group identity calls may more clearly delineate structure in some taxa.
    2. We present a new method—the identity call method—that estimates the number of acoustically distinct subdivisions in a set of repertoires and identifies call types that characterize those subdivisions. The method uses contaminated mixture models to identify call types, assigning each call a probability of belonging to each type. Repertoires are hierarchically clustered based on similarities in call type usage, producing a dendrogram with ‘identity clades’ of repertoires and the ‘identity calls’ that best characterize each clade. We validated this approach using acoustic data from sperm whales, grey-breasted wood-wrens and Australian field crickets, and ran a suite of tests to assess parameter sensitivity (a schematic sketch of the clustering step follows the abstract).
    3. For all taxa, the method detected diagnostic signals (identity calls) and structure (identity clades; sperm whale subpopulations, wren subspecies and cricket species) that were consistent with past research. Some datasets were more sensitive to parameter variation than others, which may reflect real uncertainty or biological variability in the taxa examined. We recommend that users perform comparative analyses of different parameter combinations to determine which portions of the dendrogram warrant careful versus confident interpretation.
    4. The presence of group-characteristic identity calls does not necessarily mean animals perceive them as such. Fine-scale experiments like playbacks are a key next step to understand call perception and function. This method can help inform such studies by identifying calls that may be salient to animals and are good candidates for investigation or playback stimuli. For cryptic or difficult to access taxa with group-specific calls, the identity call method can aid managers in quantifying behavioural diversity and/or identifying putative structure within and between populations, given that acoustic data can be inexpensive and minimally invasive to collect.
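    As a rough illustration of the clustering step in point 2 above, the sketch below hierarchically clusters repertoires by call-type usage. It is schematic only: the call types, usage proportions, distance metric and linkage choice are invented for illustration, and the paper's contaminated mixture models are not reproduced.

    # Hedged sketch: cluster repertoires by call-type usage (illustrative data).
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.spatial.distance import pdist

    # Hypothetical usage matrix: rows = repertoires, columns = call types,
    # values = proportion of calls assigned to each type (in the paper these
    # assignments come from contaminated mixture models).
    usage = np.array([
        [0.70, 0.20, 0.10, 0.00],
        [0.65, 0.25, 0.10, 0.00],
        [0.05, 0.10, 0.60, 0.25],
        [0.10, 0.05, 0.55, 0.30],
    ])

    # Pairwise dissimilarity between repertoires, then average-linkage clustering;
    # the resulting tree groups repertoires into candidate "identity clades".
    tree = linkage(pdist(usage, metric="cityblock"), method="average")
    clades = fcluster(tree, t=2, criterion="maxclust")
    print(clades)   # e.g. [1 1 2 2]: two acoustically distinct subdivisions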
  • Hersh, T. A., Gero, S., Rendell, L., Cantor, M., Weilgart, L., Amano, M., Dawson, S. M., Slooten, E., Johnson, C. M., Kerr, I., Payne, R., Rogan, A., Antunes, R., Andrews, O., Ferguson, E. L., Hom-Weaver, C. A., Norris, T. F., Barkley, Y. M., Merkens, K. P., Oleson, E. M. and 7 moreHersh, T. A., Gero, S., Rendell, L., Cantor, M., Weilgart, L., Amano, M., Dawson, S. M., Slooten, E., Johnson, C. M., Kerr, I., Payne, R., Rogan, A., Antunes, R., Andrews, O., Ferguson, E. L., Hom-Weaver, C. A., Norris, T. F., Barkley, Y. M., Merkens, K. P., Oleson, E. M., Doniol-Valcroze, T., Pilkington, J. F., Gordon, J., Fernandes, M., Guerra, M., Hickmott, L., & Whitehead, H. (2022). Evidence from sperm whale clans of symbolic marking in non-human cultures. Proceedings of the National Academy of Sciences of the United States of America, 119(37): e2201692119. doi:10.1073/pnas.2201692119.

    Abstract

    Culture, a pillar of the remarkable ecological success of humans, is increasingly recognized as a powerful force structuring nonhuman animal populations. A key gap between these two types of culture is quantitative evidence of symbolic markers—seemingly arbitrary traits that function as reliable indicators of cultural group membership to conspecifics. Using acoustic data collected from 23 Pacific Ocean locations, we provide quantitative evidence that certain sperm whale acoustic signals exhibit spatial patterns consistent with a symbolic marker function. Culture segments sperm whale populations into behaviorally distinct clans, which are defined based on dialects of stereotyped click patterns (codas). We classified 23,429 codas into types using contaminated mixture models and hierarchically clustered coda repertoires into seven clans based on similarities in coda usage; then we evaluated whether coda usage varied with geographic distance within clans or with spatial overlap between clans. Similarities in within-clan usage of both “identity codas” (coda types diagnostic of clan identity) and “nonidentity codas” (coda types used by multiple clans) decrease as space between repertoire recording locations increases. However, between-clan similarity in identity, but not nonidentity, coda usage decreases as clan spatial overlap increases. This matches expectations if sympatry is related to a measurable pressure to diversify to make cultural divisions sharper, thereby providing evidence that identity codas function as symbolic markers of clan identity. Our study provides quantitative evidence of arbitrary traits, resembling human ethnic markers, conveying cultural identity outside of humans, and highlights remarkable similarities in the distributions of human ethnolinguistic groups and sperm whale clans.
  • Hervais-Adelman, A., Kumar, U., Mishra, R., Tripathi, V., Guleria, A., Singh, J. P., & Huettig, F. (2022). How does literacy affect speech processing? Not by enhancing cortical responses to speech, but by promoting connectivity of acoustic-phonetic and graphomotor cortices. Journal of Neuroscience, 42(47), 8826-8841. doi:10.1523/JNEUROSCI.1125-21.2022.

    Abstract

    Previous research suggests that literacy, specifically learning alphabetic letter-to-phoneme mappings, modifies online speech processing, and enhances brain responses, as indexed by the blood-oxygenation level dependent signal (BOLD), to speech in auditory areas associated with phonological processing (Dehaene et al., 2010). However, alphabets are not the only orthographic systems in use in the world, and hundreds of millions of individuals speak languages that are not written using alphabets. In order to make claims that literacy per se has broad and general consequences for brain responses to speech, one must seek confirmatory evidence from non-alphabetic literacy. To this end, we conducted a longitudinal fMRI study in India probing the effect of literacy in Devanagari, an abugida, on functional connectivity and cerebral responses to speech in 91 variously literate Hindi-speaking male and female human participants. Twenty-two completely illiterate participants underwent six months of reading and writing training. Devanagari literacy increases functional connectivity between acoustic-phonetic and graphomotor brain areas, but we find no evidence that literacy changes brain responses to speech, either in cross-sectional or longitudinal analyses. These findings show that a dramatic reconfiguration of the neurofunctional substrates of online speech processing may not be a universal result of learning to read, and suggest that the influence of writing on speech processing should also be investigated.
  • Hervais-Adelman, A., Egorova, N., & Golestani, N. (2018). Beyond bilingualism: Multilingual experience correlates with caudate volume. Brain Structure and Function, 223(7), 3495-3502. doi:10.1007/s00429-018-1695-0.

    Abstract

    The multilingual brain implements mechanisms that serve to select the appropriate language as a function of the communicative environment. Engaging these mechanisms on a regular basis appears to have consequences for brain structure and function. Studies have implicated the caudate nuclei as important nodes in polyglot language control processes, and have also shown structural differences in the caudate nuclei in bilingual compared to monolingual populations. However, the majority of published work has focused on the categorical differences between monolingual and bilingual individuals, and little is known about whether these findings extend to multilingual individuals, who have even greater language control demands. In the present paper, we present an analysis of the volume and morphology of the caudate nuclei, putamen, pallidum and thalami in 75 multilingual individuals who speak three or more languages. Volumetric analyses revealed a significant relationship between multilingual experience and right caudate volume, as well as a marginally significant relationship with left caudate volume. Vertex-wise analyses revealed a significant enlargement of dorsal and anterior portions of the left caudate nucleus, known to have connectivity with executive brain regions, as a function of multilingual expertise. These results suggest that multilingual expertise might exercise a continuous impact on brain structure, and that as additional languages beyond a second are acquired, the additional demands for linguistic and cognitive control result in modifications to brain structures associated with language management processes.
  • Hervais-Adelman, A., Moser-Mercer, B., & Golestani, N. (2018). Commentary: Broca pars triangularis constitutes a “hub” of the language-control network during simultaneous language translation. Frontiers in Human Neuroscience, 12: 22. doi:10.3389/fnhum.2018.00022.

    Abstract

    A commentary on
    Broca Pars Triangularis Constitutes a “Hub” of the Language-Control Network during Simultaneous Language Translation

    by Elmer, S. (2016). Front. Hum. Neurosci. 10:491. doi: 10.3389/fnhum.2016.00491

    Elmer (2016) conducted an fMRI investigation of “simultaneous language translation” in five participants. The article presents group and individual analyses of German-to-Italian and Italian-to-German translation, confined to a small set of anatomical regions previously reported to be involved in multilingual control. Here we take the opportunity to discuss concerns regarding certain aspects of the study.
  • Heyne, H. O., Singh, T., Stamberger, H., Jamra, R. A., Caglayan, H., Craiu, D., Guerrini, R., Helbig, K. L., Koeleman, B. P. C., Kosmicki, J. A., Linnankivi, T., May, P., Muhle, H., Møller, R. S., Neubauer, B. A., Palotie, A., Pendziwiat, M., Striano, P., Tang, S., Wu, S., EuroEPINOMICS RES Consortium, De Kovel, C. G. F., Poduri, A., Weber, Y. G., Weckhuysen, S., Sisodiya, S. M., Daly, M. J., Helbig, I., Lal, D., & Lemke, J. R. (2018). De novo variants in neurodevelopmental disorders with epilepsy. Nature Genetics, 50, 1048-1053. doi:10.1038/s41588-018-0143-7.

    Abstract

    Epilepsy is a frequent feature of neurodevelopmental disorders (NDDs), but little is known about genetic differences between NDDs with and without epilepsy. We analyzed de novo variants (DNVs) in 6,753 parent–offspring trios ascertained to have different NDDs. In the subset of 1,942 individuals with NDDs with epilepsy, we identified 33 genes with a significant excess of DNVs, of which SNAP25 and GABRB2 had previously only limited evidence of disease association. Joint analysis of all individuals with NDDs also implicated CACNA1E as a novel disease-associated gene. Comparing NDDs with and without epilepsy, we found missense DNVs, DNVs in specific genes, age of recruitment, and severity of intellectual disability to be associated with epilepsy. We further demonstrate the extent to which our results affect current genetic testing as well as treatment, emphasizing the benefit of accurate genetic diagnosis in NDDs with epilepsy.
  • Heyselaar, E., Mazaheri, A., Hagoort, P., & Segaert, K. (2018). Changes in alpha activity reveal that social opinion modulates attention allocation during face processing. NeuroImage, 174, 432-440. doi:10.1016/j.neuroimage.2018.03.034.

    Abstract

    Participants’ performance differs when conducting a task in the presence of a secondary individual; moreover, the opinion the participant has of this individual also plays a role. Using EEG, we investigated how previous interactions with, and evaluations of, an avatar in virtual reality subsequently influenced attentional allocation to the face of that avatar. We focused on changes in the alpha activity as an index of attentional allocation. We found that the onset of an avatar’s face whom the participant had developed a rapport with induced greater alpha suppression. This suggests greater attentional resources are allocated to the interacted-with avatars. The evaluative ratings of the avatar induced a U-shaped change in alpha suppression, such that participants paid most attention when the avatar was rated as average. These results suggest that attentional allocation is an important element of how behaviour is altered in the presence of a secondary individual and is modulated by our opinion of that individual.

  • Heyselaar, E., Peeters, D., & Hagoort, P. (2021). Do we predict upcoming speech content in naturalistic environments? Language, Cognition and Neuroscience, 36(4), 440-461. doi:10.1080/23273798.2020.1859568.

    Abstract

    The ability to predict upcoming actions is a hallmark of cognition. It remains unclear, however, whether the predictive behaviour observed in controlled lab environments generalises to rich, everyday settings. In four virtual reality experiments, we tested whether a well-established marker of linguistic prediction (anticipatory eye movements) replicated when increasing the naturalness of the paradigm by means of immersing participants in naturalistic scenes (Experiment 1), increasing the number of distractor objects (Experiment 2), modifying the proportion of predictable noun-referents (Experiment 3), and manipulating the location of referents relative to the joint attentional space (Experiment 4). Robust anticipatory eye movements were observed for Experiments 1–3. The anticipatory effect disappeared, however, in Experiment 4. Our findings suggest that predictive processing occurs in everyday communication if the referents are situated in the joint attentional space. Methodologically, our study confirms that ecological validity and experimental control may go hand-in-hand in the study of human predictive behaviour.
  • Hickman, L. J., Keating, C. T., Ferrari, A., & Cook, J. L. (2022). Skin conductance as an index of alexithymic traits in the general population. Psychological Reports, 125(3), 1363-1379. doi:10.1177/00332941211005118.

    Abstract

    Alexithymia concerns a difficulty identifying and communicating one’s own emotions, and a tendency towards externally-oriented thinking. Recent work argues that such alexithymic traits are due to altered arousal response and poor subjective awareness of “objective” arousal responses. Although there are individual differences within the general population in identifying and describing emotions, extant research has focused on highly alexithymic individuals. Here we investigated whether mean arousal and concordance between subjective and objective arousal underpin individual differences in alexithymic traits in a general population sample. Participants rated subjective arousal responses to 60 images from the International Affective Picture System whilst their skin conductance was recorded. The Autism Quotient was employed to control for autistic traits in the general population. Analysis using linear models demonstrated that mean arousal significantly predicted Toronto Alexithymia Scale scores above and beyond autistic traits, but concordance scores did not. This indicates that, whilst objective arousal is a useful predictor in populations that are both above and below the cut-off values for alexithymia, concordance scores between objective and subjective arousal do not predict variation in alexithymic traits in the general population.
  • Hilverman, C., Clough, S., Duff, M. C., & Cook, S. W. (2018). Patients with hippocampal amnesia successfully integrate gesture and speech. Neuropsychologia, 117, 332-338. doi:10.1016/j.neuropsychologia.2018.06.012.

    Abstract

    During conversation, people integrate information from co-speech hand gestures with information in spoken language. For example, after hearing the sentence, "A piece of the log flew up and hit Carl in the face" while viewing a gesture directed at the nose, people tend to later report that the log hit Carl in the nose (information only in gesture) rather than in the face (information in speech). The cognitive and neural mechanisms that support the integration of gesture with speech are unclear. One possibility is that the hippocampus, known for its role in relational memory and information integration, is necessary for integrating gesture and speech. To test this possibility, we examined how patients with hippocampal amnesia and healthy and brain-damaged comparison participants express information from gesture in a narrative retelling task. Participants watched videos of an experimenter telling narratives that included hand gestures that contained supplementary information. Participants were asked to retell the narratives and their spoken retellings were assessed for the presence of information from gesture. For features that had been accompanied by supplementary gesture, patients with amnesia retold fewer of these features overall and produced fewer retellings that matched the speech from the narrative. Yet their retellings included features that contained information that had been present uniquely in gesture in amounts that were not reliably different from comparison groups. Thus, a functioning hippocampus is not necessary for gesture-speech integration over short timescales. Providing unique information in gesture may enhance communication for individuals with declarative memory impairment, possibly via non-declarative memory mechanisms.
  • Hoeksema, N., Verga, L., Mengede, J., Van Roessel, C., Villanueva, S., Salazar-Casals, A., Rubio-Garcia, A., Curcic-Blake, B., Vernes, S. C., & Ravignani, A. (2021). Neuroanatomy of the grey seal brain: Bringing pinnipeds into the neurobiological study of vocal learning. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 376: 20200252. doi:10.1098/rstb.2020.0252.

    Abstract

    Comparative studies of vocal learning and vocal non-learning animals can increase our understanding of the neurobiology and evolution of vocal learning and human speech. Mammalian vocal learning is understudied: most research has either focused on vocal learning in songbirds or its absence in non-human primates. Here we focus on a highly promising model species for the neurobiology of vocal learning: grey seals. We provide a neuroanatomical atlas (based on dissected brain slices and magnetic resonance images), a labelled MRI template, a 3D model with volumetric measurements of brain regions, and histological cortical stainings. Four main features of the grey seal brain stand out. (1) It is relatively big and highly convoluted. (2) It hosts a relatively large temporal lobe and cerebellum, structures which could support developed timing abilities and acoustic processing. (3) The cortex is similar to humans in thickness and shows the expected six-layered mammalian structure. (4) Expression of FoxP2 - a gene involved in vocal learning and spoken language - is present in deeper layers of the cortex. Our results could facilitate future studies targeting the neural and genetic underpinnings of mammalian vocal learning, thus bridging the research gap from songbirds to humans and non-human primates.
  • Hoey, E. (2018). How speakers continue with talk after a lapse in conversation. Research on Language and Social Interaction, 51(3), 329-346. doi:10.1080/08351813.2018.1485234.

    Abstract

    How do conversational participants continue with turn-by-turn talk after a momentary lapse? If all participants forgo the option to speak at possible sequence completion, an extended silence may emerge that can indicate a lack of anything to talk about next. For the interaction to proceed recognizably as a conversation, the postlapse turn needs to implicate more talk. Using conversation analysis, I examine three practical alternatives regarding sequentially implicative postlapse turns: Participants may move to end the interaction, continue with some prior matter, or start something new. Participants are shown using resources grounded in the interaction’s overall structural organization, the materials from the interaction-so-far, the mentionables they bring to interaction, and the situated environment itself. Comparing these alternatives, there’s suggestive quantitative evidence for a preference for continuation. The analysis of lapse resolution shows lapses as places for the management of multiple possible courses of action. Data are in U.S. and UK English.
  • Hoey, E., Hömke, P., Löfgren, E., Neumann, T., Schuerman, W. L., & Kendrick, K. H. (2021). Using expletive insertion to pursue and sanction in interaction. Journal of Sociolinguistics, 25(1), 3-25. doi:10.1111/josl.12439.

    Abstract

    This article uses conversation analysis to examine constructions like who the fuck is that—sequence‐initiating actions into which an expletive like the fuck has been inserted. We describe how this turn‐constructional practice fits into and constitutes a recurrent sequence of escalating actions. In this sequence, it is used to pursue an adequate response after an inadequate one was given, and sanction the recipient for that inadequate response. Our analysis contributes to sociolinguistic studies of swearing by offering an account of swearing as a resource for social action.
  • Holler, J., Drijvers, L., Rafiee, A., & Majid, A. (2022). Embodied space-pitch associations are shaped by language. Cognitive Science, 46(2): e13083. doi:10.1111/cogs.13083.

    Abstract

    Height-pitch associations are claimed to be universal and independent of language, but this claim remains controversial. The present study sheds new light on this debate with a multimodal analysis of individual sound and melody descriptions obtained in an interactive communication paradigm with speakers of Dutch and Farsi. The findings reveal that, in contrast to Dutch speakers, Farsi speakers do not use a height-pitch metaphor consistently in speech. Both Dutch and Farsi speakers’ co-speech gestures did reveal a mapping of higher pitches to higher space and lower pitches to lower space, and this gesture space-pitch mapping tended to co-occur with corresponding spatial words (high-low). However, this mapping was much weaker in Farsi speakers than Dutch speakers. This suggests that cross-linguistic differences shape the conceptualization of pitch and further calls into question the universality of height-pitch associations.

  • Holler, J. (2022). Visual bodily signals as core devices for coordinating minds in interaction. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 377(1859): 20210094. doi:10.1098/rstb.2021.0094.

    Abstract

    The view put forward here is that visual bodily signals play a core role in human communication and the coordination of minds. Critically, this role goes far beyond referential and propositional meaning. The human communication system that we consider to be the explanandum in the evolution of language thus is not spoken language. It is, instead, a deeply multimodal, multilayered, multifunctional system that developed—and survived—owing to the extraordinary flexibility and adaptability that it endows us with. Beyond their undisputed iconic power, visual bodily signals (manual and head gestures, facial expressions, gaze, torso movements) fundamentally contribute to key pragmatic processes in modern human communication. This contribution becomes particularly evident with a focus that includes non-iconic manual signals, non-manual signals and signal combinations. Such a focus also needs to consider meaning encoded not just via iconic mappings, since kinematic modulations and interaction-bound meaning are additional properties equipping the body with striking pragmatic capacities. Some of these capacities, or its precursors, may have already been present in the last common ancestor we share with the great apes and may qualify as early versions of the components constituting the hypothesized interaction engine.
  • Holler, J., Bavelas, J., Woods, J., Geiger, M., & Simons, L. (2022). Given-new effects on the duration of gestures and of words in face-to-face dialogue. Discourse Processes, 59(8), 619-645. doi:10.1080/0163853X.2022.2107859.

    Abstract

    The given-new contract entails that speakers must distinguish for their addressee whether references are new or already part of their dialogue. Past research had found that, in a monologue to a listener, speakers shortened repeated words. However, the notion of the given-new contract is inherently dialogic, with an addressee and the availability of co-speech gestures. Here, two face-to-face dialogue experiments tested whether gesture duration also follows the given-new contract. In Experiment 1, four experimental sequences confirmed that when speakers repeated their gestures, they shortened the duration significantly. Experiment 2 replicated the effect with spontaneous gestures in a different task. This experiment also extended earlier results with words, confirming that speakers shortened their repeated words significantly in a multimodal dialogue setting, the basic form of language use. Because words and gestures were not necessarily redundant, these results offer another instance in which gestures and words independently serve pragmatic requirements of dialogue.
  • Holler, J., & Beattie, G. (2003). How iconic gestures and speech interact in the representation of meaning: Are both aspects really integral to the process? Semiotica, 146, 81-116.
  • Holler, J., Kendrick, K. H., & Levinson, S. C. (2018). Processing language in face-to-face conversation: Questions with gestures get faster responses. Psychonomic Bulletin & Review, 25(5), 1900-1908. doi:10.3758/s13423-017-1363-z.

    Abstract

    The home of human language use is face-to-face interaction, a context in which communicative exchanges are characterised not only by bodily signals accompanying what is being said but also by a pattern of alternating turns at talk. This transition between turns is astonishingly fast—typically a mere 200-ms elapse between a current and a next speaker’s contribution—meaning that comprehending, producing, and coordinating conversational contributions in time is a significant challenge. This begs the question of whether the additional information carried by bodily signals facilitates or hinders language processing in this time-pressured environment. We present analyses of multimodal conversations revealing that bodily signals appear to profoundly influence language processing in interaction: Questions accompanied by gestures lead to shorter turn transition times—that is, to faster responses—than questions without gestures, and responses come earlier when gestures end before compared to after the question turn has ended. These findings hold even after taking into account prosodic patterns and other visual signals, such as gaze. The empirical findings presented here provide a first glimpse of the role of the body in the psycholinguistic processes underpinning human communication.
  • Holler, J., & Beattie, G. (2003). Pragmatic aspects of representational gestures: Do speakers use them to clarify verbal ambiguity for the listener? Gesture, 3, 127-154.
  • Holler, J., Alday, P. M., Decuyper, C., Geiger, M., Kendrick, K. H., & Meyer, A. S. (2021). Competition reduces response times in multiparty conversation. Frontiers in Psychology, 12: 693124. doi:10.3389/fpsyg.2021.693124.

    Abstract

    Natural conversations are characterized by short transition times between turns. This holds in particular for multi-party conversations. The short turn transitions in everyday conversations contrast sharply with the much longer speech onset latencies observed in laboratory studies where speakers respond to spoken utterances. There are many factors that facilitate speech production in conversational compared to laboratory settings. Here we highlight one of them, the impact of competition for turns. In multi-party conversations, speakers often compete for turns. In quantitative corpus analyses of multi-party conversation, the fastest response determines the recorded turn transition time. In contrast, in dyadic conversations such competition for turns is much less likely to arise, and in laboratory experiments with individual participants it does not arise at all. Therefore, all responses tend to be recorded. Thus, competition for turns may reduce the recorded mean turn transition times in multi-party conversations for a simple statistical reason: slow responses are not included in the means. We report two studies illustrating this point. We first report the results of simulations showing how much the response times in a laboratory experiment would be reduced if, for each trial, instead of recording all responses, only the fastest responses of several participants responding independently on the trial were recorded. We then present results from a quantitative corpus analysis comparing turn transition times in dyadic and triadic conversations. There was no significant group size effect in question-response transition times, where the present speaker often selects the next one, thus reducing competition between speakers. But, as predicted, triads showed shorter turn transition times than dyads for the remaining turn transitions, where competition for the floor was more likely to arise. Together, these data show that turn transition times in conversation should be interpreted in the context of group size, turn transition type, and social setting.
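    The statistical point behind the simulations described above can be illustrated in a few lines of code. This is a toy sketch with invented latency parameters, not the authors' simulation: recording only the fastest of several independent responders lowers the recorded mean for a purely statistical reason.

    # Hedged sketch: mean of all responses vs. mean of the fastest of three.
    import random

    random.seed(1)
    n_trials, k_speakers = 10_000, 3

    def latency():
        # Hypothetical individual response latency in ms (roughly normal).
        return random.gauss(600, 150)

    all_responses = [latency() for _ in range(n_trials)]
    fastest_of_k = [min(latency() for _ in range(k_speakers))
                    for _ in range(n_trials)]

    print(sum(all_responses) / n_trials)   # ~600 ms: every response recorded
    print(sum(fastest_of_k) / n_trials)    # markedly lower: only the fastest counts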
  • Hömke, P., Holler, J., & Levinson, S. C. (2018). Eye blinks are perceived as communicative signals in human face-to-face interaction. PLoS One, 13(12): e0208030. doi:10.1371/journal.pone.0208030.

    Abstract

    In face-to-face communication, recurring intervals of mutual gaze allow listeners to provide speakers with visual feedback (e.g. nodding). Here, we investigate the potential feedback function of one of the subtlest of human movements—eye blinking. While blinking tends to be subliminal, the significance of mutual gaze in human interaction raises the question whether the interruption of mutual gaze through blinking may also be communicative. To answer this question, we developed a novel, virtual reality-based experimental paradigm, which enabled us to selectively manipulate blinking in a virtual listener, creating small differences in blink duration resulting in ‘short’ (208 ms) and ‘long’ (607 ms) blinks. We found that speakers unconsciously took into account the subtle differences in listeners’ blink duration, producing substantially shorter answers in response to long listener blinks. Our findings suggest that, in addition to physiological, perceptual and cognitive functions, listener blinks are also perceived as communicative signals, directly influencing speakers’ communicative behavior in face-to-face communication. More generally, these findings may be interpreted as shedding new light on the evolutionary origins of mental-state signaling, which is a crucial ingredient for achieving mutual understanding in everyday social interaction.

  • Hoogman, M., Van Rooij, D., Klein, M., Boedhoe, P., Ilioska, I., Li, T., Patel, Y., Postema, M., Zhang-James, Y., Anagnostou, E., Arango, C., Auzias, G., Banaschewski, T., Bau, C. H. D., Behrmann, M., Bellgrove, M. A., Brandeis, D., Brem, S., Busatto, G. F., Calderoni, S., Calvo, R., Castellanos, F. X., Coghill, D., Conzelmann, A., Daly, E., Deruelle, C., Dinstein, I., Durston, S., Ecker, C., Ehrlich, S., Epstein, J. N., Fair, D. A., Fitzgerald, J., Freitag, C. M., Frodl, T., Gallagher, L., Grevet, E. H., Haavik, J., Hoekstra, P. J., Janssen, J., Karkashadze, G., King, J. A., Konrad, K., Kuntsi, J., Lazaro, L., Lerch, J. P., Lesch, K.-P., Louza, M. R., Luna, B., Mattos, P., McGrath, J., Muratori, F., Murphy, C., Nigg, J. T., Oberwelland-Weiss, E., O'Gorman Tuura, R. L., O'Hearn, K., Oosterlaan, J., Parellada, M., Pauli, P., Plessen, K. J., Ramos-Quiroga, J. A., Reif, A., Reneman, L., Retico, A., Rosa, P. G. P., Rubia, K., Shaw, P., Silk, T. J., Tamm, L., Vilarroya, O., Walitza, S., Jahanshad, N., Faraone, S. V., Francks, C., Van den Heuvel, O. A., Paus, T., Thompson, P. M., Buitelaar, J. K., & Franke, B. (2022). Consortium neuroscience of attention deficit/hyperactivity disorder and autism spectrum disorder: The ENIGMA adventure. Human Brain Mapping, 43(1), 37-55. doi:10.1002/hbm.25029.

    Abstract

    Neuroimaging has been extensively used to study brain structure and function in individuals with attention deficit/hyperactivity disorder (ADHD) and autism spectrum disorder (ASD) over the past decades. Two of the main shortcomings of the neuroimaging literature of these disorders are the small sample sizes employed and the heterogeneity of methods used. In 2013 and 2014, respectively, the ENIGMA-ADHD and ENIGMA-ASD working groups were founded with the common goal of addressing these limitations. Here, we provide a narrative review of the thus far completed and still ongoing projects of these working groups. Due to an implicitly hierarchical psychiatric diagnostic classification system, the fields of ADHD and ASD have developed largely in isolation, despite the considerable overlap in the occurrence of the disorders. The collaboration between the ENIGMA-ADHD and -ASD working groups seeks to bring the neuroimaging efforts of the two disorders closer together. The outcomes of case–control studies of subcortical and cortical structures showed that subcortical volumes are similarly affected in ASD and ADHD, albeit with small effect sizes. Cortical analyses identified unique differences in each disorder, but also considerable overlap between the two, specifically in cortical thickness. Ongoing work is examining alternative research questions, such as brain laterality, prediction of case–control status, and anatomical heterogeneity. In brief, great strides have been made toward fulfilling the aims of the ENIGMA collaborations, while new ideas and follow-up analyses continue that include more imaging modalities (diffusion MRI and resting-state functional MRI), collaborations with other large databases, and samples with dual diagnoses.
  • Horan Skilton, A., & Peeters, D. (2021). Cross-linguistic differences in demonstrative systems: Comparing spatial and non-spatial influences on demonstrative use in Ticuna and Dutch. Journal of Pragmatics, 180, 248-265. doi:10.1016/j.pragma.2021.05.001.

    Abstract

    In all spoken languages, speakers use demonstratives – words like this and that – to refer to entities in their immediate environment. But which factors determine whether they use one demonstrative (this) or another (that)? Here we report the results of an experiment examining the effects of referent visibility, referent distance, and addressee location on the production of demonstratives by speakers of Ticuna (isolate; Brazil, Colombia, Peru), an Amazonian language with four demonstratives, and speakers of Dutch (Indo-European; Netherlands, Belgium), which has two demonstratives. We found that Ticuna speakers’ use of demonstratives displayed effects of addressee location and referent distance, but not referent visibility. By contrast, under comparable conditions, Dutch speakers displayed sensitivity only to referent distance. Interestingly, we also observed that Ticuna speakers consistently used demonstratives in all referential utterances in our experimental paradigm, while Dutch speakers strongly preferred to use definite articles. Taken together, these findings shed light on the significant diversity found in demonstrative systems across languages. Additionally, they invite researchers studying exophoric demonstratives to broaden their horizons by cross-linguistically investigating the factors involved in speakers’ choice of demonstratives over other types of referring expressions, especially articles.
  • Hörpel, S. G., Baier, L., Peremans, H., Reijniers, J., Wiegrebe, L., & Firzlaff, U. (2021). Communication breakdown: Limits of spectro-temporal resolution for the perception of bat communication calls. Scientific Reports, 11: 13708. doi:10.1038/s41598-021-92842-4.

    Abstract

    During vocal communication, the spectro-temporal structure of vocalizations conveys important contextual information. Bats excel in the use of sounds for echolocation by meticulous encoding of signals in the temporal domain. We therefore hypothesized that for social communication as well, bats would excel at detecting minute distortions in the spectro-temporal structure of calls. To test this hypothesis, we systematically introduced spectro-temporal distortion to communication calls of Phyllostomus discolor bats. We broke down each call into windows of the same length and randomized the phase spectrum inside each window. The overall degree of spectro-temporal distortion in communication calls increased with window length. Modelling the bat auditory periphery revealed that cochlear mechanisms allow discrimination of fast spectro-temporal envelopes. We evaluated model predictions with experimental psychophysical and neurophysiological data. We first assessed bats’ performance in discriminating original versions of calls from increasingly distorted versions of the same calls. We further examined cortical responses to determine additional specializations for call discrimination at the cortical level. Psychophysical and cortical responses concurred with model predictions, revealing discrimination thresholds in the range of 8–15 ms randomization-window length. Our data suggest that specialized cortical areas are not necessary to impart psychophysical resilience to temporal distortion in communication calls.

    Additional information

    supplementary information
  • Howe, L. J., Lee, M. K., Sharp, G. C., Smith, G. D. W., St Pourcain, B., Shaffer, J. R., Ludwig, K. U., Mangold, E., Marazita, M. L., Feingold, E., Zhurov, A., Stergiakouli, E., Sandy, J., Richmond, S., Weinberg, S. M., Hemani, G., & Lewis, S. J. (2018). Investigating the shared genetics of non-syndromic cleft lip/palate and facial morphology. PLoS Genetics, 14(8): e1007501. doi:10.1371/journal.pgen.1007501.

    Abstract

    There is increasing evidence that genetic risk variants for non-syndromic cleft lip/palate (nsCL/P) are also associated with normal-range variation in facial morphology. However, previous analyses are mostly limited to candidate SNPs and findings have not been consistently replicated. Here, we used polygenic risk scores (PRS) to test for genetic overlap between nsCL/P and seven biologically relevant facial phenotypes. Where evidence was found of genetic overlap, we used bidirectional Mendelian randomization (MR) to test the hypothesis that genetic liability to nsCL/P is causally related to implicated facial phenotypes. Across 5,804 individuals of European ancestry from two studies, we found strong evidence, using PRS, of genetic overlap between nsCL/P and philtrum width; a 1 S.D. increase in nsCL/P PRS was associated with a 0.10 mm decrease in philtrum width (95% C.I. 0.054, 0.146; P = 2 × 10⁻⁵). Follow-up MR analyses supported a causal relationship; genetic variants for nsCL/P homogeneously cause decreased philtrum width. In addition to the primary analysis, we also identified two novel risk loci for philtrum width at 5q22.2 and 7p15.2 in our Genome-wide Association Study (GWAS) of 6,136 individuals. Our results support a liability threshold model of inheritance for nsCL/P, related to abnormalities in development of the philtrum.
  • Huettig, F., Kolinsky, R., & Lachmann, T. (2018). The culturally co-opted brain: How literacy affects the human mind. Language, Cognition and Neuroscience, 33(3), 275-277. doi:10.1080/23273798.2018.1425803.

    Abstract

    Introduction to the special issue 'The Effects of Literacy on Cognition and Brain Functioning'
  • Huettig, F., Kolinsky, R., & Lachmann, T. (Eds.). (2018). The effects of literacy on cognition and brain functioning [Special Issue]. Language, Cognition and Neuroscience, 33(3).
  • Huettig, F., Audring, J., & Jackendoff, R. (2022). A parallel architecture perspective on pre-activation and prediction in language processing. Cognition, 224: 105050. doi:10.1016/j.cognition.2022.105050.

    Abstract

    A recent trend in psycholinguistic research has been to posit prediction as an essential function of language processing. The present paper develops a linguistic perspective on viewing prediction in terms of pre-activation. We describe what predictions are and how they are produced. Our basic premises are that (a) no prediction can be made without knowledge to support it; and (b) it is therefore necessary to characterize the precise form of that knowledge, as revealed by a suitable theory of linguistic representations. We describe the Parallel Architecture (PA: Jackendoff, 2002; Jackendoff and Audring, 2020), which makes explicit our commitments about linguistic representations, and we develop an account of processing based on these representations. Crucial to our account is that what have been traditionally treated as derivational rules of grammar are formalized by the PA as lexical items, encoded in the same format as words. We then present a theory of prediction in these terms: linguistic input activates lexical items whose beginning (or incipit) corresponds to the input encountered so far; and prediction amounts to pre-activation of the as yet unheard parts of those lexical items (the remainder). Thus the generation of predictions is a natural byproduct of processing linguistic representations. We conclude that the PA perspective on pre-activation provides a plausible account of prediction in language processing that bridges linguistic and psycholinguistic theorizing.
  • Huettig, F., Lachmann, T., Reis, A., & Petersson, K. M. (2018). Distinguishing cause from effect - Many deficits associated with developmental dyslexia may be a consequence of reduced and suboptimal reading experience. Language, Cognition and Neuroscience, 33(3), 333-350. doi:10.1080/23273798.2017.1348528.

    Abstract

    The cause of developmental dyslexia is still unknown despite decades of intense research. Many causal explanations have been proposed, based on the range of impairments displayed by affected individuals. Here we draw attention to the fact that many of these impairments are also shown by illiterate individuals who have not received any or very little reading instruction. We suggest that this fact may not be coincidental and that the performance differences of both illiterates and individuals with dyslexia compared to literate controls are, to a substantial extent, secondary consequences of either reduced or suboptimal reading experience or a combination of both. The search for the primary causes of reading impairments will make progress if the consequences of quantitative and qualitative differences in reading experience are better taken into account and not mistaken for the causes of reading disorders. We close by providing four recommendations for future research.
  • Huisman, J. L. A., van Hout, R., & Majid, A. (2021). Patterns of semantic variation differ across body parts: evidence from the Japonic languages. Cognitive Linguistics, 32, 455-486. doi:10.1515/cog-2020-0079.

    Abstract

    The human body is central to myriad metaphors, so studying the conceptualisation of the body itself is critical if we are to understand its broader use. One essential but understudied issue is whether languages differ in which body parts they single out for naming. This paper takes a multi-method approach to investigate body part nomenclature within a single language family. Using both a naming task (Study 1) and colouring-in task (Study 2) to collect data from six Japonic languages, we found that lexical similarity for body part terminology was notably differentiated within Japonic, and similar variation was evident in semantics too. Novel application of cluster analysis on naming data revealed a relatively flat hierarchical structure for parts of the face, whereas parts of the body were organised with deeper hierarchical structure. The colouring data revealed that bounded parts show more stability across languages than unbounded parts. Overall, the data reveal there is not a single universal conceptualisation of the body as is often assumed, and that in-depth, multi-method explorations of under-studied languages are urgently required.
  • Huisman, J. L. A., & Majid, A. (2018). Psycholinguistic variables matter in odor naming. Memory & Cognition, 46, 577-588. doi:10.3758/s13421-017-0785-1.

    Abstract

    People from Western societies generally find it difficult to name odors. In trying to explain this, the olfactory literature has proposed several theories that focus heavily on properties of the odor itself but rarely discuss properties of the label used to describe it. However, recent studies show speakers of languages with dedicated smell lexicons can name odors with relative ease. Has the role of the lexicon been overlooked in the olfactory literature? Word production studies show properties of the label, such as word frequency and semantic context, influence naming; but this field of research focuses heavily on the visual domain. The current study combines methods from both fields to investigate word production for olfaction in two experiments. In the first experiment, participants named odors whose veridical labels were either high-frequency or low-frequency words in Dutch, and we found that odors with high-frequency labels were named correctly more often. In the second experiment, edibility was used for manipulating semantic context in search of a semantic interference effect, presenting the odors in blocks of edible and inedible odor source objects to half of the participants. While no evidence was found for a semantic interference effect, an effect of word frequency was again present. Our results demonstrate psycholinguistic variables—such as word frequency—are relevant for olfactory naming, and may, in part, explain why it is difficult to name odors in certain languages. Olfactory researchers cannot afford to ignore properties of an odor’s label.
  • Huizeling, E., Wang, H., Holland, C., & Kessler, K. (2021). Changes in theta and alpha oscillatory signatures of attentional control in older and middle age. European Journal of Neuroscience, 54(1), 4314-4337. doi:10.1111/ejn.15259.

    Abstract

    Recent behavioural research has reported age-related changes in the costs of refocusing attention from a temporal (rapid serial visual presentation) to a spatial (visual search) task. Using magnetoencephalography, we have now compared the neural signatures of attention refocusing between three age groups (19–30, 40–49 and 60+ years) and found differences in task-related modulation and cortical localisation of alpha and theta oscillations. Efficient, faster refocusing in the youngest group compared to both middle age and older groups was reflected in parietal theta effects that were significantly reduced in the older groups. Residual parietal theta activity in older individuals was beneficial to attentional refocusing and could reflect preserved attention mechanisms. Slowed refocusing of attention, especially when a target required consolidation, in the older and middle-aged adults was accompanied by a posterior theta deficit and increased recruitment of frontal (middle-aged and older groups) and temporal (older group only) areas, demonstrating a posterior to anterior processing shift. Theta but not alpha modulation correlated with task performance, suggesting that older adults' stronger and more widely distributed alpha power modulation could reflect decreased neural precision or dedifferentiation but requires further investigation. Our results demonstrate that older adults present with different alpha and theta oscillatory signatures during attentional control, reflecting cognitive decline and, potentially, also different cognitive strategies in an attempt to compensate for decline.

    Additional information

    supplementary material
  • Huizeling, E., Arana, S., Hagoort, P., & Schoffelen, J.-M. (2022). Lexical frequency and sentence context influence the brain’s response to single words. Neurobiology of Language, 3(1), 149-179. doi:10.1162/nol_a_00054.

    Abstract

    Typical adults read remarkably quickly. Such fast reading is facilitated by brain processes that are sensitive to both word frequency and contextual constraints. It is debated whether these attributes have additive or interactive effects on language processing in the brain. We investigated this issue by analysing existing magnetoencephalography data from 99 participants reading intact and scrambled sentences. Using a cross-validated model comparison scheme, we found that lexical frequency predicted the word-by-word elicited MEG signal in a widespread cortical network, irrespective of sentential context. In contrast, index (ordinal word position) was more strongly encoded in sentence words, in left front-temporal areas. This confirms that frequency influences word processing independently of predictability, and that contextual constraints affect word-by-word brain responses. With a conservative multiple comparisons correction, only the interaction between lexical frequency and surprisal survived, in anterior temporal and frontal cortex, and not between lexical frequency and entropy, nor between lexical frequency and index. However, interestingly, the uncorrected index*frequency interaction revealed an effect in left frontal and temporal cortex that reversed in time and space for intact compared to scrambled sentences. Finally, we provide evidence to suggest that, in sentences, lexical frequency and predictability may independently influence early (<150 ms) and late stages of word processing, but interact during later stages of word processing (>150–250 ms), thus helping to reconcile previously contradictory eye-tracking and electrophysiological findings. Current neuro-cognitive models of reading would benefit from accounting for these differing effects of lexical frequency and predictability on different stages of word processing.
  • Huizeling, E., Peeters, D., & Hagoort, P. (2022). Prediction of upcoming speech under fluent and disfluent conditions: Eye tracking evidence from immersive virtual reality. Language, Cognition and Neuroscience, 37(4), 481-508. doi:10.1080/23273798.2021.1994621.

    Abstract

    Traditional experiments indicate that prediction is important for efficient speech processing. In three virtual reality visual world paradigm experiments, we tested whether such findings hold in naturalistic settings (Experiment 1) and provided novel insights into whether disfluencies in speech (repairs/hesitations) inform one’s predictions in rich environments (Experiments 2–3). Experiment 1 supports that listeners predict upcoming speech in naturalistic environments, with higher proportions of anticipatory target fixations in predictable compared to unpredictable trials. In Experiments 2–3, disfluencies reduced anticipatory fixations towards predicted referents, compared to conjunction (Experiment 2) and fluent (Experiment 3) sentences. Unexpectedly, Experiment 2 provided no evidence that participants made new predictions from a repaired verb. Experiment 3 provided novel findings that fixations towards the speaker increase upon hearing a hesitation, supporting current theories of how hesitations influence sentence processing. Together, these findings unpack listeners’ use of visual (objects/speaker) and auditory (speech/disfluencies) information when predicting upcoming words.
  • Humphries, S., Holler*, J., Crawford, T., & Poliakoff*, E. (2021). Cospeech gestures are a window into the effects of Parkinson’s disease on action representations. Journal of Experimental Psychology: General, 150(8), 1581-1597. doi:10.1037/xge0001002.

    Abstract

    (* indicates joint senior authors)

    Parkinson’s disease impairs motor function and cognition, which together affect language and communication. Co-speech gestures are a form of language-related actions that provide imagistic depictions of the speech content they accompany. Gestures rely on visual and motor imagery, but it is unknown whether gesture representations require the involvement of intact neural sensory and motor systems. We tested this hypothesis with a fine-grained analysis of co-speech action gestures in Parkinson’s disease. Thirty-seven people with Parkinson’s disease and 33 controls described two scenes featuring actions which varied in their inherent degree of bodily motion. In addition to the perspective of action gestures (gestural viewpoint/first- vs. third-person perspective), we analysed how Parkinson’s patients represent manner (how something/someone moves) and path information (where something/someone moves to) in gesture, depending on the degree of bodily motion involved in the action depicted. We replicated an earlier finding that people with Parkinson’s disease are less likely to gesture about actions from a first-person perspective – preferring instead to depict actions gesturally from a third-person perspective – and show that this effect is modulated by the degree of bodily motion in the actions being depicted. When describing high motion actions, the Parkinson’s group were specifically impaired in depicting manner information in gesture and their use of third-person path-only gestures was significantly increased. Gestures about low motion actions were relatively spared. These results inform our understanding of the neural and cognitive basis of gesture production by providing neuropsychological evidence that action gesture production relies on intact motor network function.

    Additional information

    Open data and code
  • Hustá, C., Zheng, X., Papoutsi, C., & Piai, V. (2021). Electrophysiological signatures of conceptual and lexical retrieval from semantic memory. Neuropsychologia, 161: 107988. doi:10.1016/j.neuropsychologia.2021.107988.

    Abstract

    Retrieval from semantic memory of conceptual and lexical information is essential for producing speech. It is unclear whether there are differences in the neural mechanisms of conceptual and lexical retrieval when spreading activation through semantic memory is initiated by verbal or nonverbal settings. The same twenty participants took part in two EEG experiments. The first experiment examined conceptual and lexical retrieval following nonverbal settings, whereas the second experiment was a replication of previous studies examining conceptual and lexical retrieval following verbal settings. Target pictures were presented after constraining and nonconstraining contexts. In the nonverbal settings, contexts were provided as two priming pictures (e.g., constraining: nest, feather; nonconstraining: anchor, lipstick; target picture: BIRD). In the verbal settings, contexts were provided as sentences (e.g., constraining: “The farmer milked a...”; nonconstraining: “The child drew a...”; target picture: COW). Target pictures were named faster following constraining contexts in both experiments, indicating that conceptual preparation starts before target picture onset in constraining conditions. In the verbal experiment, we replicated the alpha-beta power decreases in constraining relative to nonconstraining conditions before target picture onset. No such power decreases were found in the nonverbal experiment. Power decreases in constraining relative to nonconstraining conditions were significantly different between experiments. Our findings suggest that participants engage in conceptual preparation following verbal and nonverbal settings, albeit differently. The retrieval of a target word, initiated by verbal settings, is associated with alpha-beta power decreases. By contrast, broad conceptual preparation alone, prompted by nonverbal settings, does not seem enough to elicit alpha-beta power decreases. These findings have implications for theories of oscillations and semantic memory.

    Additional information

    1-s2.0-S0028393221002414-mmc1.pdf
  • Ille, S., Ohlerth, A.-K., Colle, D., Colle, H., Dragoy, O., Goodden, J., Robe, P., Rofes, A., Mandonnet, E., Robert, E., Satoer, D., Viegas, C., Visch-Brink, E., van Zandvoort, M., & Krieg, S. (2021). Augmented reality for the virtual dissection of white matter pathways. Acta Neurochirurgica, (4), 895-903. doi:10.1007/s00701-019-04159-x.

    Abstract

    Background: The human white matter pathway network is complex and of critical importance for functionality. Thus, learning and understanding white matter tract anatomy is important for the training of neuroscientists and neurosurgeons. The study aims to test and evaluate a new method for fiber dissection using augmented reality (AR) in a group which is experienced in cadaver white matter dissection courses and in vivo tractography.
    Methods: Fifteen neurosurgeons, neurolinguists, and neuroscientists participated in this questionnaire-based study. We presented five cases of patients with left-sided perisylvian gliomas who underwent awake craniotomy. Diffusion tensor imaging fiber tracking (DTI FT) was performed and the language-related networks were visualized, separated into different tracts by color. Participants were able to virtually dissect the prepared DTI FTs using a spatial computer and AR goggles. The application was evaluated through a questionnaire with answers from 0 (minimum) to 10 (maximum).
    Results: Participants rated the overall experience of AR fiber dissection with a median of 8 points (mean ± standard deviation 8.5 ± 1.4). Usefulness for fiber dissection courses and education in general was rated with 8 (8.3 ± 1.4) and 8 (8.1 ± 1.5) points, respectively. Educational value was expected to be high for several target audiences (student: median 9, 8.6 ± 1.4; resident: 9, 8.5 ± 1.8; surgeon: 9, 8.2 ± 2.4; scientist: 8.5, 8.0 ± 2.4). Even clinical application of AR fiber dissection was expected to be of value, with a median of 7 points (7.0 ± 2.5).
  • Inacio, F., Faisca, L., Forkstam, C., Araujo, S., Bramao, I., Reis, A., & Petersson, K. M. (2018). Implicit sequence learning is preserved in dyslexic children. Annals of Dyslexia, 68(1), 1-14. doi:10.1007/s11881-018-0158-x.

    Abstract

    This study investigates the implicit sequence learning abilities of dyslexic children using an artificial grammar learning task with an extended exposure period. Twenty children with developmental dyslexia participated in the study and were matched with two control groups—one matched for age and the other for reading skills. During 3 days, all participants performed an acquisition task, where they were exposed to sequences of colored geometrical forms with an underlying grammatical structure. On the last day, after the acquisition task, participants were tested in a grammaticality classification task. Implicit sequence learning was present in dyslexic children, as well as in both control groups, and no differences between groups were observed. These results suggest that implicit learning deficits per se cannot explain the characteristic reading difficulties of the dyslexics.
  • Indefrey, P., & Levelt, W. J. M. (1999). A meta-analysis of neuroimaging experiments on word production. Neuroimage, 7, 1028.
  • Indefrey, P. (1999). Some problems with the lexical status of nondefault inflection. Behavioral and Brain Sciences, 22(6), 1025. doi:10.1017/S0140525X99342229.

    Abstract

    Clahsen's characterization of nondefault inflection as based exclusively on lexical entries does not capture the full range of empirical data on German inflection. In the verb system differential effects of lexical frequency seem to be input-related rather than affecting morphological production. In the noun system, the generalization properties of -n and -e plurals exceed mere analogy-based productivity.
  • Isbilen, E. S., Frost, R. L. A., Monaghan, P., & Christiansen, M. H. (2022). Statistically based chunking of nonadjacent dependencies. Journal of Experimental Psychology: General, 151(11), 2623-2640. doi:10.1037/xge0001207.

    Abstract

    How individuals learn complex regularities in the environment and generalize them to new instances is a key question in cognitive science. Although previous investigations have advocated the idea that learning and generalizing depend upon separate processes, the same basic learning mechanisms may account for both. In language learning experiments, these mechanisms have typically been studied in isolation of broader cognitive phenomena such as memory, perception, and attention. Here, we show how learning and generalization in language is embedded in these broader theories by testing learners on their ability to chunk nonadjacent dependencies—a key structure in language but a challenge to theories that posit learning through the memorization of structure. In two studies, adult participants were trained and tested on an artificial language containing nonadjacent syllable dependencies, using a novel chunking-based serial recall task involving verbal repetition of target sequences (formed from learned strings) and scrambled foils. Participants recalled significantly more syllables, bigrams, trigrams, and nonadjacent dependencies from sequences conforming to the language’s statistics (both learned and generalized sequences). They also encoded and generalized specific nonadjacent chunk information. These results suggest that participants chunk remote dependencies and rapidly generalize this information to novel structures. The results thus provide further support for learning-based approaches to language acquisition, and link statistical learning to broader cognitive mechanisms of memory.
  • Jackson, C. N., Mormer, E., & Brehm, L. (2018). The production of subject-verb agreement among Swedish and Chinese second language speakers of English. Studies in Second Language Acquisition, 40(4), 907-921. doi: 10.1017/S0272263118000025.

    Abstract

    This study uses a sentence completion task with Swedish and Chinese L2 English speakers to investigate how L1 morphosyntax and L2 proficiency influence L2 English subject-verb agreement production. Chinese has limited nominal and verbal number morphology, while Swedish has robust noun phrase (NP) morphology but does not number-mark verbs. Results showed that like L1 English speakers, both L2 groups used grammatical and conceptual number to produce subject-verb agreement. However, only L1 Chinese speakers—and less-proficient speakers in both L2 groups—were similarly influenced by grammatical and conceptual number when producing the subject NP. These findings demonstrate how L2 proficiency, perhaps combined with cross-linguistic differences, influence L2 production and underscore that encoding of noun and verb number are not independent.
  • Jacobs, A. M., & Willems, R. M. (2018). The fictive brain: Neurocognitive correlates of engagement in literature. Review of General Psychology, 22(2), 147-160. doi:10.1037/gpr0000106.

    Abstract

    Fiction is vital to our being. Many people enjoy engaging with fiction every day. Here we focus on literary reading as one instance of fiction consumption from a cognitive neuroscience perspective. The brain processes which play a role in the mental construction of fiction worlds, and in the related engagement with fictional characters, remain largely unknown. The authors discuss the neurocognitive poetics model (Jacobs, 2015a) of literary reading specifying the likely neuronal correlates of several key processes in literary reading, namely inference and situation model building, immersion, mental simulation and imagery, figurative language and style, and the issue of distinguishing fact from fiction. An overview of recent work on these key processes is followed by a discussion of methodological challenges in studying the brain bases of fiction processing.
  • Jadoul, Y., Thompson, B., & De Boer, B. (2018). Introducing Parselmouth: A Python interface to Praat. Journal of Phonetics, 71, 1-15. doi:10.1016/j.wocn.2018.07.001.

    Abstract

    This paper introduces Parselmouth, an open-source Python library that facilitates access to core functionality of Praat in Python, in an efficient and programmer-friendly way. We introduce and motivate the package, and present simple usage examples. Specifically, we focus on applications in data visualisation, file manipulation, audio manipulation, statistical analysis, and integration of Parselmouth into a Python-based experimental design for automated, in-the-loop manipulation of acoustic data. Parselmouth is available at https://github.com/YannickJadoul/Parselmouth.
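    For orientation, a minimal usage sketch of the kind of access Parselmouth provides (the file name below is hypothetical; to_pitch(), to_intensity(), and selected_array are part of Parselmouth's documented interface, though defaults may differ across versions):

        import parselmouth

        snd = parselmouth.Sound("recording.wav")        # load an audio file (hypothetical path)
        pitch = snd.to_pitch()                          # Praat pitch analysis
        intensity = snd.to_intensity()                  # Praat intensity analysis
        print(pitch.selected_array['frequency'][:10])   # first few F0 estimates in Hz (0 = unvoiced)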
  • Yu, X., Janse, E., & Schoonen, R. (2021). The effect of learning context on L2 listening development: Knowledge and processing. Studies in Second Language Acquisition, 43(2), 329-354. doi:10.1017/S0272263120000534.

    Abstract

    Little research has been done on the effect of learning context on L2 listening development. Motivated by DeKeyser’s (2015) skill acquisition theory of second language acquisition, this study compares L2 listening development in study abroad (SA) and at home (AH) contexts from both language knowledge and processing perspectives. One hundred forty-nine Chinese postgraduates studying in either China or the United Kingdom participated in a battery of listening tasks at the beginning and at the end of an academic year. These tasks measure auditory vocabulary knowledge and listening processing efficiency (i.e., accuracy, speed, and stability of processing) in word recognition, grammatical processing, and semantic analysis. Results show that, provided equal starting levels, the SA learners made more progress than the AH learners in speed of processing across the language processing tasks, with less clear results for vocabulary acquisition. Studying abroad may be an effective intervention for L2 learning, especially in terms of processing speed.
  • Janse, E., & Andringa, S. J. (2021). The roles of cognitive abilities and hearing acuity in older adults’ recognition of words taken from fast and spectrally reduced speech. Applied Psycholinguistics, 42(3), 763-790. doi:10.1017/S0142716421000047.

    Abstract

    Previous literature has identified several cognitive abilities as predictors of individual differences in speech perception. Working memory was chief among them, but effects have also been found for processing speed. Most research has been conducted on speech in noise, but fast and unclear articulation also makes listening challenging, particularly for older listeners. As a first step toward specifying the cognitive mechanisms underlying spoken word recognition, we set up this study to determine which factors explain unique variation in word identification accuracy in fast speech, and the extent to which this was affected by further degradation of the speech signal. To that end, 105 older adults were tested on identification accuracy of fast words in unaltered and degraded conditions in which the speech stimuli were low-pass filtered. They were also tested on processing speed, memory, vocabulary knowledge, and hearing sensitivity. A structural equation analysis showed that only memory and hearing sensitivity explained unique variance in word recognition in both listening conditions. Working memory was more strongly associated with performance in the unfiltered than in the filtered condition. These results suggest that memory skills, rather than speed, facilitate the mapping of single words onto stored lexical representations, particularly in conditions of medium difficulty.
  • Janse, E., Nooteboom, S. G., & Quené, H. (2003). Word-level intelligibility of time-compressed speech: Prosodic and segmental factors. Speech Communication, 41, 287-301. doi:10.1016/S0167-6393(02)00130-9.

    Abstract

    In this study we investigate whether speakers, in line with the predictions of the Hyper- and Hypospeech theory, speed up most during the least informative parts and less during the more informative parts, when they are asked to speak faster. We expected listeners to benefit from these changes in timing, and our main goal was to find out whether making the temporal organisation of artificially time-compressed speech more like that of natural fast speech would improve intelligibility over linear time compression. Our production study showed that speakers reduce unstressed syllables more than stressed syllables, thereby making the prosodic pattern more pronounced. We extrapolated fast speech timing to even faster rates because we expected that the more salient prosodic pattern could be exploited in difficult listening situations. However, at very fast speech rates, applying fast speech timing worsens intelligibility. We argue that the non-uniform way of speeding up may not be due to an underlying communicative principle, but may result from speakers’ inability to speed up otherwise. As both prosodic and segmental information contribute to word recognition, we conclude that extrapolating fast speech timing to extremely fast rates distorts this balance between prosodic and segmental information.
  • Jansen, N. A., Braden, R. O., Srivastava, S., Otness, E. F., Lesca, G., Rossi, M., Nizon, M., Bernier, R. A., Quelin, C., Van Haeringen, A., Kleefstra, T., Wong, M. M. K., Whalen, S., Fisher, S. E., Morgan, A. T., & Van Bon, B. W. (2021). Clinical delineation of SETBP1 haploinsufficiency disorder. European Journal of Human Genetics, 29, 1198-1205. doi:10.1038/s41431-021-00888-9.

    Abstract

    SETBP1 haploinsufficiency disorder (MIM#616078) is caused by haploinsufficiency of SETBP1 on chromosome 18q12.3, but there has not yet been any systematic evaluation of the major features of this monogenic syndrome, assessing penetrance and expressivity. We describe the first comprehensive study to delineate the associated clinical phenotype, with findings from 34 individuals, including 24 novel cases, all of whom have a SETBP1 loss-of-function variant or single (coding) gene deletion, confirmed by molecular diagnostics. The most commonly reported clinical features included mild motor developmental delay, speech impairment, intellectual disability, hypotonia, vision impairment, attention/concentration deficits, and hyperactivity. Although there is a mild overlap in certain facial features, the disorder does not lead to a distinctive recognizable facial gestalt. As well as providing insight into the clinical spectrum of SETBP1 haploinsufficiency disorder, this report puts forward care recommendations for patient management.

    Additional information

    supplementary table
  • Janssen, J., Díaz-Caneja, C. M., Alloza, C., Schippers, A., De Hoyos, L., Santonja, J., Gordaliza, P. M., Buimer, E. E. L., van Haren, N. E. M., Cahn, W., Arango, C., Kahn, R. S., Hulshoff Pol, H. E., & Schnack, H. G. (2021). Dissimilarity in sulcal width patterns in the cortex can be used to identify patients with schizophrenia with extreme deficits in cognitive performance. Schizophrenia Bulletin, 47(2), 552-561. doi:10.1093/schbul/sbaa131.

    Abstract

    Schizophrenia is a biologically complex disorder with multiple regional deficits in cortical brain morphology. In addition, interindividual heterogeneity of cortical morphological metrics is larger in patients with schizophrenia when compared to healthy controls. Exploiting interindividual differences in the severity of cortical morphological deficits in patients instead of focusing on group averages may aid in detecting biologically informed homogeneous subgroups. The person-based similarity index (PBSI) of brain morphology indexes an individual’s morphometric similarity across numerous cortical regions amongst a sample of healthy subjects. We extended the PBSI such that it indexes the morphometric similarity of an independent individual (eg, a patient) with respect to healthy control subjects. By employing a normative modeling approach on longitudinal data, we determined an individual’s degree of morphometric dissimilarity to the norm. We calculated the PBSI for sulcal width (PBSI-SW) in patients with schizophrenia and healthy control subjects (164 patients and 164 healthy controls; 656 magnetic resonance imaging scans) and associated it with cognitive performance and cortical sulcation index. A subgroup of patients with markedly deviant PBSI-SW showed extreme deficits in cognitive performance and cortical sulcation. Progressive reduction of PBSI-SW in the schizophrenia group relative to healthy controls was driven by these deviating individuals. By explicitly leveraging interindividual differences in the severity of PBSI-SW deficits, neuroimaging-driven subgrouping of patients is feasible. As such, our results pave the way for future applications of morphometric similarity indices for subtyping of clinical populations.

  • Janssen, R., Moisik, S. R., & Dediu, D. (2018). Modelling human hard palate shape with Bézier curves. PLoS One, 13(2): e0191557. doi:10.1371/journal.pone.0191557.

    Abstract

    People vary at most levels, from the molecular to the cognitive, and the shape of the hard palate (the bony roof of the mouth) is no exception. The patterns of variation in the hard palate are important for the forensic sciences and (palaeo)anthropology, and might also play a role in speech production, both in pathological cases and normal variation. Here we describe a method based on Bézier curves, whose main aim is to generate possible shapes of the hard palate in humans for use in computer simulations of speech production and language evolution. Moreover, our method can also capture existing patterns of variation using few and easy-to-interpret parameters, and fits actual data obtained from MRI traces very well with as little as two or three free parameters. When compared to the widely-used Principal Component Analysis (PCA), our method fits actual data slightly worse for the same number of degrees of freedom. However, it is much better at generating new shapes without requiring a calibration sample, its parameters have clearer interpretations, and their ranges are grounded in geometrical considerations.
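    As a minimal illustration of the underlying idea (not the authors' implementation), a cubic Bézier curve in the plane is fully determined by four control points, which is the sense in which an arched outline such as the palate's mid-sagittal profile can be described with only a handful of free parameters; the control-point values below are arbitrary:

        def cubic_bezier(p0, p1, p2, p3, t):
            # Bernstein-form evaluation of a cubic Bézier curve at parameter t in [0, 1]
            u = 1 - t
            x = u**3 * p0[0] + 3 * u**2 * t * p1[0] + 3 * u * t**2 * p2[0] + t**3 * p3[0]
            y = u**3 * p0[1] + 3 * u**2 * t * p1[1] + 3 * u * t**2 * p2[1] + t**3 * p3[1]
            return x, y

        # Sample an arched, roughly palate-like outline at 11 points (illustrative control points)
        outline = [cubic_bezier((0, 0), (0.2, 0.9), (0.8, 0.9), (1, 0), i / 10) for i in range(11)]
        print(outline)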
  • Janssens, S. E. W., Sack, A. T., Ten Oever, S., & Graaf, T. A. (2022). Calibrating rhythmic stimulation parameters to individual electroencephalography markers: The consistency of individual alpha frequency in practical lab settings. European Journal of Neuroscience, 55(11/12), 3418-3437. doi:10.1111/ejn.15418.

    Abstract

    Rhythmic stimulation can be applied to modulate neuronal oscillations. Such ‘entrainment’ is optimized when stimulation frequency is individually calibrated based on magneto-/electroencephalography (M/EEG) markers. It remains unknown how consistent such individual markers are across days/sessions, within a session, or across cognitive states, hemispheres and estimation methods, especially in a realistic, practical lab setting. We here estimated individual alpha frequency (IAF) repeatedly from short electroencephalography (EEG) measurements at rest or during an attention task (cognitive state), using single parieto-occipital electrodes in 24 participants on 4 days (between-sessions), with multiple measurements over an hour on 1 day (within-session). First, we introduce an algorithm to automatically reject power spectra without a sufficiently clear peak to ensure unbiased IAF estimations. Then we estimated IAF via the traditional ‘maximum’ method and a ‘Gaussian fit’ method. IAF was reliable within- and between-sessions for both cognitive states and hemispheres, though task-IAF estimates tended to be more variable. Overall, the ‘Gaussian fit’ method was more reliable than the ‘maximum’ method. Furthermore, we evaluated how far from an approximated ‘true’ task-related IAF the selected ‘stimulation frequency’ was, when calibrating this frequency based on a short rest-EEG, a short task-EEG, or simply selecting 10 Hz for all participants. For the ‘maximum’ method, rest-EEG calibration was best, followed by task-EEG, and then 10 Hz. For the ‘Gaussian fit’ method, rest-EEG and task-EEG-based calibration were similarly accurate, and better than 10 Hz. These results lead to concrete recommendations about valid, and automated, estimation of individual oscillation markers in experimental and clinical settings.
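    A minimal sketch of the two estimators contrasted above, applied to a synthetic power spectrum (this is not the authors' pipeline; the 8–13 Hz band, the spectrum, and the starting values are illustrative assumptions):

        import numpy as np
        from scipy.optimize import curve_fit

        freqs = np.linspace(1, 40, 400)
        psd = 1 / freqs + 0.5 * np.exp(-(freqs - 10.3) ** 2 / 2.0)   # synthetic 1/f background + alpha peak
        band = (freqs >= 8) & (freqs <= 13)

        # 'Maximum' method: frequency of the largest alpha-band power value
        iaf_max = freqs[band][np.argmax(psd[band])]

        # 'Gaussian fit' method: centre of a Gaussian fitted to the alpha band
        def gauss(f, amp, mu, sigma, offset):
            return amp * np.exp(-(f - mu) ** 2 / (2 * sigma ** 2)) + offset

        (amp, mu, sigma, offset), _ = curve_fit(gauss, freqs[band], psd[band], p0=[0.5, 10, 1, 0.1])
        print(f"IAF (maximum): {iaf_max:.2f} Hz, IAF (Gaussian fit): {mu:.2f} Hz")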
  • Janssens, S. E., Ten Oever, S., Sack, A. T., & de Graaf, T. A. (2022). “Broadband Alpha Transcranial Alternating Current Stimulation”: Exploring a new biologically calibrated brain stimulation protocol. NeuroImage, 253: 119109. doi:10.1016/j.neuroimage.2022.119109.

    Abstract

    Transcranial alternating current stimulation (tACS) can be used to study causal contributions of oscillatory brain mechanisms to cognition and behavior. For instance, individual alpha frequency (IAF) tACS was reported to enhance alpha power and impact visuospatial attention performance. Unfortunately, such results have been inconsistent and difficult to replicate. In tACS, stimulation generally involves one frequency, sometimes individually calibrated to a peak value observed in an M/EEG power spectrum. Yet, the ‘peak’ actually observed in such power spectra often contains a broader range of frequencies, raising the question whether a biologically calibrated tACS protocol containing this fuller range of alpha-band frequencies might be more effective. Here, we introduce ‘Broadband-alpha-tACS’, a complex individually calibrated electrical stimulation protocol. We band-pass filtered left posterior resting-state EEG data around the IAF (+/- 2 Hz), and converted that time series into an electrical waveform for tACS stimulation of that same left posterior parietal cortex location. In other words, we stimulated a brain region with a ‘replay’ of its own alpha-band frequency content, based on spontaneous activity. Within-subjects (N=24), we compared to a sham tACS session the effects of broadband-alpha tACS, power-matched spectral inverse (‘alpha-removed’) control tACS, and individual alpha frequency tACS, on EEG alpha power and performance in an endogenous attention task previously reported to be affected by alpha tACS. Broadband-alpha-tACS significantly modulated attention task performance (i.e., reduced the rightward visuospatial attention bias in trials without distractors, and reduced attention benefits). Alpha-removed tACS also reduced the rightward visuospatial attention bias. IAF-tACS did not significantly modulate attention task performance compared to sham tACS, but also did not statistically significantly differ from broadband-alpha-tACS. This new broadband-alpha tACS approach seems promising, but should be further explored and validated in future studies.

    Additional information

    supplementary materials
  • Jara-Ettinger, J., & Rubio-Fernández, P. (2022). The social basis of referential communication: Speakers construct physical reference based on listeners’ expected visual search. Psychological Review, 129, 1394-1413. doi:10.1037/rev0000345.

    Abstract

    A foundational assumption of human communication is that speakers should say as much as necessary, but no more. Yet, people routinely produce redundant adjectives and their propensity to do so varies cross-linguistically. Here, we propose a computational theory, whereby speakers create referential expressions designed to facilitate listeners’ reference resolution, as they process words in real time. We present a computational model of our account, the Incremental Collaborative Efficiency (ICE) model, which generates referential expressions by considering listeners’ real-time incremental processing and reference identification. We apply the ICE framework to physical reference, showing that speakers construct expressions designed to minimize listeners’ expected visual search effort during online language processing. Our model captures a number of known effects in the literature, including cross-linguistic differences in speakers’ propensity to over-specify. Moreover, the ICE model predicts graded acceptability judgments with quantitative accuracy, systematically outperforming an alternative, brevity-based model. Our findings suggest that physical reference production is best understood as driven by a collaborative goal to help the listener identify the intended referent, rather than by an egocentric effort to minimize utterance length.
  • Jara-Ettinger, J., & Rubio-Fernández, P. (2021). Quantitative mental state attributions in language understanding. Science Advances, 7: eabj0970. doi:10.1126/sciadv.abj0970.

    Abstract

    Human social intelligence relies on our ability to infer other people’s mental states such as their beliefs, desires, and intentions. While people are proficient at mental state inference from physical action, it is unknown whether people can make inferences of comparable granularity from simple linguistic events. Here, we show that people can make quantitative mental state attributions from simple referential expressions, replicating the fine-grained inferential structure characteristic of nonlinguistic theory of mind. Moreover, people quantitatively adjust these inferences after brief exposures to speaker-specific speech patterns. These judgments matched the predictions made by our computational model of theory of mind in language, but could not be explained by a simpler qualitative model that attributes mental states deductively. Our findings show how the connection between language and theory of mind runs deep, with their interaction showing in one of the most fundamental forms of human communication: reference.

    Additional information

    https://osf.io/h8qfy/
  • Jeltema, H., Ohlerth, A.-K., de Wit, A., Wagemakers, M., Rofes, A., Bastiaanse, R., & Drost, G. (2021). Comparing navigated transcranial magnetic stimulation mapping and "gold standard" direct cortical stimulation mapping in neurosurgery: a systematic review. Neurosurgical Review, (4), 1903-1920. doi:10.1007/s10143-020-01397-x.

    Abstract

    The objective of this systematic review is to create an overview of the literature on the comparison of navigated transcranial magnetic stimulation (nTMS) as a mapping tool to the current gold standard, which is (intraoperative) direct cortical stimulation (DCS) mapping. A search in the databases of PubMed, EMBASE, and Web of Science was performed. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines and recommendations were used. Thirty-five publications were included in the review, describing a total of 552 patients. All studies concerned either mapping of motor or language function. No comparative data for nTMS and DCS for other neurological functions were found. For motor mapping, the distances between the cortical representation of the different muscle groups identified by nTMS and DCS varied between 2 and 16 mm. Regarding mapping of language function, solely an object naming task was performed in the comparative studies on nTMS and DCS. Sensitivity and specificity ranged from 10 to 100% and 13.3–98%, respectively, when nTMS language mapping was compared with DCS mapping. The positive predictive value (PPV) and negative predictive value (NPV) ranged from 17 to 75% and 57–100% respectively. The available evidence for nTMS as a mapping modality for motor and language function is discussed.
  • Jescheniak, J. D., Levelt, W. J. M., & Meyer, A. S. (2003). Specific word frequency is not all that counts in speech production: Comments on Caramazza, Costa, et al. (2001) and new experimental data. Journal of Experimental Psychology: Learning, Memory, & Cognition, 29(3), 432-438. doi:10.1037/0278-7393.29.3.432.

    Abstract

    A. Caramazza, A. Costa, M. Miozzo, and Y. Bi (2001) reported a series of experiments demonstrating that the ease of producing a word depends only on the frequency of that specific word but not on the frequency of a homophone twin. A. Caramazza, A. Costa, et al. concluded that homophones have separate word form representations and that the absence of frequency-inheritance effects for homophones undermines an important argument in support of 2-stage models of lexical access, which assume that syntactic (lemma) representations mediate between conceptual and phonological representations. The authors of this article evaluate the empirical basis of this conclusion, report 2 experiments demonstrating a frequency-inheritance effect, and discuss other recent evidence. It is concluded that homophones share a common word form and that the distinction between lemmas and word forms should be upheld.
  • Jessop, A., & Chang, F. (2022). Thematic role tracking difficulties across multiple visual events influences role use in language production. Visual Cognition, 30(3), 151-173. doi:10.1080/13506285.2021.2013374.

    Abstract

    Language sometimes requires tracking the same participant in different thematic roles across multiple visual events (e.g., The girl that another girl pushed chased a third girl). To better understand how vision and language interact in role tracking, participants described videos of multiple randomly moving circles where two push events were presented. A circle might have the same role in both push events (e.g., agent) or different roles (e.g., agent of one push and patient of other push). The first three studies found higher production accuracy for the same role conditions compared to the different role conditions across different linguistic structure manipulations. The last three studies compared a featural account, where role information was associated with particular circles, or a relational account, where role information was encoded with particular push events. These studies found no interference between different roles, contrary to the predictions of the featural account. The foil was manipulated in these studies to increase the saliency of the second push and it was found that this changed the accuracy in describing the first push. The results suggest that language-related thematic role processing uses a relational representation that can encode multiple events.

    Additional information

    https://doi.org/10.17605/OSF.IO/PKXZH
  • Johnson, E. K., Bruggeman, L., & Cutler, A. (2018). Abstraction and the (misnamed) language familiarity effect. Cognitive Science, 42, 633-645. doi:10.1111/cogs.12520.

    Abstract

    Talkers are recognized more accurately if they are speaking the listeners’ native language rather than an unfamiliar language. This “language familiarity effect” has been shown not to depend upon comprehension and must instead involve language sound patterns. We further examine the level of sound-pattern processing involved, by comparing talker recognition in foreign languages versus two varieties of English, by (a) English speakers of one variety, (b) English speakers of the other variety, and (c) non-native listeners (more familiar with one of the varieties). All listener groups performed better with native than foreign speech, but no effect of language variety appeared: Native listeners discriminated talkers equally well in each, with the native variety never outdoing the other variety, and non-native listeners discriminated talkers equally poorly in each, irrespective of the variety's familiarity. The results suggest that this talker recognition effect rests not on simple familiarity, but on an abstract level of phonological processing.
  • Johnson, E. K., Jusczyk, P. W., Cutler, A., & Norris, D. (2003). Lexical viability constraints on speech segmentation by infants. Cognitive Psychology, 46(1), 65-97. doi:10.1016/S0010-0285(02)00507-8.

    Abstract

    The Possible Word Constraint limits the number of lexical candidates considered in speech recognition by stipulating that input should be parsed into a string of lexically viable chunks. For instance, an isolated single consonant is not a feasible word candidate. Any segmentation containing such a chunk is disfavored. Five experiments using the head-turn preference procedure investigated whether, like adults, 12-month-olds observe this constraint in word recognition. In Experiments 1 and 2, infants were familiarized with target words (e.g., rush), then tested on lists of nonsense items containing these words in “possible” (e.g., “niprush” [nip + rush]) or “impossible” positions (e.g., “prush” [p + rush]). The infants listened significantly longer to targets in “possible” versus “impossible” contexts when targets occurred at the end of nonsense items (rush in “prush”), but not when they occurred at the beginning (tan in “tance”). In Experiments 3 and 4, 12-month-olds were similarly familiarized with target words, but test items were real words in sentential contexts (win in “wind” versus “window”). The infants listened significantly longer to words in the “possible” condition regardless of target location. Experiment 5 with targets at the beginning of isolated real words (e.g., win in “wind”) replicated Experiment 2 in showing no evidence of viability effects in beginning position. Taken together, the findings suggest that, in situations in which 12-month-olds are required to rely on their word segmentation abilities, they give evidence of observing lexical viability constraints in the way that they parse fluent speech.
  • Jones, G., Cabiddu, F., Andrews, M., & Rowland, C. F. (2021). Chunks of phonological knowledge play a significant role in children’s word learning and explain effects of neighborhood size, phonotactic probability, word frequency and word length. Journal of Memory and Language, 119: 104232. doi:10.1016/j.jml.2021.104232.

    Abstract

    A key omission from many accounts of children’s early word learning is the linguistic knowledge that the child has acquired up to the point when learning occurs. We simulate this knowledge using a computational model that learns phoneme and word sequence knowledge from naturalistic language corpora. We show how this simple model is able to account for effects of word length, word frequency, neighborhood density and phonotactic probability on children’s early word learning. Moreover, we show how effects of neighborhood density and phonotactic probability on word learning are largely influenced by word length, with our model being able to capture all effects. We then use predictions from the model to show how the ease by which a child learns a new word from maternal input is directly influenced by the phonological knowledge that the child has acquired from other words up to the point of encountering the new word. There are major implications of this work: models and theories of early word learning need to incorporate existing sublexical and lexical knowledge in explaining developmental change while well-established indices of word learning are rejected in favor of phonological knowledge of varying grain sizes.

    Additional information

    supplementary data
    research data
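    To illustrate the flavour of a chunk-based account (a toy sketch under assumed rules, not the model reported in the paper), the following Python snippet builds a store of phoneme chunks from a small corpus and scores a new word by how few existing chunks are needed to cover it:

```python
# Illustrative sketch only: a toy chunk store built from phoneme strings.
# The greedy merge rule and the "coverage" learnability score are assumptions
# made for this example, not the model reported in the paper.
from collections import Counter

def segment(word, chunks):
    """Greedy left-to-right segmentation into the longest known chunks."""
    out, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):          # try longest chunk first
            if word[i:j] in chunks or j - i == 1:
                out.append(word[i:j])
                i = j
                break
    return out

def build_chunk_store(words, min_count=2, passes=3):
    """Start from single phonemes; repeatedly merge frequent adjacent chunks."""
    chunks = {ph for w in words for ph in w}
    for _ in range(passes):
        pair_counts = Counter()
        for w in words:
            parts = segment(w, chunks)
            for a, b in zip(parts, parts[1:]):
                pair_counts[a + b] += 1
        chunks |= {pair for pair, n in pair_counts.items() if n >= min_count}
    return chunks

def learnability(word, chunks):
    """Fewer chunks needed to cover a new word -> easier to learn (toy proxy)."""
    return 1.0 / len(segment(word, chunks))

corpus = ["kat", "kats", "katje", "tak", "taks"]   # made-up phoneme strings
store = build_chunk_store(corpus)
print(learnability("kat", store), learnability("mop", store))
```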
  • Jongman, S. R., Khoe, Y. H., & Hintz, F. (2021). Vocabulary size influences spontaneous speech in native language users: Validating the use of automatic speech recognition in individual differences research. Language and Speech, 64(1), 35-51. doi:10.1177/0023830920911079.

    Abstract

    Previous research has shown that vocabulary size affects performance on laboratory word production tasks. Individuals who know many words show faster lexical access and retrieve more words belonging to pre-specified categories than individuals who know fewer words. The present study examined the relationship between receptive vocabulary size and speaking skills as assessed in a natural sentence production task. We asked whether measures derived from spontaneous responses to every-day questions correlate with the size of participants’ vocabulary. Moreover, we assessed the suitability of automatic speech recognition for the analysis of participants’ responses in complex language production data. We found that vocabulary size predicted indices of spontaneous speech: Individuals with a larger vocabulary produced more words and had a higher speech-silence ratio compared to individuals with a smaller vocabulary. Importantly, these relationships were reliably identified using manual and automated transcription methods. Taken together, our results suggest that spontaneous speech elicitation is a useful method to investigate natural language production and that automatic speech recognition can alleviate the burden of labor-intensive speech transcription.
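    As a rough illustration of the spontaneous-speech indices mentioned above, the sketch below computes a word count and a speech-to-silence ratio from word-level timestamps of the kind speech recognizers typically return; the input format and pause handling are assumptions for this example, not the authors' pipeline:

```python
# Illustrative sketch: spontaneous-speech indices from word-level timestamps.
# The input format (list of (word, start_s, end_s) tuples) and the treatment of
# pauses are assumptions for this example, not the pipeline used in the paper.

def speech_indices(words, response_duration_s):
    """Return (number of words, speech-to-silence ratio) for one response."""
    n_words = len(words)
    speech_time = sum(end - start for _, start, end in words)
    silence_time = max(response_duration_s - speech_time, 1e-9)  # avoid /0
    return n_words, speech_time / silence_time

# Hypothetical recognizer output for a 6-second answer to an everyday question.
transcript = [("ik", 0.4, 0.6), ("ga", 0.6, 0.8), ("graag", 0.9, 1.3),
              ("naar", 1.3, 1.5), ("de", 1.5, 1.6), ("bioscoop", 1.7, 2.4)]
print(speech_indices(transcript, response_duration_s=6.0))
```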
  • Kalashnikova, M., Escudero, P., & Kidd, E. (2018). The development of fast-mapping and novel word retention strategies in monolingual and bilingual infants. Developmental Science, 21(6): e12674. doi:10.1111/desc.12674.

    Abstract

    The mutual exclusivity (ME) assumption is proposed to facilitate early word learning by guiding infants to map novel words to novel referents. This study assessed the emergence and use of ME to both disambiguate and retain the meanings of novel words across development in 18‐month‐old monolingual and bilingual children (Experiment 1; N = 58), and in a sub‐group of these children again at 24 months of age (Experiment 2: N = 32). Both monolinguals and bilinguals employed ME to select the referent of a novel label to a similar extent at 18 and 24 months. At 18 months, there were also no differences in novel word retention between the two language‐background groups. However, at 24 months, only monolinguals showed the ability to retain these label–object mappings. These findings indicate that the development of the ME assumption as a reliable word‐learning strategy is shaped by children's individual language exposure and experience with language use.

  • Kanero, J., Geçkin, V., Oranç, C., Mamus, E., Küntay, A. C., & Göksun, T. (2018). Social robots for early language learning: Current evidence and future directions. Child Development Perspectives, 12(3), 146-151. doi:10.1111/cdep.12277.

    Abstract

    In this article, we review research on child–robot interaction (CRI) to discuss how social robots can be used to scaffold language learning in young children. First we provide reasons why robots can be useful for teaching first and second languages to children. Then we review studies on CRI that used robots to help children learn vocabulary and produce language. The studies vary in first and second languages and demographics of the learners (typically developing children and children with hearing and communication impairments). We conclude that, although social robots are useful for teaching language to children, evidence suggests that robots are not as effective as human teachers. However, this conclusion is not definitive because robots that tutor students in language have not been evaluated rigorously and technology is advancing rapidly. We suggest that CRI offers an opportunity for research and list possible directions for that work.
  • Kapteijns, B., & Hintz, F. (2021). Comparing predictors of sentence self-paced reading times: Syntactic complexity versus transitional probability metrics. PLoS One, 16(7): e0254546. doi:10.1371/journal.pone.0254546.

    Abstract

    When estimating the influence of sentence complexity on reading, researchers typically opt for one of two main approaches: Measuring syntactic complexity (SC) or transitional probability (TP). Comparisons of the predictive power of both approaches have yielded mixed results. To address this inconsistency, we conducted a self-paced reading experiment. Participants read sentences of varying syntactic complexity. From two alternatives, we selected the set of SC and TP measures, respectively, that provided the best fit to the self-paced reading data. We then compared the contributions of the SC and TP measures to reading times when entered into the same model. Our results showed that both measures explained significant portions of variance in self-paced reading times. Thus, researchers aiming to measure sentence complexity should take both SC and TP into account. All of the analyses were conducted with and without control variables known to influence reading times (word/sentence length, word frequency and word position) to showcase how the effects of SC and TP change in the presence of the control variables.

    Additional information

    supporting information
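    For readers unfamiliar with transitional probability metrics, the sketch below shows one common operationalization, a forward bigram probability P(w_i | w_(i-1)) with add-one smoothing, estimated from a toy corpus; the specific TP measures compared in the paper may differ:

```python
# Illustrative sketch: forward transitional probability P(w_i | w_{i-1}) from
# bigram counts, with add-one smoothing. One common operationalization of TP,
# not necessarily the exact metrics compared in the paper.
import math
from collections import Counter

def train_bigrams(sentences):
    """Collect unigram and bigram counts from whitespace-tokenized sentences."""
    unigrams, bigrams = Counter(), Counter()
    for sent in sentences:
        tokens = ["<s>"] + sent.lower().split()
        unigrams.update(tokens)
        bigrams.update(zip(tokens, tokens[1:]))
    return unigrams, bigrams

def transitional_probabilities(sentence, unigrams, bigrams):
    """Per-word P(w_i | w_{i-1}) and surprisal -log2 P, with add-one smoothing."""
    tokens = ["<s>"] + sentence.lower().split()
    vocab = len(unigrams)
    out = []
    for prev, word in zip(tokens, tokens[1:]):
        p = (bigrams[(prev, word)] + 1) / (unigrams[prev] + vocab)
        out.append((word, p, -math.log2(p)))
    return out

corpus = ["the dog chased the cat", "the cat slept", "a dog barked"]
uni, bi = train_bigrams(corpus)
for word, p, surprisal in transitional_probabilities("the dog slept", uni, bi):
    print(f"{word}\tP={p:.3f}\tsurprisal={surprisal:.2f}")
```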
  • Karadöller, D. Z., Sumer, B., & Ozyurek, A. (2021). Effects and non-effects of late language exposure on spatial language development: Evidence from deaf adults and children. Language Learning and Development, 17(1), 1-25. doi:10.1080/15475441.2020.1823846.

    Abstract

    Late exposure to the first language, as in the case of deaf children with hearing parents, hinders the production of linguistic expressions, even in adulthood. Less is known about the development of language soon after language exposure and if late exposure hinders all domains of language in children and adults. We compared late signing adults and children (MAge = 8;5) 2 years after exposure to sign language, to their age-matched native signing peers in expressions of two types of locative relations that are acquired in certain cognitive-developmental order: view-independent (IN-ON-UNDER) and view-dependent (LEFT-RIGHT). Late signing children and adults differed from native signers in their use of linguistic devices for view-dependent relations but not for view-independent relations. These effects were also modulated by the morphological complexity. Hindering effects of late language exposure on the development of language in children and adults are not absolute but are modulated by cognitive and linguistic complexity.
  • Karadöller, D. Z., Sumer, B., Ünal, E., & Özyürek, A. (2022). Sign advantage: Both children and adults’ spatial expressions in sign are more informative than those in speech and gestures combined. Journal of Child Language. Advance online publication. doi:10.1017/S0305000922000642.

    Abstract

    Expressing Left-Right relations is challenging for speaking-children. Yet, this challenge was absent for signing-children, possibly due to iconicity in the visual-spatial modality of expression. We investigate whether there is also a modality advantage when speaking-children’s co-speech gestures are considered. Eight-year-old child and adult hearing monolingual Turkish speakers and deaf signers of Turkish-Sign-Language described pictures of objects in various spatial relations. Descriptions were coded for informativeness in speech, sign, and speech-gesture combinations for encoding Left-Right relations. The use of co-speech gestures increased the informativeness of speakers’ spatial expressions compared to speech-only. This pattern was more prominent for children than adults. However, signing-adults and children were more informative than child and adult speakers even when co-speech gestures were considered. Thus, both speaking- and signing-children benefit from iconic expressions in visual modality. Finally, in each modality, children were less informative than adults, pointing to the challenge of this spatial domain in development.
  • Karaminis, T., Hintz, F., & Scharenborg, O. (2022). The presence of background noise extends the competitor space in native and non-native spoken-word recognition: Insights from computational modeling. Cognitive Science, 46(2): e13110. doi:10.1111/cogs.13110.

    Abstract

    Oral communication often takes place in noisy environments, which challenge spoken-word recognition. Previous research has suggested that the presence of background noise extends the number of candidate words competing with the target word for recognition and that this extension affects the time course and accuracy of spoken-word recognition. In this study, we further investigated the temporal dynamics of competition processes in the presence of background noise, and how these vary in listeners with different language proficiency (i.e., native and non-native) using computational modeling. We developed ListenIN (Listen-In-Noise), a neural-network model based on an autoencoder architecture, which learns to map phonological forms onto meanings in two languages and simulates native and non-native spoken-word comprehension. Simulation A established that ListenIN captures the effects of noise on accuracy rates and the number of unique misperception errors of native and non-native listeners in an offline spoken-word identification task (Scharenborg et al., 2018). Simulation B showed that ListenIN captures the effects of noise in online task settings and accounts for looking preferences of native (Hintz & Scharenborg, 2016) and non-native (new data collected for this study) listeners in a visual-world paradigm. We also examined the model’s activation states during online spoken-word recognition. These analyses demonstrated that the presence of background noise increases the number of competitor words which are engaged in phonological competition and that this happens in similar ways intra- and interlinguistically and in native and non-native listening. Taken together, our results support accounts positing a ‘many-additional-competitors scenario’ for the effects of noise on spoken-word recognition.
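    A minimal sketch of the general idea (not the ListenIN architecture itself; the lexicon, coding scheme and noise levels are invented for illustration): a simple form-to-meaning network in which adding noise to the phonological input activates more candidate words:

```python
# Minimal sketch (not the ListenIN model): a linear form-to-meaning mapping
# trained with the delta rule, used to show that noisier phonological input
# activates more competitor words. Lexicon, coding and noise level are made up.
import numpy as np

rng = np.random.default_rng(0)
n_words, form_dim = 20, 30
forms = rng.normal(size=(n_words, form_dim))          # phonological form vectors
meanings = np.eye(n_words)                            # localist meaning units

W = np.zeros((form_dim, n_words))
for _ in range(2000):                                 # delta-rule training
    i = rng.integers(n_words)
    out = forms[i] @ W
    W += 0.01 * np.outer(forms[i], meanings[i] - out)

def active_competitors(word_idx, noise_sd, threshold=0.3):
    """How many meaning units exceed threshold for a (noisy) spoken form."""
    noisy_form = forms[word_idx] + rng.normal(scale=noise_sd, size=form_dim)
    activation = noisy_form @ W
    return int((activation > threshold).sum())

for sd in (0.0, 0.5, 1.0):
    counts = [active_competitors(i, sd) for i in range(n_words)]
    print(f"noise sd={sd}: mean active candidates = {np.mean(counts):.2f}")
```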
  • Karsan, Ç., Özdemir, R. S., Bulut, T., & Hanoğlu, L. (2022). The effects of single-session cathodal and bihemispheric tDCS on fluency in stuttering. Journal of Neurolinguistics, 63(101064): 101064. doi:10.1016/j.jneuroling.2022.101064.

    Abstract

    Developmental stuttering is a fluency disorder that adversely affects many aspects of a person's life. Recent transcranial direct current stimulation (tDCS) studies have shown promise to improve fluency in people who stutter. To date, bihemispheric tDCS has not been investigated in this population. In the present study, we aimed to investigate the effects of single-session bihemispheric and unihemispheric cathodal tDCS on fluency in adults who stutter. We predicted that bihemispheric tDCS with anodal stimulation to the left IFG and cathodal stimulation to the right IFG would improve fluency better than the sham and cathodal tDCS to the right IFG. Seventeen adults who stutter completed this single-blind, crossover, sham-controlled tDCS experiment. All participants received 20 min of tDCS alongside metronome-timed speech during intervention sessions. Three tDCS interventions were administered: bihemispheric tDCS with anodal stimulation to the left IFG and cathodal stimulation to the right IFG, unihemispheric tDCS with cathodal stimulation to the right IFG, and sham stimulation. Speech fluency during reading and conversation was assessed before, immediately after, and one week after each intervention session. There was no significant fluency improvement in conversation for any tDCS intervention. Reading fluency improved following both bihemispheric and cathodal tDCS interventions. tDCS montages were not significantly different in their effects on fluency.

  • Kartushina, N., Mani, N., Aktan-Erciyes, A., Alaslani, K., Aldrich, N. J., Almohammadi, A., Alroqi, H., Anderson, L. M., Andonova, E., Aussems, S., Babineau, M., Barokova, M., Bergmann, C., Cashon, C., Custode, S., De Carvalho, A., Dimitrova, N., Dynak, A., Farah, R., Fennell, C., Fiévet, A.-C., Frank, M. C., Gavrilova, M., Gendler-Shalev, H., Gibson, S. P., Golway, K., Gonzalez-Gomez, N., Haman, E., Hannon, E., Havron, N., Hay, J., Hendriks, C., Horowitz-Kraus, T., Kalashnikova, M., Kanero, J., Keller, C., Krajewski, G., Laing, C., Lundwall, R. A., Łuniewska, M., Mieszkowska, K., Munoz, L., Nave, K., Olesen, N., Perry, L., Rowland, C. F., Santos Oliveira, D., Shinskey, J., Veraksa, A., Vincent, K., Zivan, M., & Mayor, J. (2022). COVID-19 first lockdown as a window into language acquisition: Associations between caregiver-child activities and vocabulary gains. Language Development Research, 2, 1-36. doi:10.34842/abym-xv34.

    Abstract

    The COVID-19 pandemic, and the resulting closure of daycare centers worldwide, led to unprecedented changes in children’s learning environments. This period of increased time at home with caregivers, with limited access to external sources (e.g., daycares), provides a unique opportunity to examine the associations between the caregiver-child activities and children’s language development. The vocabularies of 1742 children aged 8-36 months across 13 countries and 12 languages were evaluated at the beginning and end of the first lockdown period in their respective countries (from March to September 2020). Children who had less passive screen exposure and whose caregivers read more to them showed larger gains in vocabulary development during lockdown, after controlling for SES and other caregiver-child activities. Children also gained more words than expected (based on normative data) during lockdown; either caregivers were more aware of their child’s development or vocabulary development benefited from intense caregiver-child interaction during lockdown.
  • Kember, H., Choi, J., Yu, J., & Cutler, A. (2021). The processing of linguistic prominence. Language and Speech, 64(2), 413-436. doi:10.1177/0023830919880217.

    Abstract

    Prominence, the expression of informational weight within utterances, can be signaled by prosodic highlighting (head-prominence, as in English) or by position (as in Korean edge-prominence). Prominence confers processing advantages, even if conveyed only by discourse manipulations. Here we compared processing of prominence in English and Korean, using a task that indexes processing success, namely recognition memory. In each language, participants’ memory was tested for target words heard in sentences in which they were prominent due to prosody, position, both or neither. Prominence produced a recall advantage, but the relative effects differed across languages. For Korean listeners the positional advantage was greater, but for English listeners prosodic and syntactic prominence had equivalent and additive effects. In a further experiment semantic and phonological foils tested depth of processing of the recall targets. Both foil types were correctly rejected, suggesting that semantic processing had not reached the level at which word form was no longer available. Together the results suggest that prominence processing is primarily driven by universal effects of information structure; but language-specific differences in frequency of experience prompt different relative advantages of prominence signal types. Processing efficiency increases in each case, however, creating more accurate and more rapidly contactable memory representations.
  • Kemmerer, S. K., Sack, A. T., de Graaf, T. A., Ten Oever, S., De Weerd, P., & Schuhmann, T. (2022). Frequency-specific transcranial neuromodulation of alpha power alters visuospatial attention performance. Brain Research, 1782: 147834. doi:10.1016/j.brainres.2022.147834.

    Abstract

    Transcranial alternating current stimulation (tACS) at 10 Hz has been shown to modulate spatial attention. However, the frequency-specificity and the oscillatory changes underlying this tACS effect are still largely unclear. Here, we applied high-definition tACS at individual alpha frequency (IAF), two control frequencies (IAF+/-2Hz) and sham to the left posterior parietal cortex and measured its effects on visuospatial attention performance and offline alpha power (using electroencephalography, EEG). We revealed a behavioural and electrophysiological stimulation effect relative to sham for IAF but not control frequency stimulation conditions: there was a leftward lateralization of alpha power for IAF tACS, which differed from sham for the first out of three minutes following tACS. At a high value of this EEG effect (moderation effect), we observed a leftward attention bias relative to sham. This effect was task-specific, i.e., it could be found in an endogenous attention but not in a detection task. Only in the IAF tACS condition, we also found a correlation between the magnitude of the alpha lateralization and the attentional bias effect. Our results support a functional role of alpha oscillations in visuospatial attention and the potential of tACS to modulate it. The frequency-specificity of the effects suggests that an individualization of the stimulation frequency is necessary in heterogeneous target groups with a large variation in IAF.

    Additional information

    supplementary data
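    The alpha lateralization referred to here is commonly quantified as a normalized left-minus-right power difference; the snippet below computes such an index from per-channel alpha band power. The channel names and the exact formula are illustrative assumptions, not necessarily the computation used in the study:

```python
# Illustrative sketch: a normalized alpha-power lateralization index,
# (left - right) / (left + right), from per-channel alpha band power.
# Channel names and this exact formula are assumptions, not necessarily the
# computation used in the paper.
import numpy as np

def alpha_lateralization(power, left_chans=("P3", "PO3"), right_chans=("P4", "PO4")):
    """power: dict mapping channel name -> alpha band power (e.g., 8-12 Hz)."""
    left = np.mean([power[ch] for ch in left_chans])
    right = np.mean([power[ch] for ch in right_chans])
    return (left - right) / (left + right)   # > 0: leftward alpha lateralization

example_power = {"P3": 4.2, "PO3": 3.9, "P4": 3.1, "PO4": 2.8}  # made-up values
print(alpha_lateralization(example_power))
```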
  • Kemmerer, S. K., De Graaf, T. A., Ten Oever, S., Erkens, M., De Weerd, P., & Sack, A. T. (2022). Parietal but not temporoparietal alpha-tACS modulates endogenous visuospatial attention. Cortex, 154, 149-166. doi:10.1016/j.cortex.2022.01.021.

    Abstract

    Visuospatial attention can either be voluntarily directed (endogenous/top-down attention) or automatically triggered (exogenous/bottom-up attention). Recent research showed that dorsal parietal transcranial alternating current stimulation (tACS) at alpha frequency modulates the spatial attentional bias in an endogenous but not in an exogenous visuospatial attention task. Yet, the reason for this task-specificity remains unexplored. Here, we tested whether this dissociation relates to the proposed differential role of the dorsal attention network (DAN) and ventral attention network (VAN) in endogenous and exogenous attention processes respectively. To that aim, we targeted the left and right dorsal parietal node of the DAN, as well as the left and right ventral temporoparietal node of the VAN using tACS at the individual alpha frequency. Every participant completed all four stimulation conditions and a sham condition in five separate sessions. During tACS, we assessed the behavioral visuospatial attention bias via an endogenous and exogenous visuospatial attention task. Additionally, we measured offline alpha power immediately before and after tACS using electroencephalography (EEG). The behavioral data revealed an effect of tACS on the endogenous but not exogenous attention bias, with a greater leftward bias during (sham-corrected) left than right hemispheric stimulation. In line with our hypothesis, this effect was brain area-specific, i.e., present for dorsal parietal but not ventral temporoparietal tACS. However, contrary to our expectations, there was no effect of ventral temporoparietal tACS on the exogenous visuospatial attention bias. Hence, no double dissociation between the two targeted attention networks. There was no effect of either tACS condition on offline alpha power. Our behavioral data reveal that dorsal parietal but not ventral temporoparietal alpha oscillations steer endogenous visuospatial attention. This brain-area specific tACS effect matches the previously proposed dissociation between the DAN and VAN and, by showing that the spatial attention bias effect does not generalize to any lateral posterior tACS montage, renders lateral cutaneous and retinal effects for the spatial attention bias in the dorsal parietal condition unlikely. Yet the absence of tACS effects on the exogenous attention task suggests that ventral temporoparietal alpha oscillations are not functionally relevant for exogenous visuospatial attention. We discuss the potential implications of this finding in the context of an emerging theory on the role of the ventral temporoparietal node.

    Additional information

    supplementary material
  • Kempen, G., & Harbusch, K. (2018). A competitive mechanism selecting verb-second versus verb-final word order in causative and argumentative clauses of spoken Dutch: A corpus-linguistic study. Language Sciences, 69, 30-42. doi:10.1016/j.langsci.2018.05.005.

    Abstract

    In Dutch and German, the canonical order of subject, object(s) and finite verb is ‘verb-second’ (V2) in main but ‘verb-final’ (VF) in subordinate clauses. This occasionally leads to the production of noncanonical word orders. Familiar examples are causative and argumentative clauses introduced by a subordinating conjunction (Du. omdat, Ger. weil ‘because’): the omdat/weil-V2 phenomenon. Such clauses may also be introduced by coordinating conjunctions (Du. want, Ger. denn), which license V2 exclusively. However, want/denn-VF structures are unknown. We present the results of a corpus study on the incidence of omdat-V2 in spoken Dutch, and compare them to published data on weil-V2 in spoken German. Basic findings: omdat-V2 is much less frequent than weil-V2 (ratio almost 1:8); and the frequency relations between coordinating and subordinating conjunctions are opposite (want >> omdat; denn << weil). We propose that conjunction selection and V2/VF selection proceed partly independently, and sometimes miscommunicate—e.g. yielding omdat/weil paired with V2. Want/denn-VF pairs do not occur because want/denn clauses are planned as autonomous sentences, which take V2 by default. We sketch a simple feedforward neural network with two layers of nodes (representing conjunctions and word orders, respectively) that can simulate the observed data pattern through inhibition-based competition of the alternative choices within the node layers.
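    The final sentence of the abstract describes a two-layer network with inhibition-based competition. The snippet below is one possible toy instantiation of that idea; all weights, parameters and the settling rule are invented for illustration and are not the authors' simulation:

```python
# Toy instantiation of a two-layer competition network of the kind sketched in
# the abstract: conjunction nodes feed word-order nodes, and lateral inhibition
# within each layer selects a winner. All weights and parameters are invented
# for illustration; this is not the authors' simulation.
import numpy as np

CONJ = ["omdat", "want"]          # subordinating vs. coordinating 'because'
ORDER = ["VF", "V2"]              # verb-final vs. verb-second word order

# Assumed feedforward support from conjunction nodes to word-order nodes.
W_ff = np.array([[0.9, 0.3],      # omdat -> mostly VF, weak V2 support
                 [0.0, 1.0]])     # want  -> V2 only

def settle(conj_input, inhibition=0.6, steps=50, rate=0.2):
    """Iteratively update both layers with lateral inhibition until they settle."""
    ext = np.array(conj_input, dtype=float)   # external input to conjunction nodes
    conj = ext.copy()
    order = np.zeros(len(ORDER))
    for _ in range(steps):
        conj_net = ext - inhibition * (conj.sum() - conj)
        order_net = conj @ W_ff - inhibition * (order.sum() - order)
        conj += rate * (np.clip(conj_net, 0, 1) - conj)
        order += rate * (np.clip(order_net, 0, 1) - order)
    return dict(zip(ORDER, order.round(2)))

print("omdat:", settle([1.0, 0.1]))   # VF wins: canonical omdat + verb-final
print("want: ", settle([0.1, 1.0]))   # V2 wins: want licenses verb-second only
```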
  • Kempen, G. (1995). De mythe van het woordbeeld: Spellingherziening taalpsychologisch doorgelicht. Onze Taal, 64(11), 275-277.
  • Kempen, G. (1995). Drinken eten mij Nim. Intermediair, 31(19), 41-45.
  • Kempen, G. (1995). 'Hier spreekt men Nederlands'. EMNET: Nieuwsbrief Elektronische Media, 22, 1.
  • Kempen, G., & Harbusch, K. (2003). An artificial opposition between grammaticality and frequency: Comment on Bornkessel, Schlesewsky & Friederici (2002). Cognition, 90(2), 205-210 [Rectification on p. 215]. doi:10.1016/S0010-0277(03)00145-8.

    Abstract

    In a recent Cognition paper (Cognition 85 (2002) B21), Bornkessel, Schlesewsky, and Friederici report ERP data that they claim “show that online processing difficulties induced by word order variations in German cannot be attributed to the relative infrequency of the constructions in question, but rather appear to reflect the application of grammatical principles during parsing” (p. B21). In this commentary we demonstrate that the posited contrast between grammatical principles and construction (in)frequency as sources of parsing problems is artificial because it is based on factually incorrect assumptions about the grammar of German and on inaccurate corpus frequency data concerning the German constructions involved.
  • Kempen, G. (1999). Fiets en (centri)fuge. Onze Taal, 68, 88.
  • Kempen, G. (1995). IJ of Y? Onze Taal, 64(9), 205-206.
  • Kempen, G. (1979). La mise en paroles, aspects psychologiques de l'expression orale. Études de Linguistique Appliquée, 33, 19-28.

    Abstract

    Remarks on the factors involved in the process of formulating utterances.
  • Kempen, G., & Kolk, H. (1986). Het voortbrengen van normale en agrammatische taal. Van Horen Zeggen, 27(2), 36-40.
  • Kempen, G. (1995). Processing discontinuous lexical items: A reply to Frazier. Cognition, 55, 219-221. doi:10.1016/0010-0277(94)00657-7.

    Abstract

    Comments on a study by Frazier and others on Dutch-language lexical processing. Claims that the control condition in the experiment was inadequate and that an assumption made by Frazier about closed class verbal items is inaccurate, and proposes an alternative account of a subset of the data from the experiment.
  • Kempen, G. (1995). Processing separable complex verbs in Dutch: Comments on Frazier, Flores d'Arcais, and Coolen (1993). Cognition, 54, 353-356. doi:10.1016/0010-0277(94)00649-6.

    Abstract

    Raises objections to L. Frazier et al's (see record 1994-32229-001) report of an experimental study intended to test Schreuder's (1990) Morphological Integration (MI) model concerning the processing of separable and inseparable verbs and shows that the logic of the experiment is flawed. The problem is rooted in the notion of a separable complex verb. The conclusion is drawn that Frazier et al's experimental data cannot be taken as evidence for the theoretical propositions they develop about the MI model.
