Publications

  • Hagoort, P. (1998). De electrofysiologie van taal: Wat hersenpotentialen vertellen over het menselijk taalvermogen [The electrophysiology of language: What brain potentials reveal about the human language faculty]. Neuropraxis, 2, 223-229.
  • Hagoort, P. (1998). De spreker als sprinter [The speaker as sprinter]. Psychologie, 17, 48-49.
  • Hagoort, P., & Brown, C. M. (2000). ERP effects of listening to speech compared to reading: the P600/SPS to syntactic violations in spoken sentences and rapid serial visual presentation. Neuropsychologia, 38, 1531-1549.

    Abstract

    In this study, event-related brain potential effects of speech processing are obtained and compared to similar effects in sentence reading. In two experiments sentences were presented that contained three different types of grammatical violations. In one experiment sentences were presented word by word at a rate of four words per second. The grammatical violations elicited a Syntactic Positive Shift (P600/SPS), 500 ms after the onset of the word that rendered the sentence ungrammatical. The P600/SPS consisted of two phases, an early phase with a relatively equal anterior-posterior distribution and a later phase with a strong posterior distribution. We interpret the first phase as an indication of structural integration complexity, and the second phase as an indication of failing parsing operations and/or an attempt at reanalysis. In the second experiment the same syntactic violations were presented in sentences spoken at a normal rate and with normal intonation. These violations elicited a P600/SPS with the same onset as was observed for the reading of these sentences. In addition, two of the three violations showed a preceding frontal negativity, most clearly over the left hemisphere.
  • Hagoort, P., & Brown, C. M. (2000). ERP effects of listening to speech: semantic ERP effects. Neuropsychologia, 38, 1518-1530.

    Abstract

    In this study, event-related brain potential effects of speech processing are obtained and compared to similar effects in sentence reading. In two experiments spoken sentences were presented with semantic violations in sentence-final or mid-sentence positions. For these violations N400 effects were obtained that were very similar to N400 effects obtained in reading. However, the N400 effects in speech were preceded by an earlier negativity (N250). This negativity is not commonly observed with written input. The early effect is explained as a manifestation of a mismatch between the word forms expected on the basis of the context, and the actual cohort of activated word candidates that is generated on the basis of the speech signal.
  • Hagoort, P. (2003). How the brain solves the binding problem for language: A neurocomputational model of syntactic processing. NeuroImage, 20(suppl. 1), S18-S29. doi:10.1016/j.neuroimage.2003.09.013.

    Abstract

    Syntax is one of the components in the architecture of language processing that allows the listener/reader to bind single-word information into a unified interpretation of multiword utterances. This paper discusses ERP effects that have been observed in relation to syntactic processing. The fact that these effects differ from the semantic N400 indicates that the brain honors the distinction between semantic and syntactic binding operations. Two models of syntactic processing attempt to account for syntax-related ERP effects. One type of model is serial, with a first phase that is purely syntactic in nature (syntax-first model). The other type of model is parallel and assumes that information immediately guides the interpretation process once it becomes available. This is referred to as the immediacy model. ERP evidence is presented in support of the latter model. Next, an explicit computational model is proposed to explain the ERP data. This Unification Model assumes that syntactic frames are stored in memory and retrieved on the basis of the spoken or written word form input. The syntactic frames associated with the individual lexical items are unified by a dynamic binding process into a structural representation that spans the whole utterance. On the basis of a meta-analysis of imaging studies on syntax, it is argued that the left posterior inferior frontal cortex is involved in binding syntactic frames together, whereas the left superior temporal cortex is involved in retrieval of the syntactic frames stored in memory. Lesion data that support the involvement of this left frontotemporal network in syntactic processing are discussed.
  • Hagoort, P., Hald, L. A., Bastiaansen, M. C. M., & Petersson, K. M. (2004). Integration of word meaning and world knowledge in language comprehension. Science, 304(5669), 438-441. doi:10.1126/science.1095455.

    Abstract

    Although the sentences that we hear or read have meaning, this does not necessarily mean that they are also true. Relatively little is known about the critical brain structures for, and the relative time course of, establishing the meaning and truth of linguistic expressions. We present electroencephalogram data that show the rapid parallel integration of both semantic and world knowledge during the interpretation of a sentence. Data from functional magnetic resonance imaging revealed that the left inferior prefrontal cortex is involved in the integration of both meaning and world knowledge. Finally, oscillatory brain responses indicate that the brain keeps a record of what makes a sentence hard to interpret.
  • Hagoort, P. (2003). Interplay between syntax and semantics during sentence comprehension: ERP effects of combining syntactic and semantic violations. Journal of Cognitive Neuroscience, 15(6), 883-899. doi:10.1162/089892903322370807.

    Abstract

    This study investigated the effects of combined semantic and syntactic violations in relation to the effects of single semantic and single syntactic violations on language-related event-related brain potential (ERP) effects (N400 and P600/SPS). Syntactic violations consisted of a mismatch in grammatical gender or number features of the definite article and the noun in sentence-internal or sentence-final noun phrases (NPs). Semantic violations consisted of semantically implausible adjective–noun combinations in the same NPs. Combined syntactic and semantic violations were a summation of these two respective violation types. ERPs were recorded while subjects read the sentences with the different types of violations and the correct control sentences. ERP effects were computed relative to ERPs elicited by the sentence-internal or sentence-final nouns. The size of the N400 effect to the semantic violation was increased by an additional syntactic violation (the syntactic boost). In contrast, the size of the P600/SPS to the syntactic violation was not affected by an additional semantic violation. This suggests that in the absence of syntactic ambiguity, the assignment of syntactic structure is independent of semantic context. However, semantic integration is influenced by syntactic processing. In the sentence-final position, additional global processing consequences were obtained as a result of earlier violations in the sentence. The resulting increase in the N400 amplitude to sentence-final words was independent of the nature of the violation. A speeded anomaly detection task revealed that it takes substantially longer to detect semantic than syntactic anomalies. These results are discussed in relation to the latency and processing characteristics of the N400 and P600/SPS effects. Overall, the results reveal an asymmetry in the interplay between syntax and semantics during on-line sentence comprehension.
  • Hagoort, P. (1998). Hersenen en taal in onderzoek en praktijk [Brain and language in research and practice]. Neuropraxis, 6, 204-205.
  • Hagoort, P. (2014). Nodes and networks in the neural architecture for language: Broca's region and beyond. Current Opinion in Neurobiology, 28, 136-141. doi:10.1016/j.conb.2014.07.013.

    Abstract

    Current views on the neurobiological underpinnings of language are discussed that deviate in a number of ways from the classical Wernicke–Lichtheim–Geschwind model. More areas than Broca's and Wernicke's region are involved in language. Moreover, a division along the axis of language production and language comprehension does not seem to be warranted. Instead, for central aspects of language processing neural infrastructure is shared between production and comprehension. Three different accounts of the role of Broca's area in language are discussed. Arguments are presented in favor of a dynamic network view, in which the functionality of a region is co-determined by the network of regions in which it is embedded at particular moments in time. Finally, core regions of language processing need to interact with other networks (e.g. the attentional networks and the ToM network) to establish full functionality of language and communication.
  • Hagoort, P., Brown, C. M., & Swaab, T. Y. (1995). Semantic deficits in right hemisphere patients. Brain and Language, 51, 161-163. doi:10.1006/brln.1995.1058.
  • Hagoort, P., & Indefrey, P. (2014). The neurobiology of language beyond single words. Annual Review of Neuroscience, 37, 347-362. doi:10.1146/annurev-neuro-071013-013847.

    Abstract

    A hallmark of human language is that we combine lexical building blocks retrieved from memory in endless new ways. This combinatorial aspect of language is referred to as unification. Here we focus on the neurobiological infrastructure for syntactic and semantic unification. Unification is characterized by a high-speed temporal profile including both prediction and integration of retrieved lexical elements. A meta-analysis of numerous neuroimaging studies reveals a clear dorsal/ventral gradient in both left inferior frontal cortex and left posterior temporal cortex, with dorsal foci for syntactic processing and ventral foci for semantic processing. In addition to core areas for unification, further networks need to be recruited to realize language-driven communication to its full extent. One example is the theory of mind network, which allows listeners and readers to infer the intended message (speaker meaning) from the coded meaning of the linguistic utterance. This indicates that sensorimotor simulation cannot handle all of language processing.
  • Hagoort, P. (2000). What we shall know only tomorrow. Brain and Language, 71, 89-92. doi:10.1006/brln.1999.2221.
  • Hagoort, P., & Özyürek, A. (2024). Extending the architecture of language from a multimodal perspective. Topics in Cognitive Science. Advance online publication. doi:10.1111/tops.12728.

    Abstract

    Language is inherently multimodal. In spoken languages, combined spoken and visual signals (e.g., co-speech gestures) are an integral part of linguistic structure and language representation. This requires an extension of the parallel architecture, which needs to include the visual signals concomitant to speech. We present the evidence for the multimodality of language. In addition, we propose that distributional semantics might provide a format for integrating speech and co-speech gestures in a common semantic representation.
  • Hahn, L. E., Benders, T., Fikkert, P., & Snijders, T. M. (2021). Infants’ implicit rhyme perception in child songs and its relationship with vocabulary. Frontiers in Psychology, 12: 680882. doi:10.3389/fpsyg.2021.680882.

    Abstract

    Rhyme perception is an important predictor for future literacy. Assessing rhyme abilities, however, commonly requires children to make explicit rhyme judgements on single words. Here we explored whether infants already implicitly process rhymes in natural rhyming contexts (child songs) and whether this response correlates with later vocabulary size. In a passive listening ERP study, 10.5-month-old Dutch infants were exposed to rhyming and non-rhyming child songs. Two types of rhyme effects were analysed: (1) ERPs elicited by the first rhyme occurring in each song (rhyme sensitivity) and (2) ERPs elicited by rhymes repeating after the first rhyme in each song (rhyme repetition). Only for the latter was a tentative negativity for rhymes found, from 0 to 200 ms after the onset of the rhyme word. This rhyme repetition effect correlated with productive vocabulary at 18 months, but not with any other vocabulary measure (perception at 10.5 or 18 months). While awaiting future replication, the study indicates precursors of phonological awareness already during infancy and with ecologically valid linguistic stimuli.
  • Hammarström, H., & Güldemann, T. (2014). Quantifying geographical determinants of large-scale distributions of linguistic features. Language Dynamics and Change, 4, 87-115. doi:10.1163/22105832-00401002.

    Abstract

    In the recent past the work on large-scale linguistic distributions across the globe has intensified considerably. Work on macro-areal relationships in Africa (Güldemann, 2010) suggests that the shape of convergence areas may be determined by climatic factors and geophysical features such as mountains, water bodies, coastlines, etc. Worldwide data is now available for geophysical features as well as linguistic features, including numeral systems and basic constituent order. We explore the possibility that the shape of areal aggregations of individual features in these two linguistic domains correlates with Köppen-Geiger climate zones. Furthermore, we test the hypothesis that the shape of such areal feature aggregations is determined by the contour of adjacent geophysical features like mountain ranges or coastlines. In these first basic tests, we do not find clear evidence that either Köppen-Geiger climate zones or the contours of geophysical features are major predictors for the linguistic data at hand.

  • Hammarström, H., & Donohue, M. (2014). Some principles on the use of macro-areas in typological comparison. Language Dynamics and Change, 4, 167-187. doi:10.1163/22105832-00401001.

    Abstract

    While the notion of the ‘area’ or ‘Sprachbund’ has a long history in linguistics, with geographically-defined regions frequently cited as a useful means to explain typological distributions, the problem of delimiting areas has not been well addressed. Lists of general-purpose, largely independent ‘macro-areas’ (typically continent size) have been proposed as a step to rule out contact as an explanation for various large-scale linguistic phenomena. This squib points out some problems in some of the currently widely-used predetermined areas, those found in the World Atlas of Language Structures (Haspelmath et al., 2005). Instead, we propose a principled division of the world’s landmasses into six macro-areas that arguably have better geographical independence properties.
  • Hammarström, H. (2014). [Review of the book A grammar of the great Andamanese language: An ethnolinguistic study by Anvita Abbi]. Journal of South Asian Languages and Linguistics, 1, 111-116. doi:10.1515/jsall-2014-0007.
  • Harmon, Z., & Kapatsinski, V. (2021). A theory of repetition and retrieval in language production. Psychological Review, 128, 1112-1144. doi:10.1037/rev0000305.

    Abstract

    Repetition appears to be part of error correction and action preparation in all domains that involve producing an action sequence. The present work contends that the ubiquity of repetition is due to its role in resolving a problem inherent to planning and retrieval of action sequences: the Problem of Retrieval. Repetitions occur when the production to perform next is not activated enough to be executed. Repetitions are helpful in this situation because the repeated action sequence activates the likely continuation. We model a corpus of natural speech using a recurrent network, with words as units of production. We show that repeated material makes upcoming words more predictable, especially when more than one word is repeated. Speakers are argued to produce multiword repetitions by using backward associations to reactivate recently produced words. The existence of multiword repetitions means that speakers must decide where to reinitiate execution from. We show that production restarts from words that have seldom occurred in a predictive preceding-word context and have often occurred utterance-initially. These results are explained by competition between preceding-context and top-down cues over the course of language learning. The proposed theory improves on structural accounts of repetition disfluencies, and integrates repetition disfluencies in language production with repetitions observed in other domains of skilled action.
  • Hartung, F., Wang, Y., Mak, M., Willems, R. M., & Chatterjee, A. (2021). Aesthetic appraisals of literary style and emotional intensity in narrative engagement are neurally dissociable. Communications Biology, 4: 1401. doi:10.1038/s42003-021-02926-0.

    Abstract

    Humans are deeply affected by stories, yet it is unclear how. In this study, we explored two aspects of aesthetic experiences during narrative engagement - literariness and narrative fluctuations in appraised emotional intensity. Independent ratings of literariness and emotional intensity of two literary stories were used to predict blood-oxygen-level-dependent signal changes in 52 listeners from an existing fMRI dataset. Literariness was associated with increased activation in brain areas linked to semantic integration (left angular gyrus, supramarginal gyrus, and precuneus), and decreased activation in bilateral middle temporal cortices, associated with semantic representations and word memory. Emotional intensity correlated with decreased activation in a bilateral frontoparietal network that is often associated with controlled attention. Our results confirm a neural dissociation in processing literary form and emotional content in stories and generate new questions about the function of and interaction between attention, social cognition, and semantic systems during literary engagement and aesthetic experiences.
  • Haun, D. B. M. (2003). What's so special about spatial cognition? De Psychonoom, 18, 3-4.
  • Haun, D. B. M., Rekers, Y., & Tomasello, M. (2014). Children conform to the behavior of peers; Other great apes stick with what they know. Psychological Science, 25, 2160-2167. doi:10.1177/0956797614553235.

    Abstract

    All primates learn things from conspecifics socially, but it is not clear whether they conform to the behavior of these conspecifics—if conformity is defined as overriding individually acquired behavioral tendencies in order to copy peers’ behavior. In the current study, chimpanzees, orangutans, and 2-year-old human children individually acquired a problem-solving strategy. They then watched several conspecific peers demonstrate an alternative strategy. The children switched to this new, socially demonstrated strategy in roughly half of all instances, whereas the other two great-ape species almost never adjusted their behavior to the majority’s. In a follow-up study, children switched much more when the peer demonstrators were still present than when they were absent, which suggests that their conformity arose at least in part from social motivations. These results demonstrate an important difference between the social learning of humans and great apes, a difference that might help to account for differences in human and nonhuman cultures.

  • Hayano, K. (2004). Kaiwa ni okeru ninshikiteki ken’i no koushou: Shuujoshi yo, ne, odoroki hyouji no bunpu to kinou [Negotiation of Epistemic Authority in Conversation: on the use of final particles yo, ne and surprise markers]. Studies in Pragmatics, 6, 17-28.
  • Hayano, K. (2003). Self-presentation as a face-threatening act: A comparative study of self-oriented topic introduction in English and Japanese. Veritas, 24, 45-58.
  • Healthy Brain Study Consortium, Aarts, E., Akkerman, A., Altgassen, M., Bartels, R., Beckers, D., Bevelander, K., Bijleveld, E., Blaney Davidson, E., Boleij, A., Bralten, J., Cillessen, T., Claassen, J., Cools, R., Cornelissen, I., Dresler, M., Eijsvogels, T., Faber, M., Fernández, G., Figner, B., Fritsche, M., Füllbrunn, S., Gayet, S., Van Gelder, M. M. H. J., Van Gerven, M., Geurts, S., Greven, C. U., Groefsema, M., Haak, K., Hagoort, P., Hartman, Y., Van der Heijden, B., Hermans, E., Heuvelmans, V., Hintz, F., Den Hollander, J., Hulsman, A. M., Idesis, S., Jaeger, M., Janse, E., Janzing, J., Kessels, R. P. C., Karremans, J. C., De Kleijn, W., Klein, M., Klumpers, F., Kohn, N., Korzilius, H., Krahmer, B., De Lange, F., Van Leeuwen, J., Liu, H., Luijten, M., Manders, P., Manevska, K., Marques, J. P., Matthews, J., McQueen, J. M., Medendorp, P., Melis, R., Meyer, A. S., Oosterman, J., Overbeek, L., Peelen, M., Popma, J., Postma, G., Roelofs, K., Van Rossenberg, Y. G. T., Schaap, G., Scheepers, P., Selen, L., Starren, M., Swinkels, D. W., Tendolkar, I., Thijssen, D., Timmerman, H., Tutunji, R., Tuladhar, A., Veling, H., Verhagen, M., Verkroost, J., Vink, J., Vriezekolk, V., Vrijsen, J., Vyrastekova, J., Van der Wal, S., Willems, R. M., & Willemsen, A. (2021). Protocol of the Healthy Brain Study: An accessible resource for understanding the human brain and how it dynamically and individually operates in its bio-social context. PLoS One, 16(12): e0260952. doi:10.1371/journal.pone.0260952.

    Abstract

    The endeavor to understand the human brain has seen more progress in the last few decades than in the previous two millennia. Still, our understanding of how the human brain relates to behavior in the real world and how this link is modulated by biological, social, and environmental factors is limited. To address this, we designed the Healthy Brain Study (HBS), an interdisciplinary, longitudinal, cohort study based on multidimensional, dynamic assessments in both the laboratory and the real world. Here, we describe the rationale and design of the currently ongoing HBS. The HBS is examining a population-based sample of 1,000 healthy participants (age 30-39) who are thoroughly studied across an entire year. Data are collected through cognitive, affective, behavioral, and physiological testing, neuroimaging, bio-sampling, questionnaires, ecological momentary assessment, and real-world assessments using wearable devices. These data will become an accessible resource for the scientific community enabling the next step in understanding the human brain and how it dynamically and individually operates in its bio-social context. An access procedure to the collected data and bio-samples is in place and published on https://www.healthybrainstudy.nl/en/data-and-methods.

    https://www.trialregister.nl/trial/7955

  • Hegemann, L., Corfield, E. C., Askelund, A. D., Allegrini, A. G., Askeland, R. B., Ronald, A., Ask, H., St Pourcain, B., Andreassen, O. A., Hannigan, L. J., & Havdahl, A. (2024). Genetic and phenotypic heterogeneity in early neurodevelopmental traits in the Norwegian Mother, Father and Child Cohort Study. Molecular Autism, 15: 25. doi:10.1186/s13229-024-00599-0.

    Abstract

    Background
    Autism and different neurodevelopmental conditions frequently co-occur, as do their symptoms at sub-diagnostic threshold levels. Overlapping traits and shared genetic liability are potential explanations.

    Methods
    In the population-based Norwegian Mother, Father, and Child Cohort study (MoBa), we leverage item-level data to explore the phenotypic factor structure and genetic architecture underlying neurodevelopmental traits at age 3 years (N = 41,708–58,630) using maternal reports on 76 items assessing children’s motor and language development, social functioning, communication, attention, activity regulation, and flexibility of behaviors and interests.

    Results
    We identified 11 latent factors at the phenotypic level. These factors showed associations with diagnoses of autism and other neurodevelopmental conditions. Most shared genetic liabilities with autism, ADHD, and/or schizophrenia. Item-level GWAS revealed trait-specific genetic correlations with autism (items rg range = −0.27 to 0.78), ADHD (items rg range = −0.40 to 1), and schizophrenia (items rg range = −0.24 to 0.34). We find little evidence of common genetic liability across all neurodevelopmental traits but more so for several genetic factors across more specific areas of neurodevelopment, particularly social and communication traits. Some of these factors, such as one capturing prosocial behavior, overlap with factors found in the phenotypic analyses. Other areas, such as motor development, seemed to have more heterogeneous etiology, with specific traits showing a less consistent pattern of genetic correlations with each other.

    Conclusions
    These exploratory findings emphasize the etiological complexity of neurodevelopmental traits at this early age. In particular, diverse associations with neurodevelopmental conditions and genetic heterogeneity could inform follow-up work to identify shared and differentiating factors in the early manifestations of neurodevelopmental traits and their relation to autism and other neurodevelopmental conditions. This in turn could have implications for clinical screening tools and programs.
  • Heidlmayr, K., Ferragne, E., & Isel, F. (2021). Neuroplasticity in the phonological system: The PMN and the N400 as markers for the perception of non-native phonemic contrasts by late second language learners. Neuropsychologia, 156: 107831. doi:10.1016/j.neuropsychologia.2021.107831.

    Abstract

    Second language (L2) learners frequently encounter persistent difficulty in perceiving certain non-native sound contrasts, i.e., a phenomenon called “phonological deafness”. However, if extensive L2 experience leads to neuroplastic changes in the phonological system, then the capacity to discriminate non-native phonemic contrasts should progressively improve. Such perceptual changes should be attested by modifications at the neurophysiological level. We designed an EEG experiment in which the listeners’ perceptual capacities to discriminate second language phonemic contrasts influence the processing of lexical-semantic violations. Semantic congruency of critical words in a sentence context was driven by a phonemic contrast that was unique to the L2, English (e.g.,/ɪ/-/i:/, ship – sheep). Twenty-eight young adult native speakers of French with intermediate proficiency in English listened to sentences that contained either a semantically congruent or incongruent critical word (e.g., The anchor of the ship/*sheep was let down) while EEG was recorded. Three ERP effects were found to relate to increasing L2 proficiency: (1) a left frontal auditory N100 effect, (2) a smaller fronto-central phonological mismatch negativity (PMN) effect and (3) a semantic N400 effect. No effect of proficiency was found on oscillatory markers. The current findings suggest that neuronal plasticity in the human brain allows for the late acquisition of even hard-wired linguistic features such as the discrimination of phonemic contrasts in a second language. This is the first time that behavioral and neurophysiological evidence for the critical role of neural plasticity underlying L2 phonological processing and its interdependence with semantic processing has been provided. Our data strongly support the idea that pieces of information from different levels of linguistic processing (e.g., phonological, semantic) strongly interact and influence each other during online language processing.

  • Heim, F., Scharff, C., Fisher, S. E., Riebel, K., & Ten Cate, C. (2024). Auditory discrimination learning and acoustic cue weighing in female zebra finches with localized FoxP1 knockdowns. Journal of Neurophysiology, 131, 950-963. doi:10.1152/jn.00228.2023.

    Abstract

    Rare disruptions of the transcription factor FOXP1 are implicated in a human neurodevelopmental disorder characterized by autism and/or intellectual disability with prominent problems in speech and language abilities. Avian orthologues of this transcription factor are evolutionarily conserved and highly expressed in specific regions of songbird brains, including areas associated with vocal production learning and auditory perception. Here, we investigated possible contributions of FoxP1 to song discrimination and auditory perception in juvenile and adult female zebra finches. They received lentiviral knockdowns of FoxP1 in one of two brain areas involved in auditory stimulus processing, HVC (proper name) or CMM (caudomedial mesopallium). Ninety-six females, distributed over different experimental and control groups, were trained to discriminate between two stimulus songs in an operant Go/Nogo paradigm and subsequently tested with an array of stimuli. This made it possible to assess how well they recognized and categorized altered versions of training stimuli and whether localized FoxP1 knockdowns affected the role of different features during discrimination and categorization of song. Although FoxP1 expression was significantly reduced by the knockdowns, neither discrimination of the stimulus songs nor categorization of songs modified in pitch, sequential order of syllables or by reversed playback were affected. Subsequently, we analyzed the full dataset to assess the impact of the different stimulus manipulations for cue weighing in song discrimination. Our findings show that zebra finches rely on multiple parameters for song discrimination, but with relatively more prominent roles for spectral parameters and syllable sequencing as cues for song discrimination.

    NEW & NOTEWORTHY In humans, mutations of the transcription factor FoxP1 are implicated in speech and language problems. In songbirds, FoxP1 has been linked to male song learning and female preference strength. We found that FoxP1 knockdowns in female HVC and caudomedial mesopallium (CMM) did not alter song discrimination or categorization based on spectral and temporal information. However, this large dataset allowed us to validate different cue weights for spectral over temporal information for song recognition.
  • Henry, M. J., Cook, P. F., de Reus, K., Nityananda, V., Rouse, A. A., & Kotz, S. A. (2021). An ecological approach to measuring synchronization abilities across the animal kingdom. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 376: 20200336. doi:10.1098/rstb.2020.0336.

    Abstract

    In this perspective paper, we focus on the study of synchronization abilities across the animal kingdom. We propose an ecological approach to studying nonhuman animal synchronization that begins from observations about when, how and why an animal might synchronize spontaneously with natural environmental rhythms. We discuss what we consider to be the most important, but thus far largely understudied, temporal, physical, perceptual and motivational constraints that must be taken into account when designing experiments to test synchronization in nonhuman animals. First and foremost, different species are likely to be sensitive to and therefore capable of synchronizing at different timescales. We also argue that it is fruitful to consider the latent flexibility of animal synchronization. Finally, we discuss the importance of an animal's motivational state for showcasing synchronization abilities. We demonstrate that the likelihood that an animal can successfully synchronize with an environmental rhythm is context-dependent and suggest that the list of species capable of synchronization is likely to grow when tested with ecologically honest, species-tuned experiments.
  • Hersh, T., King, B., & Lutton, B. V. (2014). Novel bioinformatics tools for analysis of gene expression in the skate, Leucoraja erinacea. The Bulletin, MDI Biological Laboratory, 53, 16-18.
  • Hersh, T. A., Gero, S., Rendell, L., & Whitehead, H. (2021). Using identity calls to detect structure in acoustic datasets. Methods in Ecology and Evolution, 12(9), 1668-1678. doi:10.1111/2041-210X.13644.

    Abstract

    1. Acoustic analyses can be powerful tools for illuminating structure within and between populations, especially for cryptic or difficult to access taxa. Acoustic repertoires are often compared using aggregate similarity measures across all calls of a particular type, but specific group identity calls may more clearly delineate structure in some taxa.
    2. We present a new method—the identity call method—that estimates the number of acoustically distinct subdivisions in a set of repertoires and identifies call types that characterize those subdivisions. The method uses contaminated mixture models to identify call types, assigning each call a probability of belonging to each type. Repertoires are hierarchically clustered based on similarities in call type usage, producing a dendrogram with ‘identity clades’ of repertoires and the ‘identity calls’ that best characterize each clade. We validated this approach using acoustic data from sperm whales, grey-breasted wood-wrens and Australian field crickets, and ran a suite of tests to assess parameter sensitivity.
    3. For all taxa, the method detected diagnostic signals (identity calls) and structure (identity clades; sperm whale subpopulations, wren subspecies and cricket species) that were consistent with past research. Some datasets were more sensitive to parameter variation than others, which may reflect real uncertainty or biological variability in the taxa examined. We recommend that users perform comparative analyses of different parameter combinations to determine which portions of the dendrogram warrant careful versus confident interpretation.
    4. The presence of group-characteristic identity calls does not necessarily mean animals perceive them as such. Fine-scale experiments like playbacks are a key next step to understand call perception and function. This method can help inform such studies by identifying calls that may be salient to animals and are good candidates for investigation or playback stimuli. For cryptic or difficult to access taxa with group-specific calls, the identity call method can aid managers in quantifying behavioural diversity and/or identifying putative structure within and between populations, given that acoustic data can be inexpensive and minimally invasive to collect.
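
    The core of the identity call method (cluster calls into types, profile each repertoire by its call-type usage, then cluster the repertoires) can be sketched briefly. The sketch below is an illustration only, not the authors' implementation: an ordinary Gaussian mixture stands in for their contaminated mixture models, and all data and names are simulated.

```python
# Simplified sketch of the identity-call workflow: soft-cluster calls into
# types, describe each repertoire by its call-type usage, then hierarchically
# cluster the repertoires. An ordinary Gaussian mixture stands in for the
# paper's contaminated mixture models; all data here are simulated.
import numpy as np
from sklearn.mixture import GaussianMixture
from scipy.cluster.hierarchy import linkage, dendrogram

rng = np.random.default_rng(0)
n_calls, n_repertoires, n_types = 500, 10, 5
calls = rng.normal(size=(n_calls, 4))                 # acoustic features per call
repertoire = rng.integers(0, n_repertoires, n_calls)  # source repertoire per call

# 1) Assign each call a probability of belonging to each call type.
gmm = GaussianMixture(n_components=n_types, random_state=0).fit(calls)
type_probs = gmm.predict_proba(calls)

# 2) Summarise each repertoire by its expected call-type usage.
usage = np.vstack([type_probs[repertoire == r].mean(axis=0)
                   for r in range(n_repertoires)])

# 3) Cluster repertoires on usage similarity; subtrees of the resulting
#    dendrogram correspond to 'identity clades'.
tree = linkage(usage, method="average", metric="cosine")
dendrogram(tree, no_plot=True)  # set no_plot=False to draw with matplotlib
```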
  • Hersh, T. A., Ravignani, A., & Whitehead, H. (2024). Cetaceans are the next frontier for vocal rhythm research. PNAS, 121(25): e2313093121. doi:10.1073/pnas.2313093121.

    Abstract

    While rhythm can facilitate and enhance many aspects of behavior, its evolutionary trajectory in vocal communication systems remains enigmatic. We can trace evolutionary processes by investigating rhythmic abilities in different species, but research to date has largely focused on songbirds and primates. We present evidence that cetaceans—whales, dolphins, and porpoises—are a missing piece of the puzzle for understanding why rhythm evolved in vocal communication systems. Cetaceans not only produce rhythmic vocalizations but also exhibit behaviors known or thought to play a role in the evolution of different features of rhythm. These behaviors include vocal learning abilities, advanced breathing control, sexually selected vocal displays, prolonged mother–infant bonds, and behavioral synchronization. The untapped comparative potential of cetaceans is further enhanced by high interspecific diversity, which generates natural ranges of vocal and social complexity for investigating various evolutionary hypotheses. We show that rhythm (particularly isochronous rhythm, when sounds are equally spaced in time) is prevalent in cetacean vocalizations but is used in different contexts by baleen and toothed whales. We also highlight key questions and research areas that will enhance understanding of vocal rhythms across taxa. By coupling an infraorder-level taxonomic assessment of vocal rhythm production with comparisons to other species, we illustrate how broadly comparative research can contribute to a more nuanced understanding of the prevalence, evolution, and possible functions of rhythm in animal communication.
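
    Isochrony, as used here, means sound onsets that are equally spaced in time. A common, simple way to quantify it (a generic sketch, not necessarily this paper's pipeline) is the coefficient of variation of inter-onset intervals, which approaches zero for perfectly isochronous sequences:

```python
# Generic isochrony measure: coefficient of variation (CV) of inter-onset
# intervals. Values near 0 indicate isochrony; the onsets below are made up.
import numpy as np

onsets = np.array([0.00, 0.51, 1.02, 1.49, 2.01])  # hypothetical sound onsets (s)
ioi = np.diff(onsets)                              # inter-onset intervals
cv = ioi.std(ddof=1) / ioi.mean()
print(f"CV of inter-onset intervals: {cv:.3f}")
```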

  • Hervais-Adelman, A., Pefkou, M., & Golestani, N. (2014). Bilingual speech-in-noise: Neural bases of semantic context use in the native language. Brain and Language, 132, 1-6. doi:10.1016/j.bandl.2014.01.009.

    Abstract

    Bilingual listeners comprehend speech-in-noise better in their native than non-native language. This native-language benefit is thought to arise from greater use of top-down linguistic information to assist degraded speech comprehension. Using functional magnetic resonance imaging, we recently showed that left angular gyrus activation is modulated when semantic context is used to assist native language speech-in-noise comprehension (Golestani, Hervais-Adelman, Obleser, & Scott, 2013). Here, we extend the previous work, by reanalyzing the previous data alongside the results obtained in the non-native language of the same late bilingual participants. We found a behavioral benefit of semantic context in processing speech-in-noise in the native language only, and the imaging results also revealed a native language context effect in the left angular gyrus. We also find a complementary role of lower-level auditory regions during stimulus-driven processing. Our findings help to elucidate the neural basis of the established native language behavioral benefit of speech-in-noise processing.
  • Hessels, R. S., Hooge, I., Snijders, T. M., & Kemner, C. (2014). Is there a limit to the superiority of individuals with ASD in visual search? Journal of Autism and Developmental Disorders, 44, 443-451. doi:10.1007/s10803-013-1886-8.

    Abstract

    Superiority in visual search for individuals diagnosed with autism spectrum disorder (ASD) is a well-reported finding. We administered two visual search tasks to individuals with ASD and matched controls. One showed no difference between the groups, and one did show the expected superior performance for individuals with ASD. These results offer an explanation, formulated in terms of load theory. We suggest that there is a limit to the superiority in visual search for individuals with ASD, related to the perceptual load of the stimuli. When perceptual load becomes so high that no additional task-(ir)relevant information can be processed, performance will be based on single stimulus identification, in which no differences between individuals with ASD and controls have been demonstrated.
  • Heyselaar, E., Peeters, D., & Hagoort, P. (2021). Do we predict upcoming speech content in naturalistic environments? Language, Cognition and Neuroscience, 36(4), 440-461. doi:10.1080/23273798.2020.1859568.

    Abstract

    The ability to predict upcoming actions is a hallmark of cognition. It remains unclear, however, whether the predictive behaviour observed in controlled lab environments generalises to rich, everyday settings. In four virtual reality experiments, we tested whether a well-established marker of linguistic prediction (anticipatory eye movements) replicated when increasing the naturalness of the paradigm by means of immersing participants in naturalistic scenes (Experiment 1), increasing the number of distractor objects (Experiment 2), modifying the proportion of predictable noun-referents (Experiment 3), and manipulating the location of referents relative to the joint attentional space (Experiment 4). Robust anticipatory eye movements were observed for Experiments 1–3. The anticipatory effect disappeared, however, in Experiment 4. Our findings suggest that predictive processing occurs in everyday communication if the referents are situated in the joint attentional space. Methodologically, our study confirms that ecological validity and experimental control may go hand-in-hand in the study of human predictive behaviour.
  • Hintz, F., McQueen, J. M., & Meyer, A. S. (2024). Using psychometric network analysis to examine the components of spoken word recognition. Journal of Cognition, 7(1): 10. doi:10.5334/joc.340.

    Abstract

    Using language requires access to domain-specific linguistic representations, but also draws on domain-general cognitive skills. A key issue in current psycholinguistics is to situate linguistic processing in the network of human cognitive abilities. Here, we focused on spoken word recognition and used an individual differences approach to examine the links of scores in word recognition tasks with scores on tasks capturing effects of linguistic experience, general processing speed, working memory, and non-verbal reasoning. 281 young native speakers of Dutch completed an extensive test battery assessing these cognitive skills. We used psychometric network analysis to map out the direct links between the scores, that is, the unique variance between pairs of scores, controlling for variance shared with the other scores. The analysis revealed direct links between word recognition skills and processing speed. We discuss the implications of these results and the potential of psychometric network analysis for studying language processing and its embedding in the broader cognitive system.
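
    The central computation in a psychometric network analysis is the matrix of partial correlations: the unique association between each pair of scores controlling for all other scores. A minimal sketch of that step on simulated data follows; published analyses typically use regularized estimators such as the graphical lasso rather than this plain matrix inversion.

```python
# Minimal partial-correlation network from simulated task scores. Real
# psychometric network analyses usually regularize (e.g. graphical lasso);
# this plain inversion only illustrates the underlying identity.
import numpy as np

rng = np.random.default_rng(1)
scores = rng.normal(size=(281, 6))      # 281 participants x 6 scores (simulated)

precision = np.linalg.inv(np.cov(scores, rowvar=False))
d = np.sqrt(np.diag(precision))
partial = -precision / np.outer(d, d)   # rho_ij = -p_ij / sqrt(p_ii * p_jj)
np.fill_diagonal(partial, 1.0)
print(np.round(partial, 2))             # off-diagonal entries = network edges
```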

  • Hintz, F., & Meyer, A. S. (Eds.). (2024). Individual differences in language skills [Special Issue]. Journal of Cognition, 7(1).
  • Hintz, F., Shkaravska, O., Dijkhuis, M., Van 't Hoff, V., Huijsmans, M., Van Dongen, R. C., Voeteé, L. A., Trilsbeek, P., McQueen, J. M., & Meyer, A. S. (2024). IDLaS-NL – A platform for running customized studies on individual differences in Dutch language skills via the internet. Behavior Research Methods, 56(3), 2422-2436. doi:10.3758/s13428-023-02156-8.

    Abstract

    We introduce the Individual Differences in Language Skills (IDLaS-NL) web platform, which enables users to run studies on individual differences in Dutch language skills via the internet. IDLaS-NL consists of 35 behavioral tests, previously validated in participants aged between 18 and 30 years. The platform provides an intuitive graphical interface for users to select the tests they wish to include in their research, to divide these tests into different sessions and to determine their order. Moreover, for standardized administration the platform provides an application (an emulated browser) wherein the tests are run. Results can be retrieved by mouse click in the graphical interface and are provided as CSV-file output via email. Similarly, the graphical interface enables researchers to modify and delete their study configurations. IDLaS-NL is intended for researchers, clinicians, educators and in general anyone conducting fundamental research into language and general cognitive skills; it is not intended for diagnostic purposes. All platform services are free of charge. Here, we provide a description of its workings as well as instructions for using the platform. The IDLaS-NL platform can be accessed at www.mpi.nl/idlas-nl.
  • Hoedemaker, R. S., & Gordon, P. C. (2014). Embodied language comprehension: Encoding-based and goal-driven processes. Journal of Experimental Psychology: General, 143(2), 914-929. doi:10.1037/a0032348.

    Abstract

    Theories of embodied language comprehension have proposed that language is understood through perceptual simulation of the sensorimotor characteristics of its meaning. Strong support for this claim requires demonstration of encoding-based activation of sensorimotor representations that is distinct from task-related or goal-driven processes. Participants in 3 eye-tracking experiments were presented with triplets of either numbers or object and animal names. In Experiment 1, participants indicated whether the size of the referent of the middle object or animal name was in between the size of the 2 outer items. In Experiment 2, the object and animal names were encoded for an immediate recognition memory task. In Experiment 3, participants completed the same comparison task of Experiment 1 for both words and numbers. During the comparison tasks, word and number decision times showed a symbolic distance effect, such that response time was inversely related to the size difference between the items. A symbolic distance effect was also observed for animal and object encoding times in cases where encoding time likely reflected some goal-driven processes as well. When semantic size was irrelevant to the task (Experiment 2), it had no effect on word encoding times. Number encoding times showed a numerical distance priming effect: Encoding time increased with numerical difference between items. Together these results suggest that while activation of numerical magnitude representations is encoding-based as well as goal-driven, activation of size information associated with words is goal-driven and does not occur automatically during encoding. This conclusion challenges strong theories of embodied cognition which claim that language comprehension consists of activation of analog sensorimotor representations irrespective of higher level processes related to context or task-specific goals.
  • Hoedemaker, R. S., & Gordon, P. C. (2014). It takes time to prime: Semantic priming in the ocular lexical decision task. Journal of Experimental Psychology: Human Perception and Performance, 40(6), 2179-2197. doi:10.1037/a0037677.

    Abstract

    Two eye-tracking experiments were conducted in which the manual response mode typically used in lexical decision tasks (LDTs) was replaced with an eye-movement response through a sequence of 3 words. This ocular LDT combines the explicit control of task goals found in LDTs with the highly practiced ocular response used in reading text. In Experiment 1, forward saccades indicated an affirmative lexical decision (LD) on each word in the triplet. In Experiment 2, LD responses were delayed until all 3 letter strings had been read. The goal of the study was to evaluate the contribution of task goals and response mode to semantic priming. Semantic priming is very robust in tasks that involve recognition of words in isolation, such as LDT, but limited during text reading, as measured using eye movements. Gaze durations in both experiments showed robust semantic priming even though ocular response times were much shorter than manual LDs for the same words in the English Lexicon Project. Ex-Gaussian distribution fits revealed that the priming effect was concentrated in estimates of tau (τ), meaning that priming was most pronounced in the slow tail of the distribution. This pattern shows differential use of the prime information, which may be more heavily recruited in cases in which the LD is difficult, as indicated by longer response times. Compared with the manual LD responses, ocular LDs provide a more sensitive measure of this task-related influence on word recognition as measured by the LDT.
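
    The ex-Gaussian analysis referred to above models a response-time distribution as a Gaussian component (mu, sigma) convolved with an exponential tail (tau), so priming concentrated in tau means the effect lives in the slow tail. A sketch of such a fit on simulated data (not the authors' fitting code), using scipy's exponentially modified normal distribution:

```python
# Fit an ex-Gaussian (Gaussian convolved with an exponential tail) to simulated
# response times and recover tau. Parameters are assumptions for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
mu, sigma, tau = 500.0, 50.0, 150.0  # ms
rts = rng.normal(mu, sigma, 2000) + rng.exponential(tau, 2000)

K, loc, scale = stats.exponnorm.fit(rts)  # scipy parameterizes tau as K * scale
print(f"mu ~ {loc:.0f} ms, sigma ~ {scale:.0f} ms, tau ~ {K * scale:.0f} ms")
```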
  • Hoeksema, N., Verga, L., Mengede, J., Van Roessel, C., Villanueva, S., Salazar-Casals, A., Rubio-Garcia, A., Curcic-Blake, B., Vernes, S. C., & Ravignani, A. (2021). Neuroanatomy of the grey seal brain: Bringing pinnipeds into the neurobiological study of vocal learning. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 376: 20200252. doi:10.1098/rstb.2020.0252.

    Abstract

    Comparative studies of vocal learning and vocal non-learning animals can increase our understanding of the neurobiology and evolution of vocal learning and human speech. Mammalian vocal learning is understudied: most research has either focused on vocal learning in songbirds or its absence in non-human primates. Here we focus on a highly promising model species for the neurobiology of vocal learning: grey seals. We provide a neuroanatomical atlas (based on dissected brain slices and magnetic resonance images), a labelled MRI template, a 3D model with volumetric measurements of brain regions, and histological cortical stainings. Four main features of the grey seal brain stand out. (1) It is relatively big and highly convoluted. (2) It hosts a relatively large temporal lobe and cerebellum, structures which could support developed timing abilities and acoustic processing. (3) The cortex is similar to humans in thickness and shows the expected six-layered mammalian structure. (4) Expression of FoxP2 - a gene involved in vocal learning and spoken language - is present in deeper layers of the cortex. Our results could facilitate future studies targeting the neural and genetic underpinnings of mammalian vocal learning, thus bridging the research gap from songbirds to humans and non-human primates.
  • Hoey, E. (2014). Sighing in interaction: Somatic, semiotic, and social. Research on Language and Social Interaction, 47(2), 175-200. doi:10.1080/08351813.2014.900229.

    Abstract

    Participants in interaction routinely orient to gaze, bodily comportment, and nonlexical vocalizations as salient for developing an analysis of the unfolding course of action. In this article, I address the respiratory phenomenon of sighing, the aim being to describe sighing as a situated practice that contributes to the achievement of particular actions in interaction. I report on the various actions sighs implement or construct and how their positioning and delivery informs participants’ understandings of their significance for interaction. Data are in American English.
  • Hoey, E., Hömke, P., Löfgren, E., Neumann, T., Schuerman, W. L., & Kendrick, K. H. (2021). Using expletive insertion to pursue and sanction in interaction. Journal of Sociolinguistics, 25(1), 3-25. doi:10.1111/josl.12439.

    Abstract

    This article uses conversation analysis to examine constructions like who the fuck is that—sequence‐initiating actions into which an expletive like the fuck has been inserted. We describe how this turn‐constructional practice fits into and constitutes a recurrent sequence of escalating actions. In this sequence, it is used to pursue an adequate response after an inadequate one was given, and sanction the recipient for that inadequate response. Our analysis contributes to sociolinguistic studies of swearing by offering an account of swearing as a resource for social action.
  • Hogan-Brown, A. L., Hoedemaker, R. S., Gordon, P. C., & Losh, M. (2014). Eye-voice span during rapid automatized naming: Evidence of reduced automaticity in individuals with autism spectrum disorder and their siblings. Journal of Neurodevelopmental Disorders, 6(1): 33. doi:10.1186/1866-1955-6-33.

    Abstract

    Background: Individuals with autism spectrum disorder (ASD) and their parents demonstrate impaired performance in rapid automatized naming (RAN), a task that recruits a variety of linguistic and executive processes. Though the basic processes that contribute to RAN differences remain unclear, eye-voice relationships, as measured through eye tracking, can provide insight into cognitive and perceptual processes contributing to RAN performance. For example, in RAN, eye-voice span (EVS), the distance ahead the eyes are when articulation of a target item's label begins, is an indirect measure of automaticity of the processes underlying RAN. The primary objective of this study was to investigate automaticity in naming processes, as indexed by EVS during RAN. The secondary objective was to characterize RAN difficulties in individuals with ASD and their siblings. Methods: Participants (aged 15 – 33 years) included 21 individuals with ASD, 23 siblings of individuals with ASD, and 24 control subjects, group-matched on chronological age. Naming time, frequency of errors, and EVS were measured during a RAN task and compared across groups. Results: A stepwise pattern of RAN performance was observed, with individuals with ASD demonstrating the slowest naming across all RAN conditions, controls demonstrating the fastest naming, and siblings demonstrating intermediate performance. Individuals with ASD exhibited smaller EVSs than controls on all RAN conditions, and siblings exhibited smaller EVSs during number naming (the most highly automatized type of naming). EVSs were correlated with naming times in controls only, and only in the more automatized conditions. Conclusions: These results suggest that reduced automaticity in the component processes of RAN may underpin differences in individuals with ASD and their siblings. These findings also provide further support that RAN abilities are impacted by genetic liability to ASD. This study has important implications for understanding the underlying skills contributing to language-related deficits in ASD.
  • Holler, J., & Beattie, G. (2003). How iconic gestures and speech interact in the representation of meaning: are both aspects really integral to the process? Semiotica, 146, 81-116.
  • Holler, J., Schubotz, L., Kelly, S., Hagoort, P., Schuetze, M., & Özyürek, A. (2014). Social eye gaze modulates processing of speech and co-speech gesture. Cognition, 133, 692-697. doi:10.1016/j.cognition.2014.08.008.

    Abstract

    In human face-to-face communication, language comprehension is a multi-modal, situated activity. However, little is known about how we combine information from different modalities during comprehension, and how perceived communicative intentions, often signaled through visual signals, influence this process. We explored this question by simulating a multi-party communication context in which a speaker alternated her gaze between two recipients. Participants viewed speech-only or speech + gesture object-related messages when being addressed (direct gaze) or unaddressed (gaze averted to other participant). They were then asked to choose which of two object images matched the speaker’s preceding message. Unaddressed recipients responded significantly more slowly than addressees for speech-only utterances. However, perceiving the same speech accompanied by gestures sped unaddressed recipients up to a level identical to that of addressees. That is, when unaddressed recipients’ speech processing suffers, gestures can enhance the comprehension of a speaker’s message. We discuss our findings with respect to two hypotheses attempting to account for how social eye gaze may modulate multi-modal language comprehension.
  • Holler, J., & Beattie, G. (2003). Pragmatic aspects of representational gestures: Do speakers use them to clarify verbal ambiguity for the listener? Gesture, 3, 127-154.
  • Holler, J., Alday, P. M., Decuyper, C., Geiger, M., Kendrick, K. H., & Meyer, A. S. (2021). Competition reduces response times in multiparty conversation. Frontiers in Psychology, 12: 693124. doi:10.3389/fpsyg.2021.693124.

    Abstract

    Natural conversations are characterized by short transition times between turns. This holds in particular for multi-party conversations. The short turn transitions in everyday conversations contrast sharply with the much longer speech onset latencies observed in laboratory studies where speakers respond to spoken utterances. There are many factors that facilitate speech production in conversational compared to laboratory settings. Here we highlight one of them, the impact of competition for turns. In multi-party conversations, speakers often compete for turns. In quantitative corpus analyses of multi-party conversation, the fastest response determines the recorded turn transition time. In contrast, in dyadic conversations such competition for turns is much less likely to arise, and in laboratory experiments with individual participants it does not arise at all. Therefore, all responses tend to be recorded. Thus, competition for turns may reduce the recorded mean turn transition times in multi-party conversations for a simple statistical reason: slow responses are not included in the means. We report two studies illustrating this point. We first report the results of simulations showing how much the response times in a laboratory experiment would be reduced if, for each trial, instead of recording all responses, only the fastest responses of several participants responding independently on the trial were recorded. We then present results from a quantitative corpus analysis comparing turn transition times in dyadic and triadic conversations. There was no significant group size effect in question-response transition times, where the present speaker often selects the next one, thus reducing competition between speakers. But, as predicted, triads showed shorter turn transition times than dyads for the remaining turn transitions, where competition for the floor was more likely to arise. Together, these data show that turn transition times in conversation should be interpreted in the context of group size, turn transition type, and social setting.
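
    The statistical argument here is a simple order-statistics effect: when only the fastest of several independent responses is recorded, slow responses never enter the data, so the recorded mean drops. A minimal simulation under an assumed lognormal response-time distribution (not the paper's simulation code):

```python
# If only the fastest of k independent responders is recorded per trial, the
# recorded mean falls below the mean of all responses. Distribution assumed.
import numpy as np

rng = np.random.default_rng(3)
n_trials, k = 10_000, 2  # k = 2 potential next speakers in a triadic conversation
rts = rng.lognormal(mean=6.0, sigma=0.5, size=(n_trials, k))  # response times (ms)

print(f"mean of all responses:      {rts.mean():6.1f} ms")
print(f"mean of fastest per trial:  {rts.min(axis=1).mean():6.1f} ms")
```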
  • Hoogman, M., Guadalupe, T., Zwiers, M. P., Klarenbeek, P., Francks, C., & Fisher, S. E. (2014). Assessing the effects of common variation in the FOXP2 gene on human brain structure. Frontiers in Human Neuroscience, 8: 473. doi:10.3389/fnhum.2014.00473.

    Abstract

    The FOXP2 transcription factor is one of the most well-known genes to have been implicated in developmental speech and language disorders. Rare mutations disrupting the function of this gene have been described in different families and cases. In a large three-generation family carrying a missense mutation, neuroimaging studies revealed significant effects on brain structure and function, most notably in the inferior frontal gyrus, caudate nucleus and cerebellum. After the identification of rare disruptive FOXP2 variants impacting on brain structure, several reports proposed that common variants at this locus may also have detectable effects on the brain, extending beyond disorder into normal phenotypic variation. These neuroimaging genetics studies used groups of between 14 and 96 participants. The current study assessed effects of common FOXP2 variants on neuroanatomy using voxel-based morphometry and volumetric techniques in a sample of >1300 people from the general population. In a first targeted stage we analyzed single nucleotide polymorphisms (SNPs) claimed to have effects in prior smaller studies (rs2253478, rs12533005, rs2396753, rs6980093, rs7784315, rs17137124, rs10230558, rs7782412, rs1456031), beginning with regions proposed in the relevant papers, then assessing impact across the entire brain. In the second gene-wide stage, we tested all common FOXP2 variation, focusing on volumetry of those regions most strongly implicated from analyses of rare disruptive mutations. Despite using a sample more than ten times the size of those used in prior studies of common FOXP2 variation, we found no evidence for effects of SNPs on variability in neuroanatomy in the general population. Thus, the impact of this gene on brain structure may be largely limited to extreme cases of rare disruptive alleles. Alternatively, effects of common variants at this gene exist but are too subtle to be detected with standard volumetric techniques.
  • Hope, T. M. H., Neville, D., Talozzi, L., Foulon, C., Forkel, S. J., Thiebaut de Schotten, M., & Price, C. J. (2024). Testing the disconnectome symptom discoverer model on out-of-sample post-stroke language outcomes. Brain, 147(2), e11-e13. doi:10.1093/brain/awad352.

    Abstract

    Stroke is common, and its consequent brain damage can cause various cognitive impairments. Associations between where and how much brain lesion damage a patient has suffered, and the particular impairments that injury has caused (lesion-symptom associations) offer potentially compelling insights into how the brain implements cognition.1 A better understanding of those associations can also fill a gap in current stroke medicine by helping us to predict how individual patients might recover from post-stroke impairments.2 Most recent work in this area employs machine learning models trained with data from stroke patients whose mid-to-long-term outcomes are known.2-4 These machine learning models are tested by predicting new outcomes—typically scores on standardized tests of post-stroke impairment—for patients whose data were not used to train the model. Traditionally, these validation results have been shared in peer-reviewed publications describing the model and its training. But recently, and for the first time in this field (as far as we know), one of these pre-trained models has been made public—The Disconnectome Symptom Discoverer model (DSD) which draws its predictors from structural disconnection information inferred from stroke patients’ brain MRI.5

    Here, we test the DSD model on wholly independent data, never seen by the model authors before they published it. Specifically, we test whether its predictive performance is just as accurate as (i.e. not significantly worse than) that reported in the original (Washington University) dataset, when predicting new patients’ outcomes at a similar time post-stroke (∼1 year post-stroke) and also in another independent sample tested later (5+ years) post-stroke. A failure to generalize the DSD model occurs if it performs significantly better in the Washington data than in our data from patients tested at a similar time point (∼1 year post-stroke). In addition, a significant decrease in predictive performance for the more chronic sample would be evidence that lesion-symptom associations differ at ∼1 year post-stroke and >5 years post-stroke.
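
    One way to formalize such a generalization test is to compare prediction-outcome correlations across the two independent samples; the sketch below uses a bootstrap for this purpose (variable names are hypothetical and the bootstrap is an illustrative choice, not necessarily the statistic the authors used):

        import numpy as np

        def pred_corr(pred, obs):
            # Pearson correlation between predicted and observed outcomes.
            return np.corrcoef(pred, obs)[0, 1]

        def bootstrap_r_diff(pred_a, obs_a, pred_b, obs_b, n_boot=10_000, seed=0):
            """95% bootstrap CI for the difference in predictive correlation
            between two independent samples (sample a minus sample b)."""
            rng = np.random.default_rng(seed)
            diffs = np.empty(n_boot)
            for i in range(n_boot):
                ia = rng.integers(0, len(obs_a), len(obs_a))
                ib = rng.integers(0, len(obs_b), len(obs_b))
                diffs[i] = (pred_corr(pred_a[ia], obs_a[ia])
                            - pred_corr(pred_b[ib], obs_b[ib]))
            return np.percentile(diffs, [2.5, 97.5])

        # If the CI for (original sample minus new sample) lies entirely
        # above zero, predictive performance failed to generalize.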
  • Horan Skilton, A., & Peeters, D. (2021). Cross-linguistic differences in demonstrative systems: Comparing spatial and non-spatial influences on demonstrative use in Ticuna and Dutch. Journal of Pragmatics, 180, 248-265. doi:10.1016/j.pragma.2021.05.001.

    Abstract

    In all spoken languages, speakers use demonstratives – words like this and that – to refer to entities in their immediate environment. But which factors determine whether they use one demonstrative (this) or another (that)? Here we report the results of an experiment examining the effects of referent visibility, referent distance, and addressee location on the production of demonstratives by speakers of Ticuna (isolate; Brazil, Colombia, Peru), an Amazonian language with four demonstratives, and speakers of Dutch (Indo-European; Netherlands, Belgium), which has two demonstratives. We found that Ticuna speakers’ use of demonstratives displayed effects of addressee location and referent distance, but not referent visibility. By contrast, under comparable conditions, Dutch speakers displayed sensitivity only to referent distance. Interestingly, we also observed that Ticuna speakers consistently used demonstratives in all referential utterances in our experimental paradigm, while Dutch speakers strongly preferred to use definite articles. Taken together, these findings shed light on the significant diversity found in demonstrative systems across languages. Additionally, they invite researchers studying exophoric demonstratives to broaden their horizons by cross-linguistically investigating the factors involved in speakers’ choice of demonstratives over other types of referring expressions, especially articles.
  • Horemans, I., & Schiller, N. O. (2004). Form-priming effects in nonword naming. Brain and Language, 90(1-3), 465-469. doi:10.1016/S0093-934X(03)00457-7.

    Abstract

    Form-priming effects from sublexical (syllabic or segmental) primes in masked priming can be accounted for in two ways. One is the sublexical pre-activation view according to which segments are pre-activated by the prime, and at the time the form-related target is to be produced, retrieval/assembly of those pre-activated segments is faster compared to an unrelated situation. However, it has also been argued that form-priming effects from sublexical primes might be due to lexical pre-activation. When the sublexical prime is presented, it activates all form-related words (i.e., cohorts) in the lexicon, necessarily including the form-related target, which—as a consequence—is produced faster than in the unrelated case. Note, however, that this lexical pre-activation account makes previous pre-lexical activation of segments necessary. This study reports a nonword naming experiment to investigate whether or not sublexical pre-activation is involved in masked form priming with sublexical primes. The results demonstrated a priming effect suggesting a nonlexical effect. However, this does not exclude an additional lexical component in form priming.
  • Hörpel, S. G., Baier, L., Peremans, H., Reijniers, J., Wiegrebe, L., & Firzlaff, U. (2021). Communication breakdown: Limits of spectro-temporal resolution for the perception of bat communication calls. Scientific Reports, 11: 13708. doi:10.1038/s41598-021-92842-4.

    Abstract

    During vocal communication, the spectro-temporal structure of vocalizations conveys important contextual information. Bats excel in the use of sounds for echolocation by meticulous encoding of signals in the temporal domain. We therefore hypothesized that for social communication as well, bats would excel at detecting minute distortions in the spectro-temporal structure of calls. To test this hypothesis, we systematically introduced spectro-temporal distortion to communication calls of Phyllostomus discolor bats. We broke down each call into windows of the same length and randomized the phase spectrum inside each window. The overall degree of spectro-temporal distortion in communication calls increased with window length. Modelling the bat auditory periphery revealed that cochlear mechanisms allow discrimination of fast spectro-temporal envelopes. We evaluated model predictions with experimental psychophysical and neurophysiological data. We first assessed bats’ performance in discriminating original versions of calls from increasingly distorted versions of the same calls. We further examined cortical responses to determine additional specializations for call discrimination at the cortical level. Psychophysical and cortical responses concurred with model predictions, revealing discrimination thresholds in the range of 8–15 ms randomization-window length. Our data suggest that specialized cortical areas are not necessary to impart psychophysical resilience to temporal distortion in communication calls.
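
    The windowed phase-randomization manipulation can be sketched in a few lines of Python (a minimal reconstruction of the described method, not the authors' code; the window length in ms is the free parameter that controls the degree of distortion):

        import numpy as np

        def phase_randomize(signal, fs, window_ms, seed=0):
            """Distort spectro-temporal structure by randomizing the phase
            spectrum within consecutive fixed-length windows; magnitude
            spectra are kept intact. Longer windows -> more distortion."""
            rng = np.random.default_rng(seed)
            win = int(fs * window_ms / 1000)
            out = signal.astype(float).copy()
            for start in range(0, len(signal) - win + 1, win):
                seg = out[start:start + win]
                spec = np.fft.rfft(seg)
                phases = rng.uniform(0, 2 * np.pi, len(spec))
                out[start:start + win] = np.fft.irfft(
                    np.abs(spec) * np.exp(1j * phases), n=win)
            return out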

    Additional information

    supplementary information
  • Houston, D. M., Jusczyk, P. W., Kuijpers, C., Coolen, R., & Cutler, A. (2000). Cross-language word segmentation by 9-month-olds. Psychonomic Bulletin & Review, 7, 504-509.

    Abstract

    Dutch-learning and English-learning 9-month-olds were tested, using the Headturn Preference Procedure, for their ability to segment Dutch words with strong/weak stress patterns from fluent Dutch speech. This prosodic pattern is highly typical for words of both languages. The infants were familiarized with pairs of words and then tested on four passages, two that included the familiarized words and two that did not. Both the Dutch- and the English-learning infants gave evidence of segmenting the targets from the passages, to an equivalent degree. Thus, English-learning infants are able to extract words from fluent speech in a language that is phonetically different from English. We discuss the possibility that this cross-language segmentation ability is aided by the similarity of the typical rhythmic structure of Dutch and English words.
  • Hoymann, G. (2014). [Review of the book Bridging the language gap, Approaches to Herero verbal interaction as development practice in Namibia by Rose Marie Beck]. Journal of African languages and linguistics, 35(1), 130-133. doi:10.1515/jall-2014-0004.
  • Hoymann, G. (2004). [Review of the book Botswana: The future of the minority languages ed. by Herman M. Batibo and Birgit Smieja]. Journal of African Languages and Linguistics, 25(2), 171-173. doi:10.1515/jall.2004.25.2.171.
  • De Hoyos, L., Barendse, M. T., Schlag, F., Van Donkelaar, M. M. J., Verhoef, E., Shapland, C. Y., Klassmann, A., Buitelaar, J., Verhulst, B., Fisher, S. E., Rai, D., & St Pourcain, B. (2024). Structural models of genome-wide covariance identify multiple common dimensions in autism. Nature Communications, 15: 1770. doi:10.1038/s41467-024-46128-8.

    Abstract

    Common genetic variation has been associated with multiple symptoms in Autism Spectrum Disorder (ASD). However, our knowledge of shared genetic factor structures contributing to this highly heterogeneous neurodevelopmental condition is limited. Here, we developed a structural equation modelling framework to directly model genome-wide covariance across core and non-core ASD phenotypes, studying autistic individuals of European descent using a case-only design. We identified three independent genetic factors most strongly linked to language/cognition, behaviour and motor development, respectively, when studying a population-representative sample (N=5,331). These analyses revealed novel associations. For example, developmental delay in acquiring personal-social skills was inversely related to language, while developmental motor delay was linked to self-injurious behaviour. We largely confirmed the three-factorial structure in independent ASD-simplex families (N=1,946), but uncovered simplex-specific genetic overlap between behaviour and language phenotypes. Thus, the common genetic architecture in ASD is multi-dimensional and contributes, in combination with ascertainment-specific patterns, to phenotypic heterogeneity.
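
    The study fitted structural equation models to genome-wide covariance; as a loose phenotypic analogue only (none of the genomic machinery is reproduced here, and the data matrix is a random placeholder), a three-factor decomposition can be illustrated with scikit-learn:

        import numpy as np
        from sklearn.decomposition import FactorAnalysis

        # Hypothetical item matrix: rows = individuals, columns = ASD
        # phenotype measures (language/cognition, behaviour, motor items).
        rng = np.random.default_rng(0)
        X = rng.normal(size=(500, 12))  # placeholder data

        fa = FactorAnalysis(n_components=3, rotation="varimax")
        scores = fa.fit_transform(X)
        print(fa.components_.round(2))  # loadings of each measure on 3 factors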
  • Huettig, F., & Mishra, R. K. (2014). How literacy acquisition affects the illiterate mind - A critical examination of theories and evidence. Language and Linguistics Compass, 8(10), 401-427. doi:10.1111/lnc3.12092.

    Abstract

    At present, more than one-fifth of humanity is unable to read and write. We critically examine experimental evidence and theories of how (il)literacy affects the human mind. In our discussion we show that literacy has significant cognitive consequences that go beyond the processing of written words and sentences. Thus, cultural inventions such as reading shape general cognitive processing in non-trivial ways. We suggest that this has important implications for educational policy and guidance as well as research into cognitive processing and brain functioning.
  • Huettig, F., & Hulstijn, J. (2024). The Enhanced Literate Mind Hypothesis. Topics in Cognitive Science. Advance online publication. doi:10.1111/tops.12731.

    Abstract

    In the present paper we describe the Enhanced Literate Mind (ELM) hypothesis. As individuals learn to read and write, they are, from then on, exposed to extensive written-language input and become literate. We propose that acquisition and proficient processing of written language (‘literacy’) leads to, both, increased language knowledge as well as enhanced language and non-language (perceptual and cognitive) skills. We also suggest that all neurotypical native language users, including illiterate, low literate, and high literate individuals, share a Basic Language Cognition (BLC) in the domain of oral informal language. Finally, we discuss the possibility that the acquisition of ELM leads to some degree of ‘knowledge parallelism’ between BLC and ELM in literate language users, which has implications for empirical research on individual and situational differences in spoken language processing.
  • Huisman, J. L. A., van Hout, R., & Majid, A. (2021). Patterns of semantic variation differ across body parts: evidence from the Japonic languages. Cognitive Linguistics, 32, 455-486. doi:10.1515/cog-2020-0079.

    Abstract

    The human body is central to myriad metaphors, so studying the conceptualisation of the body itself is critical if we are to understand its broader use. One essential but understudied issue is whether languages differ in which body parts they single out for naming. This paper takes a multi-method approach to investigate body part nomenclature within a single language family. Using both a naming task (Study 1) and colouring-in task (Study 2) to collect data from six Japonic languages, we found that lexical similarity for body part terminology was notably differentiated within Japonic, and similar variation was evident in semantics too. Novel application of cluster analysis on naming data revealed a relatively flat hierarchical structure for parts of the face, whereas parts of the body were organised with deeper hierarchical structure. The colouring data revealed that bounded parts show more stability across languages than unbounded parts. Overall, the data reveal there is not a single universal conceptualisation of the body as is often assumed, and that in-depth, multi-method explorations of under-studied languages are urgently required.
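
    The cluster analysis applied to the naming data can be sketched as follows (hypothetical placeholder data; the agreement measure and linkage method are assumptions for illustration):

        import numpy as np
        import matplotlib.pyplot as plt
        from scipy.cluster.hierarchy import linkage, dendrogram

        # Hypothetical naming matrix: rows = body-part stimuli, columns =
        # languages; entries = naming-agreement scores (placeholder values).
        rng = np.random.default_rng(0)
        X = rng.random((10, 6))
        labels = [f"part{i}" for i in range(10)]

        # Average-linkage clustering; a dendrogram with similar merge heights
        # is "flat" (as found for face parts), one with nested merges is
        # "deep" (as found for body parts).
        Z = linkage(X, method="average", metric="euclidean")
        dendrogram(Z, labels=labels)
        plt.show()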
  • Huizeling, E., Wang, H., Holland, C., & Kessler, K. (2021). Changes in theta and alpha oscillatory signatures of attentional control in older and middle age. European Journal of Neuroscience, 54(1), 4314-4337. doi:10.1111/ejn.15259.

    Abstract

    Recent behavioural research has reported age-related changes in the costs of refocusing attention from a temporal (rapid serial visual presentation) to a spatial (visual search) task. Using magnetoencephalography, we have now compared the neural signatures of attention refocusing between three age groups (19–30, 40–49 and 60+ years) and found differences in task-related modulation and cortical localisation of alpha and theta oscillations. Efficient, faster refocusing in the youngest group compared to both middle age and older groups was reflected in parietal theta effects that were significantly reduced in the older groups. Residual parietal theta activity in older individuals was beneficial to attentional refocusing and could reflect preserved attention mechanisms. Slowed refocusing of attention, especially when a target required consolidation, in the older and middle-aged adults was accompanied by a posterior theta deficit and increased recruitment of frontal (middle-aged and older groups) and temporal (older group only) areas, demonstrating a posterior to anterior processing shift. Theta but not alpha modulation correlated with task performance, suggesting that older adults' stronger and more widely distributed alpha power modulation could reflect decreased neural precision or dedifferentiation but requires further investigation. Our results demonstrate that older adults present with different alpha and theta oscillatory signatures during attentional control, reflecting cognitive decline and, potentially, also different cognitive strategies in an attempt to compensate for decline.

    Additional information

    supplementary material
  • Hulten, A., Karvonen, L., Laine, M., & Salmelin, R. (2014). Producing speech with a newly learned morphosyntax and vocabulary: An MEG study. Journal of Cognitive Neuroscience, 26(8), 1721-1735. doi:10.1162/jocn_a_00558.
  • Humphries, S., Holler*, J., Crawford, T., & Poliakoff*, E. (2021). Cospeech gestures are a window into the effects of Parkinson’s disease on action representations. Journal of Experimental Psychology: General, 150(8), 1581-1597. doi:10.1037/xge0001002.

    Abstract

    (* indicates joint senior authors.) Parkinson’s disease impairs motor function and cognition, which together affect language and communication. Co-speech gestures are a form of language-related actions that provide imagistic depictions of the speech content they accompany. Gestures rely on visual and motor imagery, but it is unknown whether gesture representations require the involvement of intact neural sensory and motor systems. We tested this hypothesis with a fine-grained analysis of co-speech action gestures in Parkinson’s disease. 37 people with Parkinson’s disease and 33 controls described two scenes featuring actions which varied in their inherent degree of bodily motion. In addition to the perspective of action gestures (gestural viewpoint/first- vs. third-person perspective), we analysed how Parkinson’s patients represent manner (how something/someone moves) and path information (where something/someone moves to) in gesture, depending on the degree of bodily motion involved in the action depicted. We replicated an earlier finding that people with Parkinson’s disease are less likely to gesture about actions from a first-person perspective – preferring instead to depict actions gesturally from a third-person perspective – and show that this effect is modulated by the degree of bodily motion in the actions being depicted. When describing high motion actions, the Parkinson’s group were specifically impaired in depicting manner information in gesture and their use of third-person path-only gestures was significantly increased. Gestures about low motion actions were relatively spared. These results inform our understanding of the neural and cognitive basis of gesture production by providing neuropsychological evidence that action gesture production relies on intact motor network function.

    Additional information

    Open data and code
  • Hustá, C., Zheng, X., Papoutsi, C., & Piai, V. (2021). Electrophysiological signatures of conceptual and lexical retrieval from semantic memory. Neuropsychologia, 161: 107988. doi:10.1016/j.neuropsychologia.2021.107988.

    Abstract

    Retrieval from semantic memory of conceptual and lexical information is essential for producing speech. It is unclear whether there are differences in the neural mechanisms of conceptual and lexical retrieval when spreading activation through semantic memory is initiated by verbal or nonverbal settings. The same twenty participants took part in two EEG experiments. The first experiment examined conceptual and lexical retrieval following nonverbal settings, whereas the second experiment was a replication of previous studies examining conceptual and lexical retrieval following verbal settings. Target pictures were presented after constraining and nonconstraining contexts. In the nonverbal settings, contexts were provided as two priming pictures (e.g., constraining: nest, feather; nonconstraining: anchor, lipstick; target picture: BIRD). In the verbal settings, contexts were provided as sentences (e.g., constraining: “The farmer milked a...”; nonconstraining: “The child drew a...”; target picture: COW). Target pictures were named faster following constraining contexts in both experiments, indicating that conceptual preparation starts before target picture onset in constraining conditions. In the verbal experiment, we replicated the alpha-beta power decreases in constraining relative to nonconstraining conditions before target picture onset. No such power decreases were found in the nonverbal experiment. Power decreases in constraining relative to nonconstraining conditions were significantly different between experiments. Our findings suggest that participants engage in conceptual preparation following verbal and nonverbal settings, albeit differently. The retrieval of a target word, initiated by verbal settings, is associated with alpha-beta power decreases. By contrast, broad conceptual preparation alone, prompted by nonverbal settings, does not seem enough to elicit alpha-beta power decreases. These findings have implications for theories of oscillations and semantic memory.

    Additional information

    1-s2.0-S0028393221002414-mmc1.pdf
  • Ille, S., Ohlerth, A.-K., Colle, D., Colle, H., Dragoy, O., Goodden, J., Robe, P., Rofes, A., Mandonnet, E., Robert, E., Satoer, D., Viegas, C., Visch-Brink, E., van Zandvoort, M., & Krieg, S. (2021). Augmented reality for the virtual dissection of white matter pathways. Acta Neurochirurgica, (4), 895-903. doi:10.1007/s00701-019-04159-x.

    Abstract

    Background: The human white matter pathway network is complex and of critical importance for functionality. Thus, learning and understanding white matter tract anatomy is important for the training of neuroscientists and neurosurgeons. The study aims to test and evaluate a new method for fiber dissection using augmented reality (AR) in a group which is experienced in cadaver white matter dissection courses and in vivo tractography.

    Methods: Fifteen neurosurgeons, neurolinguists, and neuroscientists participated in this questionnaire-based study. We presented five cases of patients with left-sided perisylvian gliomas who underwent awake craniotomy. Diffusion tensor imaging fiber tracking (DTI FT) was performed and the language-related networks were visualized, separated into different tracts by color. Participants were able to virtually dissect the prepared DTI FTs using a spatial computer and AR goggles. The application was evaluated through a questionnaire with answers from 0 (minimum) to 10 (maximum).

    Results: Participants rated the overall experience of AR fiber dissection with a median of 8 points (mean ± standard deviation 8.5 ± 1.4). Usefulness for fiber dissection courses and education in general was rated with 8 (8.3 ± 1.4) and 8 (8.1 ± 1.5) points, respectively. Educational value was expected to be high for several target audiences (student: median 9, 8.6 ± 1.4; resident: 9, 8.5 ± 1.8; surgeon: 9, 8.2 ± 2.4; scientist: 8.5, 8.0 ± 2.4). Even clinical application of AR fiber dissection was expected to be of value with a median of 7 points (7.0 ± 2.5).
  • Indefrey, P., & Levelt, W. J. M. (2004). The spatial and temporal signatures of word production components. Cognition, 92(1-2), 101-144. doi:10.1016/j.cognition.2002.06.001.

    Abstract

    This paper presents the results of a comprehensive meta-analysis of the relevant imaging literature on word production (82 experiments). In addition to the spatial overlap of activated regions, we also analyzed the available data on the time course of activations. The analysis specified regions and time windows of activation for the core processes of word production: lexical selection, phonological code retrieval, syllabification, and phonetic/articulatory preparation. A comparison of the word production results with studies on auditory word/non-word perception and reading showed that the time course of activations in word production is, on the whole, compatible with the temporal constraints that perception processes impose on the production processes they affect in picture/word interference paradigms.
  • Indefrey, P. (1998). De neurale architectuur van taal: Welke hersengebieden zijn betrokken bij het spreken. Neuropraxis, 2(6), 230-237.
  • Indefrey, P., Hellwig, F. M., Herzog, H., Seitz, R. J., & Hagoort, P. (2004). Neural responses to the production and comprehension of syntax in identical utterances. Brain and Language, 89(2), 312-319. doi:10.1016/S0093-934X(03)00352-3.

    Abstract

    Following up on an earlier positron emission tomography (PET) experiment (Indefrey et al., 2001), we used a scene description paradigm to investigate whether a posterior inferior frontal region subserving syntactic encoding for speaking is also involved in syntactic parsing during listening. In the language production part of the experiment, subjects described visually presented scenes using either sentences, sequences of noun phrases, or sequences of syntactically unrelated words. In the language comprehension part of the experiment, subjects were auditorily presented with the same kinds of utterances and judged whether they matched the visual scenes. We were able to replicate the previous finding of a region in caudal Broca's area that is sensitive to the complexity of syntactic encoding in language production. In language comprehension, no hemodynamic activation differences due to syntactic complexity were found. Given that correct performance in the judgment task did not require syntactic processing of the auditory stimuli, the results suggest that the degree to which listeners recruit syntactic processing resources in language comprehension may be a function of the syntactic demands of the task or the stimulus material.
  • Indefrey, P., Gruber, O., Brown, C. M., Hagoort, P., Posse, S., & Kleinschmidt, A. (1998). Lexicality and not syllable frequency determine lateralized premotor activation during the pronunciation of word-like stimuli: An fMRI study. NeuroImage, 7, S4.
  • Indefrey, P. (2014). Time course of word production does not support a parallel input architecture. Language, Cognition and Neuroscience, 29(1), 33-34. doi:10.1080/01690965.2013.847191.

    Abstract

    Hickok's enterprise to unify psycholinguistic and motor control models is highly stimulating. Nonetheless, there are problems of the model with respect to the time course of neural activation in word production, the flexibility for continuous speech, and the need for non-motor feedback.

    Files private

    Request files
  • Ischebeck, A., Indefrey, P., Usui, N., Nose, I., Hellwig, F. M., & Taira, M. (2004). Reading in a regular orthography: An fMRI study investigating the role of visual familiarity. Journal of Cognitive Neuroscience, 16(5), 727-741. doi:10.1162/089892904970708.

    Abstract

    In order to separate the cognitive processes associated with phonological encoding and the use of a visual word form lexicon in reading, it is desirable to compare the processing of words presented in a visually familiar form with words in a visually unfamiliar form. Japanese Kana orthography offers this possibility. Two phonologically equivalent but visually dissimilar syllabaries allow the writing of, for example, foreign loanwords in two ways, only one of which is visually familiar. Familiarly written words, unfamiliarly written words, and pseudowords were presented in both Kana syllabaries (yielding six conditions in total) to participants during an fMRI measurement with a silent articulation task (Experiment 1) and a phonological lexical decision task (Experiment 2) using an event-related design. Consistent over two experimental tasks, the three different stimulus types (familiar, unfamiliar, and pseudoword) were found to activate selectively different brain regions previously associated with phonological encoding and word retrieval or meaning. Compatible with the predictions of the dual-route model for reading, pseudowords and visually unfamiliar words, which have to be read using phonological assembly, caused an increase in brain activity in left inferior frontal regions (BA 44/47), as compared to visually familiar words. Visually familiar and unfamiliar words were found to activate a range of areas associated with lexico-semantic processing more strongly than pseudowords, such as the left and right temporo-parietal region (BA 39/40), a region in the left middle/inferior temporal gyrus (BA 20/21), and the posterior cingulate (BA 31).
  • Jadoul, Y., De Boer, B., & Ravignani, A. (2024). Parselmouth for bioacoustics: Automated acoustic analysis in Python. Bioacoustics, 33(1), 1-19. doi:10.1080/09524622.2023.2259327.

    Abstract

    Bioacoustics increasingly relies on large datasets and computational methods. The need to batch-process large amounts of data and the increased focus on algorithmic processing require software tools. To optimally assist in a bioacoustician’s workflow, software tools need to be as simple and effective as possible. Five years ago, the Python package Parselmouth was released to provide easy and intuitive access to all functionality in the Praat software. Whereas Praat is principally designed for phonetics and speech processing, plenty of bioacoustics studies have used its advanced acoustic algorithms. Here, we evaluate existing usage of Parselmouth and discuss in detail several studies which used the software library. We argue that Parselmouth has the potential to be used even more in bioacoustics research, and suggest future directions to be pursued with the help of Parselmouth.
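
    As a minimal sketch of the kind of workflow the paper describes, Parselmouth exposes Praat's acoustic analyses as ordinary Python calls (the file name below is hypothetical; any mono WAV recording would do):

        import numpy as np
        import parselmouth  # pip install praat-parselmouth

        snd = parselmouth.Sound("call.wav")      # hypothetical recording

        pitch = snd.to_pitch()                   # Praat's pitch analysis
        f0 = pitch.selected_array["frequency"]   # Hz; 0 where unvoiced
        intensity = snd.to_intensity()

        print(f"duration: {snd.duration:.2f} s")
        print(f"median F0 (voiced frames): {np.median(f0[f0 > 0]):.1f} Hz")
        print(f"mean intensity: {intensity.values.mean():.1f} dB")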
  • Yu, X., Janse, E., & Schoonen, R. (2021). The effect of learning context on L2 listening development: Knowledge and processing. Studies in Second Language Acquisition, 43(2), 329-354. doi:10.1017/S0272263120000534.

    Abstract

    Little research has been done on the effect of learning context on L2 listening development. Motivated by DeKeyser’s (2015) skill acquisition theory of second language acquisition, this study compares L2 listening development in study abroad (SA) and at home (AH) contexts from both language knowledge and processing perspectives. One hundred forty-nine Chinese postgraduates studying in either China or the United Kingdom participated in a battery of listening tasks at the beginning and at the end of an academic year. These tasks measure auditory vocabulary knowledge and listening processing efficiency (i.e., accuracy, speed, and stability of processing) in word recognition, grammatical processing, and semantic analysis. Results show that, provided equal starting levels, the SA learners made more progress than the AH learners in speed of processing across the language processing tasks, with less clear results for vocabulary acquisition. Studying abroad may be an effective intervention for L2 learning, especially in terms of processing speed.
  • Janse, E., & Klitsch, J. (2004). Auditieve perceptie bij gezonde sprekers en bij sprekers met verworven taalstoornissen. Afasiologie, 26(1), 2-6.
  • Janse, E., & Andringa, S. J. (2021). The roles of cognitive abilities and hearing acuity in older adults’ recognition of words taken from fast and spectrally reduced speech. Applied Psycholinguistics, 42(3), 763-790. doi:10.1017/S0142716421000047.

    Abstract

    Previous literature has identified several cognitive abilities as predictors of individual differences in speech perception. Working memory was chief among them, but effects have also been found for processing speed. Most research has been conducted on speech in noise, but fast and unclear articulation also makes listening challenging, particularly for older listeners. As a first step toward specifying the cognitive mechanisms underlying spoken word recognition, we set up this study to determine which factors explain unique variation in word identification accuracy in fast speech, and the extent to which this was affected by further degradation of the speech signal. To that end, 105 older adults were tested on identification accuracy of fast words in unaltered and degraded conditions in which the speech stimuli were low-pass filtered. They were also tested on processing speed, memory, vocabulary knowledge, and hearing sensitivity. A structural equation analysis showed that only memory and hearing sensitivity explained unique variance in word recognition in both listening conditions. Working memory was more strongly associated with performance in the unfiltered than in the filtered condition. These results suggest that memory skills, rather than speed, facilitate the mapping of single words onto stored lexical representations, particularly in conditions of medium difficulty.
  • Janse, E. (2004). Word perception in fast speech: Artificially time-compressed vs. naturally produced fast speech. Speech Communication, 42, 155-173. doi:10.1016/j.specom.2003.07.001.

    Abstract

    Natural fast speech differs from normal-rate speech with respect to its temporal pattern. Previous results showed that word intelligibility of heavily artificially time-compressed speech could not be improved by making its temporal pattern more similar to that of natural fast speech. This might have been due to the extrapolation of timing rules for natural fast speech to rates that are much faster than can be attained by human speakers. The present study investigates whether, at a speech rate that human speakers can attain, artificially time-compressed speech is easier to process if its timing pattern is similar to that of naturally produced fast speech. Our first experiment suggests, however, that word processing speed was slowed down, relative to linear compression. In a second experiment, word processing of artificially time-compressed speech was compared with processing of naturally produced fast speech. Even when naturally produced fast speech is perfectly intelligible, its less careful articulation, combined with the changed timing pattern, slows down processing, relative to linearly time-compressed speech. Furthermore, listeners preferred artificially time-compressed speech over naturally produced fast speech. These results suggest that linearly time-compressed speech has both a temporal and a segmental advantage over natural fast speech.
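
    Linear (uniform) time compression of the kind used as a baseline in this work can be approximated with a phase-vocoder stretch; a sketch under assumptions (hypothetical file names; librosa's implementation is one of several possible choices):

        import librosa
        import soundfile as sf

        y, sr = librosa.load("sentence.wav", sr=None)   # hypothetical input

        # rate=1.5 compresses uniformly to 1.5x speed without altering pitch;
        # natural fast speech instead compresses segments non-uniformly
        # (unstressed syllables more than stressed ones).
        y_fast = librosa.effects.time_stretch(y, rate=1.5)
        sf.write("sentence_fast.wav", y_fast, sr)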
  • Janse, E., Nooteboom, S. G., & Quené, H. (2003). Word-level intelligibility of time-compressed speech: Prosodic and segmental factors. Speech Communication, 41, 287-301. doi:10.1016/S0167-6393(02)00130-9.

    Abstract

    In this study we investigate whether speakers, in line with the predictions of the Hyper- and Hypospeech theory, speed up most during the least informative parts and less during the more informative parts, when they are asked to speak faster. We expected listeners to benefit from these changes in timing, and our main goal was to find out whether making the temporal organisation of artificially time-compressed speech more like that of natural fast speech would improve intelligibility over linear time compression. Our production study showed that speakers reduce unstressed syllables more than stressed syllables, thereby making the prosodic pattern more pronounced. We extrapolated fast speech timing to even faster rates because we expected that the more salient prosodic pattern could be exploited in difficult listening situations. However, at very fast speech rates, applying fast speech timing worsens intelligibility. We argue that the non-uniform way of speeding up may not be due to an underlying communicative principle, but may result from speakers’ inability to speed up otherwise. As both prosodic and segmental information contribute to word recognition, we conclude that extrapolating fast speech timing to extremely fast rates distorts this balance between prosodic and segmental information.
  • Janse, E., & Jesse, A. (2014). Working memory affects older adults’ use of context in spoken-word recognition. Quarterly Journal of Experimental Psychology, 67, 1842-1862. doi:10.1080/17470218.2013.879391.

    Abstract

    Many older listeners report difficulties in understanding speech in noisy situations. Working memory and other cognitive skills may modulate, however, older listeners’ ability to use context information to alleviate the effects of noise on spoken-word recognition. In the present study, we investigated whether working memory predicts older adults’ ability to immediately use context information in the recognition of words embedded in sentences, presented in different listening conditions. In a phoneme-monitoring task, older adults were asked to detect as fast and as accurately as possible target phonemes in sentences spoken by a target speaker. Target speech was presented without noise, with fluctuating speech-shaped noise, or with competing speech from a single distractor speaker. The gradient measure of contextual probability (derived from a separate offline rating study) mainly affected the speed of recognition, with only a marginal effect on detection accuracy. Contextual facilitation was modulated by older listeners’ working memory and age across listening conditions. Working memory and age, as well as hearing loss, were also the most consistent predictors of overall listening performance. Older listeners’ immediate benefit from context in spoken-word recognition thus relates to their ability to keep and update a semantic representation of the sentence content in working memory.

    Files private

    Request files
  • Jansen, N. A., Braden, R. O., Srivastava, S., Otness, E. F., Lesca, G., Rossi, M., Nizon, M., Bernier, R. A., Quelin, C., Van Haeringen, A., Kleefstra, T., Wong, M. M. K., Whalen, S., Fisher, S. E., Morgan, A. T., & Van Bon, B. W. (2021). Clinical delineation of SETBP1 haploinsufficiency disorder. European Journal of Human Genetics, 29, 1198 -1205. doi:10.1038/s41431-021-00888-9.

    Abstract

    SETBP1 haploinsufficiency disorder (MIM#616078) is caused by haploinsufficiency of SETBP1 on chromosome 18q12.3, but there has not yet been any systematic evaluation of the major features of this monogenic syndrome, assessing penetrance and expressivity. We describe the first comprehensive study to delineate the associated clinical phenotype, with findings from 34 individuals, including 24 novel cases, all of whom have a SETBP1 loss-of-function variant or single (coding) gene deletion, confirmed by molecular diagnostics. The most commonly reported clinical features included mild motor developmental delay, speech impairment, intellectual disability, hypotonia, vision impairment, attention/concentration deficits, and hyperactivity. Although there is a mild overlap in certain facial features, the disorder does not lead to a distinctive recognizable facial gestalt. As well as providing insight into the clinical spectrum of SETBP1 haploinsufficiency disorder, this report puts forward care recommendations for patient management.

    Additional information

    supplementary table
  • Jansma, B. M., & Schiller, N. O. (2004). Monitoring syllable boundaries during speech production. Brain and Language, 90(1-3), 311-317. doi:10.1016/S0093-934X(03)00443-7.

    Abstract

    This study investigated the encoding of syllable boundary information during speech production in Dutch. Based on Levelt's model of phonological encoding, we hypothesized segments and syllable boundaries to be encoded in an incremental way. In a self-monitoring experiment, decisions about the syllable affiliation (first or second syllable) of a pre-specified consonant, which was the third phoneme in a word, were required (e.g., ka.No 'canoe' vs. kaN.sel 'pulpit'; capital letters indicate pivotal consonants, dots mark syllable boundaries). First syllable responses were faster than second syllable responses, indicating the incremental nature of segmental encoding and syllabification during speech production planning. The results of the experiment are discussed in the context of Levelt's model of phonological encoding.
  • Janssen, D. P., Roelofs, A., & Levelt, W. J. M. (2004). Stem complexity and inflectional encoding in language production. Journal of Psycholinguistic Research, 33(5), 365-381. doi:10.1023/B:JOPR.0000039546.60121.a8.

    Abstract

    Three experiments are reported that examined whether stem complexity plays a role in inflecting polymorphemic words in language production. Experiment 1 showed that preparation effects for words with polymorphemic stems are larger when they are produced among words with constant inflectional structures compared to words with variable inflectional structures and simple stems. This replicates earlier findings for words with monomorphemic stems (Janssen et al., 2002). Experiments 2 and 3 showed that when inflectional structure is held constant, the preparation effects are equally large with simple and compound stems, and with compound and complex adjectival stems. These results indicate that inflectional encoding is blind to the complexity of the stem, which suggests that specific inflectional rather than generic morphological frames guide the generation of inflected forms in speaking words.
  • Janssen, J., Díaz-Caneja, C. M., Alloza, C., Schippers, A., De Hoyos, L., Santonja, J., Gordaliza, P. M., Buimer, E. E. L., van Haren, N. E. M., Cahn, W., Arango, C., Kahn, R. S., Hulshoff Pol, H. E., & Schnack, H. G. (2021). Dissimilarity in sulcal width patterns in the cortex can be used to identify patients with schizophrenia with extreme deficits in cognitive performance. Schizophrenia Bulletin, 47(2), 552-561. doi:10.1093/schbul/sbaa131.

    Abstract

    Schizophrenia is a biologically complex disorder with multiple regional deficits in cortical brain morphology. In addition, interindividual heterogeneity of cortical morphological metrics is larger in patients with schizophrenia when compared to healthy controls. Exploiting interindividual differences in the severity of cortical morphological deficits in patients instead of focusing on group averages may aid in detecting biologically informed homogeneous subgroups. The person-based similarity index (PBSI) of brain morphology indexes an individual’s morphometric similarity across numerous cortical regions amongst a sample of healthy subjects. We extended the PBSI such that it indexes the morphometric similarity of an independent individual (eg, a patient) with respect to healthy control subjects. By employing a normative modeling approach on longitudinal data, we determined an individual’s degree of morphometric dissimilarity to the norm. We calculated the PBSI for sulcal width (PBSI-SW) in patients with schizophrenia and healthy control subjects (164 patients and 164 healthy controls; 656 magnetic resonance imaging scans) and associated it with cognitive performance and cortical sulcation index. A subgroup of patients with markedly deviant PBSI-SW showed extreme deficits in cognitive performance and cortical sulcation. Progressive reduction of PBSI-SW in the schizophrenia group relative to healthy controls was driven by these deviating individuals. By explicitly leveraging interindividual differences in the severity of PBSI-SW deficits, neuroimaging-driven subgrouping of patients is feasible. As such, our results pave the way for future applications of morphometric similarity indices for subtyping of clinical populations.
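
    The core of the PBSI can be sketched compactly (a simplification: the published pipeline involves normative modelling of longitudinal data and covariate adjustment; plain correlations over hypothetical data are used here only to convey the idea):

        import numpy as np

        def pbsi(profile, control_profiles):
            """Mean correlation between one individual's regional profile
            (e.g., sulcal widths across cortical regions) and each healthy
            control's profile; lower values = more deviant morphology."""
            sims = [np.corrcoef(profile, c)[0, 1] for c in control_profiles]
            return float(np.mean(sims))

        # Hypothetical data: 50 controls x 68 regions, one patient profile.
        rng = np.random.default_rng(0)
        controls = rng.normal(size=(50, 68))
        patient = rng.normal(size=68)
        print(f"PBSI-SW: {pbsi(patient, controls):.3f}")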

    Files private

    Request files
  • Janzen, G., & Van Turennout, M. (2004). Selective neural representation of objects relevant for navigation. Nature Neuroscience, 7(6), 673-677. doi:10.1038/nn1257.

    Abstract

    As people find their way through their environment, objects at navigationally relevant locations can serve as crucial landmarks. The parahippocampal gyrus has previously been shown to be involved in object and scene recognition. In the present study, we investigated the neural representation of navigationally relevant locations. Healthy human adults viewed a route through a virtual museum with objects placed at intersections (decision points) or at simple turns (non-decision points). Event-related functional magnetic resonance imaging (fMRI) data were acquired during subsequent recognition of the objects in isolation. Neural activity in the parahippocampal gyrus reflected the navigational relevance of an object's location in the museum. Parahippocampal responses were selectively increased for objects that occurred at decision points, independent of attentional demands. This increase occurred for forgotten as well as remembered objects, showing implicit retrieval of navigational information. The automatic storage of relevant object location in the parahippocampal gyrus provides a part of the neural mechanism underlying successful navigation.
  • Jara-Ettinger, J., & Rubio-Fernández, P. (2021). Quantitative mental state attributions in language understanding. Science Advances, 7: eabj0970. doi:10.1126/sciadv.abj0970.

    Abstract

    Human social intelligence relies on our ability to infer other people’s mental states such as their beliefs, desires, and intentions. While people are proficient at mental state inference from physical action, it is unknown whether people can make inferences of comparable granularity from simple linguistic events. Here, we show that people can make quantitative mental state attributions from simple referential expressions, replicating the fine-grained inferential structure characteristic of nonlinguistic theory of mind. Moreover, people quantitatively adjust these inferences after brief exposures to speaker-specific speech patterns. These judgments matched the predictions made by our computational model of theory of mind in language, but could not be explained by a simpler qualitative model that attributes mental states deductively. Our findings show how the connection between language and theory of mind runs deep, with their interaction showing in one of the most fundamental forms of human communication: reference.

    Additional information

    https://osf.io/h8qfy/
  • Jeltema, H., Ohlerth, A.-K., de Wit, A., Wagemakers, M., Rofes, A., Bastiaanse, R., & Drost, G. (2021). Comparing navigated transcranial magnetic stimulation mapping and "gold standard" direct cortical stimulation mapping in neurosurgery: a systematic review. Neurosurgical Review, (4), 1903-1920. doi:10.1007/s10143-020-01397-x.

    Abstract

    The objective of this systematic review is to create an overview of the literature on the comparison of navigated transcranial magnetic stimulation (nTMS) as a mapping tool to the current gold standard, which is (intraoperative) direct cortical stimulation (DCS) mapping. A search in the databases of PubMed, EMBASE, and Web of Science was performed. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines and recommendations were used. Thirty-five publications were included in the review, describing a total of 552 patients. All studies concerned either mapping of motor or language function. No comparative data for nTMS and DCS for other neurological functions were found. For motor mapping, the distances between the cortical representation of the different muscle groups identified by nTMS and DCS varied between 2 and 16 mm. Regarding mapping of language function, solely an object naming task was performed in the comparative studies on nTMS and DCS. Sensitivity and specificity ranged from 10% to 100% and from 13.3% to 98%, respectively, when nTMS language mapping was compared with DCS mapping. The positive predictive value (PPV) and negative predictive value (NPV) ranged from 17% to 75% and from 57% to 100%, respectively. The available evidence for nTMS as a mapping modality for motor and language function is discussed.
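
    For reference, the four reported measures follow directly from a 2x2 confusion matrix in which DCS serves as the gold standard; the counts below are illustrative only, not taken from the reviewed studies:

        def diagnostics(tp, fp, fn, tn):
            """Standard confusion-matrix measures used when comparing nTMS
            against DCS as the reference standard."""
            return {
                "sensitivity": tp / (tp + fn),
                "specificity": tn / (tn + fp),
                "PPV": tp / (tp + fp),
                "NPV": tn / (tn + fn),
            }

        # Illustrative counts: nTMS-positive/DCS-positive sites, etc.
        print(diagnostics(tp=18, fp=9, fn=2, tn=41))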
  • Jescheniak, J. D., Levelt, W. J. M., & Meyer, A. S. (2003). Specific word frequency is not all that counts in speech production: Comments on Caramazza, Costa, et al. (2001) and new experimental data. Journal of Experimental Psychology: Learning, Memory, & Cognition, 29(3), 432-438. doi:10.1037/0278-7393.29.3.432.

    Abstract

    A. Caramazza, A. Costa, M. Miozzo, and Y. Bi (2001) reported a series of experiments demonstrating that the ease of producing a word depends only on the frequency of that specific word but not on the frequency of a homophone twin. A. Caramazza, A. Costa, et al. concluded that homophones have separate word form representations and that the absence of frequency-inheritance effects for homophones undermines an important argument in support of 2-stage models of lexical access, which assume that syntactic (lemma) representations mediate between conceptual and phonological representations. The authors of this article evaluate the empirical basis of this conclusion, report 2 experiments demonstrating a frequency-inheritance effect, and discuss other recent evidence. It is concluded that homophones share a common word form and that the distinction between lemmas and word forms should be upheld.
  • Jesse, A., & McQueen, J. M. (2014). Suprasegmental lexical stress cues in visual speech can guide spoken-word recognition. Quarterly Journal of Experimental Psychology, 67, 793-808. doi:10.1080/17470218.2013.834371.

    Abstract

    Visual cues to the individual segments of speech and to sentence prosody guide speech recognition. The present study tested whether visual suprasegmental cues to the stress patterns of words can also constrain recognition. Dutch listeners use acoustic suprasegmental cues to lexical stress (changes in duration, amplitude, and pitch) in spoken-word recognition. We asked here whether they can also use visual suprasegmental cues. In two categorization experiments, Dutch participants saw a speaker say fragments of word pairs that were segmentally identical but differed in their stress realization (e.g., 'ca-vi from cavia "guinea pig" vs. 'ka-vi from kaviaar "caviar"). Participants were able to distinguish between these pairs from seeing a speaker alone. Only the presence of primary stress in the fragment, not its absence, was informative. Participants were able to distinguish visually primary from secondary stress on first syllables, but only when the fragment-bearing target word carried phrase-level emphasis. Furthermore, participants distinguished fragments with primary stress on their second syllable from those with secondary stress on their first syllable (e.g., pro-'jec from projector "projector" vs. 'pro-jec from projectiel "projectile"), independently of phrase-level emphasis. Seeing a speaker thus contributes to spoken-word recognition by providing suprasegmental information about the presence of primary lexical stress.
  • Jesse, A., Vrignaud, N., Cohen, M. M., & Massaro, D. W. (2000). The processing of information from multiple sources in simultaneous interpreting. Interpreting, 5(2), 95-115. doi:10.1075/intp.5.2.04jes.

    Abstract

    Language processing is influenced by multiple sources of information. We examined whether the performance in simultaneous interpreting would be improved when providing two sources of information, the auditory speech as well as corresponding lip-movements, in comparison to presenting the auditory speech alone. Although there was an improvement in sentence recognition when presented with visible speech, there was no difference in performance between these two presentation conditions when bilinguals simultaneously interpreted from English to German or from English to Spanish. The reason why visual speech did not contribute to performance could be the presentation of the auditory signal without noise (Massaro, 1998). This hypothesis should be tested in the future. Furthermore, it should be investigated if an effect of visible speech can be found for other contexts, when visual information could provide cues for emotions, prosody, or syntax.
  • Johnson, E. K., Jusczyk, P. W., Cutler, A., & Norris, D. (2003). Lexical viability constraints on speech segmentation by infants. Cognitive Psychology, 46(1), 65-97. doi:10.1016/S0010-0285(02)00507-8.

    Abstract

    The Possible Word Constraint limits the number of lexical candidates considered in speech recognition by stipulating that input should be parsed into a string of lexically viable chunks. For instance, an isolated single consonant is not a feasible word candidate. Any segmentation containing such a chunk is disfavored. Five experiments using the head-turn preference procedure investigated whether, like adults, 12-month-olds observe this constraint in word recognition. In Experiments 1 and 2, infants were familiarized with target words (e.g., rush), then tested on lists of nonsense items containing these words in “possible” (e.g., “niprush” [nip + rush]) or “impossible” positions (e.g., “prush” [p + rush]). The infants listened significantly longer to targets in “possible” versus “impossible” contexts when targets occurred at the end of nonsense items (rush in “prush”), but not when they occurred at the beginning (tan in “tance”). In Experiments 3 and 4, 12-month-olds were similarly familiarized with target words, but test items were real words in sentential contexts (win in “wind” versus “window”). The infants listened significantly longer to words in the “possible” condition regardless of target location. Experiment 5 with targets at the beginning of isolated real words (e.g., win in “wind”) replicated Experiment 2 in showing no evidence of viability effects in beginning position. Taken together, the findings suggest that, in situations in which 12-month-olds are required to rely on their word segmentation abilities, they give evidence of observing lexical viability constraints in the way that they parse fluent speech.
  • Jones, G., Cabiddu, F., Andrews, M., & Rowland, C. F. (2021). Chunks of phonological knowledge play a significant role in children’s word learning and explain effects of neighborhood size, phonotactic probability, word frequency and word length. Journal of Memory and Language, 119: 104232. doi:10.1016/j.jml.2021.104232.

    Abstract

    A key omission from many accounts of children’s early word learning is the linguistic knowledge that the child has acquired up to the point when learning occurs. We simulate this knowledge using a computational model that learns phoneme and word sequence knowledge from naturalistic language corpora. We show how this simple model is able to account for effects of word length, word frequency, neighborhood density and phonotactic probability on children’s early word learning. Moreover, we show how effects of neighborhood density and phonotactic probability on word learning are largely influenced by word length, with our model being able to capture all effects. We then use predictions from the model to show how the ease with which a child learns a new word from maternal input is directly influenced by the phonological knowledge that the child has acquired from other words up to the point of encountering the new word. This work has major implications: models and theories of early word learning need to incorporate existing sublexical and lexical knowledge in explaining developmental change, while well-established indices of word learning are rejected in favor of phonological knowledge of varying grain sizes.

    Additional information

    supplementary data; research data
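
    Code sketch

    A minimal Python sketch in the spirit of the model described above: phoneme-sequence chunks are learned from a corpus, and a new word is easier to learn the fewer known chunks are needed to cover it. The bigram inventory, the greedy covering, and the toy corpus are simplifying assumptions, not the authors' model.

        # Hypothetical illustration: learn phoneme bigram "chunks" from a
        # corpus, then score a new word by how many pieces cover it.

        from collections import Counter

        def learn_chunks(corpus, min_count=2):
            """Collect phoneme bigrams seen at least min_count times."""
            counts = Counter()
            for word in corpus:
                for i in range(len(word) - 1):
                    counts[word[i:i + 2]] += 1
            return {chunk for chunk, n in counts.items() if n >= min_count}

        def n_pieces(word, chunks):
            """Greedily cover a word with known bigrams, falling back to
            single phonemes; fewer pieces = more familiar chunks."""
            i, pieces = 0, 0
            while i < len(word):
                i += 2 if word[i:i + 2] in chunks else 1
                pieces += 1
            return pieces

        chunks = learn_chunks(["baba", "mama", "banana", "nana"])
        print(n_pieces("banana", chunks))  # 3: built from familiar chunks
        print(n_pieces("zyxwv", chunks))   # 5: no familiar chunks at all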
  • Jongman, S. R., Khoe, Y. H., & Hintz, F. (2021). Vocabulary size influences spontaneous speech in native language users: Validating the use of automatic speech recognition in individual differences research. Language and Speech, 64(1), 35-51. doi:10.1177/0023830920911079.

    Abstract

    Previous research has shown that vocabulary size affects performance on laboratory word production tasks. Individuals who know many words show faster lexical access and retrieve more words belonging to pre-specified categories than individuals who know fewer words. The present study examined the relationship between receptive vocabulary size and speaking skills as assessed in a natural sentence production task. We asked whether measures derived from spontaneous responses to everyday questions correlate with the size of participants’ vocabulary. Moreover, we assessed the suitability of automatic speech recognition for the analysis of participants’ responses in complex language production data. We found that vocabulary size predicted indices of spontaneous speech: Individuals with a larger vocabulary produced more words and had a higher speech-silence ratio compared to individuals with a smaller vocabulary. Importantly, these relationships were reliably identified using manual and automated transcription methods. Taken together, our results suggest that spontaneous speech elicitation is a useful method to investigate natural language production and that automatic speech recognition can alleviate the burden of labor-intensive speech transcription.
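
    Code sketch

    A minimal Python sketch of how measures such as the speech-silence ratio could be derived from time-aligned transcripts (e.g., ASR output) and related to vocabulary scores across participants. The data format, the toy responses, and the vocabulary scores are hypothetical.

        # Hypothetical illustration: one time-aligned response per
        # participant, given as (start_s, end_s, word) tuples.

        from statistics import correlation  # Python 3.10+

        def speech_silence_ratio(segments):
            """Speech time divided by pause time within one response."""
            speech = sum(end - start for start, end, _ in segments)
            total = segments[-1][1] - segments[0][0]
            silence = max(total - speech, 1e-9)  # avoid division by zero
            return speech / silence

        responses = [
            [(0.0, 0.4, "ik"), (0.5, 0.9, "denk"), (1.0, 1.6, "dat")],
            [(0.0, 0.5, "nou"), (2.0, 2.4, "ja")],
        ]
        vocab_scores = [82, 55]
        ratios = [speech_silence_ratio(r) for r in responses]
        print(correlation(vocab_scores, ratios))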
  • Jordens, P. (2004). Systematiek en dynamiek bij de verwerving van Finietheid. Toegepaste Taalwetenschap in Artikelen, 71, 9-22.

    Abstract

    In early Dutch learner varieties, there is no evidence of finiteness being a functional category. There is no V2 (verb second): no correlation between inflectional morphology and movement. Initially, learners express the illocutive function of finiteness through the use of illocutive markers, with the non-use of an illocutive marker expressing the default illocutive function of assertion. Illocutive markers function as adjuncts with scope over the predicate. Illocutive markers become reanalysed as functional elements. The driving force is the acquisition of the auxiliary verbs that occur with past participles. It leads to a reanalysis of illocutive markers as two separate elements: an auxiliary verb and a scope adverb. The (modal) auxiliary carries the illocutive function. Lexical verb-argument structure (including the external argument) occurs within the domain of the auxiliary verb. The predicate, as the focus constituent, occurs within the domain of a scope adverb. This reanalysis establishes a position for the external argument within the domain of AUX. The acquisition of AUX causes the acquisition of a (hierarchical) structure with a complement as a constituent which represents an underlying verb-argument structure, a predicate as the domain of elements that are in focus, and an external (specifier) position as a landing site for elements with topic function.
  • Junge, C., & Cutler, A. (2014). Early word recognition and later language skills. Brain Sciences, 4(4), 532-559. doi:10.3390/brainsci4040532.

    Abstract

    Recent behavioral and electrophysiological evidence has highlighted the long-term importance for language skills of an early ability to recognize words in continuous speech. We here present further tests of this long-term link in the form of follow-up studies conducted with two (separate) groups of infants who had earlier participated in speech segmentation tasks. Each study extends prior follow-up tests: Study 1 by using a novel follow-up measure that taps into online processing, Study 2 by assessing language performance relationships over a longer time span than previously tested. Results of Study 1 show that brain correlates of speech segmentation ability at 10 months are positively related to 16-month-olds’ target fixations in a looking-while-listening task. Results of Study 2 show that infant speech segmentation ability no longer directly predicts language profiles at the age of five. However, a meta-analysis across our results and those of similar studies (Study 3) reveals that age at follow-up does not moderate effect size. Together, the results suggest that infants’ ability to recognize words in speech certainly benefits early vocabulary development; further observed relationships of later language skills to early word recognition may be consequent upon this vocabulary size effect.
  • Junge, C., Cutler, A., & Hagoort, P. (2014). Successful word recognition by 10-month-olds given continuous speech both at initial exposure and test. Infancy, 19(2), 179-193. doi:10.1111/infa.12040.

    Abstract

    Most words that infants hear occur within fluent speech. To compile a vocabulary, infants therefore need to segment words from speech contexts. This study is the first to investigate whether infants (here: 10-month-olds) can recognize words when both initial exposure and test presentation are in continuous speech. Electrophysiological evidence attests that this indeed occurs: An increased extended negativity (word recognition effect) appears for familiarized target words relative to control words. This response proved constant at the individual level: Only infants who showed this negativity at test had shown such a response, within six repetitions after first occurrence, during familiarization.
  • Kakimoto, N., Wongratwanich, P., Shimamoto, H., Kitisubkanchana, J., Tsujimoto, T., Shimabukuro, K., Verdonschot, R. G., Hasegawa, Y., & Murakami, S. (2024). Comparison of T2 values of the displaced unilateral disc and retrodiscal tissue of temporomandibular joints and their implications. Scientific Reports, 14: 1705. doi:10.1038/s41598-024-52092-6.

    Abstract

    Unilateral anterior disc displacement (uADD) has been shown to affect the contralateral joints qualitatively. This study aims to assess the quantitative T2 values of the articular disc and retrodiscal tissue of patients with uADD at 1.5 Tesla (T). The study included 65 uADD patients and 17 volunteers. The regions of interest on T2 maps were evaluated. The affected joints demonstrated significantly higher articular disc T2 values (31.5 ± 3.8 ms) than those of the unaffected joints (28.9 ± 4.5 ms) (P < 0.001). For retrodiscal tissue, T2 values of the unaffected (37.8 ± 5.8 ms) and affected joints (41.6 ± 7.1 ms) were significantly longer than those of normal volunteers (34.4 ± 3.2 ms) (P < 0.001). Furthermore, uADD without reduction (WOR) joints (43.3 ± 6.8 ms) showed significantly higher T2 values than the unaffected joints of both uADD with reduction (WR) (33.9 ± 3.8 ms) and uADDWOR (38.9 ± 5.8 ms), and the affected joints of uADDWR (35.8 ± 4.4 ms). The mean T2 value of the unaffected joints of uADDWOR was significantly longer than that of healthy volunteers (P < 0.001). These results provided quantitative evidence for the influence of the affected joints on the contralateral joints.
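
    Code sketch

    A minimal Python sketch of the kind of group comparison reported above, run on simulated T2 values; only the group means, standard deviations, and sizes are taken from the abstract, and the Welch t-test shown is illustrative rather than the authors' exact analysis.

        # Hypothetical illustration: compare retrodiscal-tissue T2 values
        # (ms) of affected joints against healthy volunteers.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        affected = rng.normal(41.6, 7.1, 65)    # simulated, means per abstract
        volunteers = rng.normal(34.4, 3.2, 17)  # simulated, means per abstract

        t, p = stats.ttest_ind(affected, volunteers, equal_var=False)
        print(f"Welch t = {t:.2f}, p = {p:.4f}")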
  • Kapteijns, B., & Hintz, F. (2021). Comparing predictors of sentence self-paced reading times: Syntactic complexity versus transitional probability metrics. PLoS One, 16(7): e0254546. doi:10.1371/journal.pone.0254546.

    Abstract

    When estimating the influence of sentence complexity on reading, researchers typically opt for one of two main approaches: Measuring syntactic complexity (SC) or transitional probability (TP). Comparisons of the predictive power of both approaches have yielded mixed results. To address this inconsistency, we conducted a self-paced reading experiment. Participants read sentences of varying syntactic complexity. From two alternatives, we selected the set of SC and TP measures, respectively, that provided the best fit to the self-paced reading data. We then compared the contributions of the SC and TP measures to reading times when entered into the same model. Our results showed that both measures explained significant portions of variance in self-paced reading times. Thus, researchers aiming to measure sentence complexity should take both SC and TP into account. All of the analyses were conducted with and without control variables known to influence reading times (word/sentence length, word frequency and word position) to showcase how the effects of SC and TP change in the presence of the control variables.

    Additional information

    supporting information
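
    Code sketch

    A minimal Python sketch of the model-comparison logic described above: fit reading times with SC measures, TP measures, and both, then compare the fits. The simulated data, the single-predictor stand-ins for SC and TP, and the OLS models are illustrative assumptions, not the study's analysis.

        # Hypothetical illustration: how much variance do SC and TP
        # predictors each explain in (simulated) self-paced reading times?

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(1)
        n = 500
        sc = rng.normal(size=n)             # e.g., embedding depth
        tp = 0.5 * sc + rng.normal(size=n)  # TP correlates with SC by design
        rt = 300 + 20 * sc + 10 * tp + rng.normal(0, 30, n)  # reading times

        for name, preds in {"SC": [sc], "TP": [tp], "SC+TP": [sc, tp]}.items():
            X = sm.add_constant(np.column_stack(preds))
            fit = sm.OLS(rt, X).fit()
            print(f"{name:5}  R2 = {fit.rsquared:.3f}  AIC = {fit.aic:.1f}")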
  • Karaca, F., Brouwer, S., Unsworth, S., & Huettig, F. (2024). Morphosyntactic predictive processing in adult heritage speakers: Effects of cue availability and spoken and written language experience. Language, Cognition and Neuroscience, 39(1), 118-135. doi:10.1080/23273798.2023.2254424.

    Abstract

    We investigated prediction skills of adult heritage speakers and the role of written and spoken language experience on predictive processing. Using visual world eye-tracking, we focused on predictive use of case-marking cues in verb-medial and verb-final sentences in Turkish with adult Turkish heritage speakers (N = 25) and Turkish monolingual speakers (N = 24). Heritage speakers predicted in verb-medial sentences (when verb-semantic and case-marking cues were available), but not in verb-final sentences (when only case-marking cues were available), while monolinguals predicted in both. Prediction skills of heritage speakers were modulated by their spoken language experience in Turkish and written language experience in both languages. Overall, these results strongly suggest that verb-semantic information is needed to scaffold the use of morphosyntactic cues for prediction in heritage speakers. The findings also support the notion that both spoken and written language experience play an important role in predictive spoken language processing.
  • Karadöller, D. Z., Sumer, B., & Ozyurek, A. (2021). Effects and non-effects of late language exposure on spatial language development: Evidence from deaf adults and children. Language Learning and Development, 17(1), 1-25. doi:10.1080/15475441.2020.1823846.

    Abstract

    Late exposure to the first language, as in the case of deaf children with hearing parents, hinders the production of linguistic expressions, even in adulthood. Less is known about the development of language soon after language exposure and about whether late exposure hinders all domains of language in children and adults. We compared late signing adults and children (mean age = 8;5), 2 years after exposure to sign language, to their age-matched native signing peers in expressions of two types of locative relations that are acquired in a certain cognitive-developmental order: view-independent (IN-ON-UNDER) and view-dependent (LEFT-RIGHT). Late signing children and adults differed from native signers in their use of linguistic devices for view-dependent relations but not for view-independent relations. These effects were also modulated by morphological complexity. The hindering effects of late language exposure on the development of language in children and adults are thus not absolute but are modulated by cognitive and linguistic complexity.
  • Karadöller, D. Z., Peeters, D., Manhardt, F., Özyürek, A., & Ortega, G. (2024). Iconicity and gesture jointly facilitate learning of second language signs at first exposure in hearing non-signers. Language Learning. Advance online publication. doi:10.1111/lang.12636.

    Abstract

    When learning a spoken second language (L2), words overlapping in form and meaning with one’s native language (L1) help break into the new language. When non-signing speakers learn a sign language as L2, such forms are absent because of the modality difference (L1: speech, L2: sign). In such cases, non-signing speakers might use iconic form-meaning mappings in signs or their own gestural experience as gateways into the to-be-acquired sign language. Here, we investigated how both these factors may jointly contribute to the acquisition of sign language vocabulary by hearing non-signers. Participants were presented with three types of signs in NGT (Sign Language of the Netherlands): arbitrary signs, and iconic signs with either high or low overlap with gestures. Signs that were both iconic and highly overlapping with gestures boosted learning most at first exposure, and this effect remained the day after. The findings highlight the influence of modality-specific factors in supporting the acquisition of a signed lexicon.
  • Karsan, Ç., Ocak, F., & Bulut, T. (2024). Characterization of speech and language phenotype in the 8p23.1 duplication syndrome. European Child & Adolescent Psychiatry. Advance online publication. doi:10.1007/s00787-024-02448-0.

    Abstract

    The 8p23.1 duplication syndrome is a rare genetic condition with an estimated prevalence rate of 1 out of 58,000. Although the syndrome has been associated with speech and language delays, a comprehensive assessment of speech and language functions has not been undertaken in this population. To address this issue, the present study reports a rigorous speech and language assessment, in addition to oral-facial and developmental assessments, of a 50-month-old Turkish-speaking boy who was diagnosed with the 8p23.1 duplication syndrome. Standardized tests of development, articulation and phonology, and receptive and expressive language, as well as a language sample analysis, were administered to characterize the patient’s speech and language skills. The language sample was obtained in an ecologically valid, free-play and conversation context. The language sample was then analyzed and compared to a database of age-matched typically developing children (n = 33) in terms of intelligibility, morphosyntax, semantics/vocabulary, discourse, verbal facility, and percentage of errors at word and utterance levels. The results revealed mild to severe problems in articulation and phonology, receptive and expressive language skills, and morphosyntax (mean length of utterance in morphemes). Future research with larger sample sizes and detailed speech and language assessment is needed to delineate the speech and language profile of individuals with the 8p23.1 duplication syndrome, which will guide targeted speech and language interventions.