Publications

  • Franke, B., Hoogman, M., Vasquez, A. A., Heister, J., Savelkoul, P., Naber, M., Scheffer, H., Kiemeney, L., Kan, C., Kooij, J., & Buitelaar, J. (2008). Association of the dopamine transporter (SLC6A3/DAT1) gene 9-6 haplotype with adult ADHD. American Journal of Medical Genetics Part B: Neuropsychiatric Genetics, 147, 1576-1579. doi:10.1002/ajmg.b.30861.

    Abstract

    ADHD is a neuropsychiatric disorder characterized by chronic hyperactivity, inattention and impulsivity, which affects about 5% of school-age children. ADHD persists into adulthood in at least 15% of cases. It is highly heritable and familial influences seem strongest for ADHD persisting into adulthood. However, most of the genetic research in ADHD has been carried out in children with the disorder. The gene that has received most attention in ADHD genetics is SLC6A3/DAT1 encoding the dopamine transporter. In the current study we attempted to replicate in adults with ADHD the reported association of a 10–6 SLC6A3-haplotype, formed by the 10-repeat allele of the variable number of tandem repeat (VNTR) polymorphism in the 3′ untranslated region of the gene and the 6-repeat allele of the VNTR in intron 8 of the gene, with childhood ADHD. In addition, we wished to explore the role of a recently described VNTR in intron 3 of the gene. Two hundred sixteen patients and 528 controls were included in the study. We found a 9–6 SLC6A3-haplotype, rather than the 10–6 haplotype, to be associated with ADHD in adults. The intron 3 VNTR showed no association with adult ADHD. Our findings converge with earlier reports and suggest that age is an important factor to be taken into account when assessing the association of SLC6A3 with ADHD. If confirmed in other studies, the differential association of the gene with ADHD in children and in adults might imply that SLC6A3 plays a role in modulating the ADHD phenotype, rather than causing it.
  • Friederici, A. D., & Levelt, W. J. M. (1986). Cognitive processes of spatial coordinate assignment: On weighting perceptual cues. Naturwissenschaften, 73, 455-458.
  • Ganushchak, L. Y., & Schiller, N. O. (2008). Brain error-monitoring activity is affected by semantic relatedness: An event-related brain potentials study. Journal of Cognitive Neuroscience, 20(5), 927-940. doi:10.1162/jocn.2008.20514.

    Abstract

    Speakers continuously monitor what they say. Sometimes, self-monitoring malfunctions and errors pass undetected and uncorrected. In the field of action monitoring, an event-related brain potential, the error-related negativity (ERN), is associated with error processing. The present study relates the ERN to verbal self-monitoring and investigates how the ERN is affected by auditory distractors during verbal monitoring. We found that the ERN was largest following errors that occurred after semantically related distractors had been presented, as compared to semantically unrelated ones. This result demonstrates that the ERN is sensitive not only to response conflict resulting from the incompatibility of motor responses but also to more abstract lexical retrieval conflict resulting from activation of multiple lexical entries. This, in turn, suggests that the functioning of the verbal self-monitoring system during speaking is comparable to other performance monitoring, such as action monitoring.
  • Ganushchak, L. Y., & Schiller, N. O. (2008). Motivation and semantic context affect brain error-monitoring activity: An event-related brain potentials study. NeuroImage, 39, 395-405. doi:10.1016/j.neuroimage.2007.09.001.

    Abstract

    During speech production, we continuously monitor what we say. In situations in which speech errors potentially have more severe consequences, e.g. during a public presentation, our verbal self-monitoring system may pay more attention to preventing errors than in situations in which speech errors are more acceptable, such as a casual conversation. In an event-related potential study, we investigated whether or not motivation affected participants’ performance using a picture naming task in a semantic blocking paradigm. Semantic context of to-be-named pictures was manipulated; blocks were semantically related (e.g., cat, dog, horse, etc.) or semantically unrelated (e.g., cat, table, flute, etc.). Motivation was manipulated independently by monetary reward. The motivation manipulation did not affect error rate during picture naming. However, the high-motivation condition yielded increased amplitude and latency values of the error-related negativity (ERN) compared to the low-motivation condition, presumably indicating higher monitoring activity. Furthermore, participants showed semantic interference effects in reaction times and error rates. The ERN amplitude was also larger during semantically related than unrelated blocks, presumably indicating that semantic relatedness induces more conflict between possible verbal responses.
  • García Lecumberri, M. L., Cooke, M., Cutugno, F., Giurgiu, M., Meyer, B. T., Scharenborg, O., Van Dommelen, W., & Volin, J. (2008). The non-native consonant challenge for European languages. In INTERSPEECH 2008 - 9th Annual Conference of the International Speech Communication Association (pp. 1781-1784). ISCA Archive.

    Abstract

    This paper reports on a multilingual investigation into the effects of different masker types on native and non-native perception in a VCV consonant recognition task. Native listeners outperformed 7 other language groups, but all groups showed a similar ranking of maskers. Strong first language (L1) interference was observed, both from the sound system and from the L1 orthography. Universal acoustic-perceptual tendencies are also at work in both native and non-native sound identifications in noise. The effect of linguistic distance, however, was less clear: in large multilingual studies, listener variables may overpower other factors.
  • Giglio, L., Ostarek, M., Sharoh, D., & Hagoort, P. (2024). Diverging neural dynamics for syntactic structure building in naturalistic speaking and listening. PNAS, 121(11): e2310766121. doi:10.1073/pnas.2310766121.

    Abstract

    The neural correlates of sentence production have been mostly studied with constraining task paradigms that introduce artificial task effects. In this study, we aimed to gain a better understanding of syntactic processing in spontaneous production vs. naturalistic comprehension. We extracted word-by-word metrics of phrase-structure building with top-down and bottom-up parsers that make different hypotheses about the timing of structure building. In comprehension, structure building proceeded in an integratory fashion and led to an increase in activity in posterior temporal and inferior frontal areas. In production, structure building was anticipatory and predicted an increase in activity in the inferior frontal gyrus. Newly developed production-specific parsers highlighted the anticipatory and incremental nature of structure building in production, which was confirmed by a converging analysis of the pausing patterns in speech. Overall, the results showed that the unfolding of syntactic processing diverges between speaking and listening.
  • Gisselgard, J., Petersson, K. M., Baddeley, A., & Ingvar, M. (2003). The irrelevant speech effect: A PET study. Neuropsychologia, 41, 1899-1911. doi:10.1016/S0028-3932(03)00122-2.

    Abstract

    Positron emission tomography (PET) was performed in normal volunteers during a serial recall task under the influence of irrelevant speech comprising both single item repetition and multi-item sequences. An interaction approach was used to identify brain areas specifically related to the irrelevant speech effect. We interpreted activations as compensatory recruitment of complementary working memory processing, and decreased activity in terms of suppression of task relevant areas invoked by the irrelevant speech. The interaction between the distractors and working memory revealed a significant effect in the left, and to a lesser extent in the right, superior temporal region, indicating that initial phonological processing was relatively suppressed. Additional areas of decreased activity were observed in an a priori defined cortical network related to verbal working memory, incorporating the bilateral superior temporal and inferior/middle frontal cortices, extending into Broca’s area on the left. We also observed a weak activation in the left inferior parietal cortex, a region suggested to reflect the phonological store, the subcomponent where the interference is assumed to take place. The results suggest that the irrelevant speech effect is correlated with and thus tentatively may be explained in terms of a suppression of components of the verbal working memory network as outlined. The results can be interpreted in terms of inhibitory top–down attentional mechanisms attenuating the influence of the irrelevant speech, although additional studies are clearly necessary to more fully characterize the nature of this phenomenon and its theoretical implications for existing short-term memory models.
  • Goldin-Meadow, S., Chee So, W., Ozyurek, A., & Mylander, C. (2008). The natural order of events: How speakers of different languages represent events nonverbally. Proceedings of the National Academy of Sciences of the USA, 105(27), 9163-9168. doi:10.1073/pnas.0710060105.

    Abstract

    To test whether the language we speak influences our behavior even when we are not speaking, we asked speakers of four languages differing in their predominant word orders (English, Turkish, Spanish, and Chinese) to perform two nonverbal tasks: a communicative task (describing an event by using gesture without speech) and a noncommunicative task (reconstructing an event with pictures). We found that the word orders speakers used in their everyday speech did not influence their nonverbal behavior. Surprisingly, speakers of all four languages used the same order on both nonverbal tasks. This order, actor–patient–act, is analogous to the subject–object–verb pattern found in many languages of the world and, importantly, in newly developing gestural languages. The findings provide evidence for a natural order that we impose on events when describing and reconstructing them nonverbally and exploit when constructing language anew.

    Additional information

    GoldinMeadow_2008_naturalSuppl.pdf
  • Goltermann*, O., Alagöz*, G., Molz, B., & Fisher, S. E. (2024). Neuroimaging genomics as a window into the evolution of human sulcal organization. Cerebral Cortex, 34(3): bhae078. doi:10.1093/cercor/bhae078.

    Abstract

    * Ole Goltermann and Gökberk Alagöz contributed equally.
    Primate brain evolution has involved prominent expansions of the cerebral cortex, with largest effects observed in the human lineage. Such expansions were accompanied by fine-grained anatomical alterations, including increased cortical folding. However, the molecular bases of evolutionary alterations in human sulcal organization are not yet well understood. Here, we integrated data from recently completed large-scale neuroimaging genetic analyses with annotations of the human genome relevant to various periods and events in our evolutionary history. These analyses identified single-nucleotide polymorphism (SNP) heritability enrichments in fetal brain human-gained enhancer (HGE) elements for a number of sulcal structures, including the central sulcus, which is implicated in human hand dexterity. We zeroed in on a genomic region that harbors DNA variants associated with left central sulcus shape, an HGE element, and genetic loci involved in neurogenesis including ZIC4, to illustrate the value of this approach for probing the complex factors contributing to human sulcal evolution.

    Additional information

    supplementary data
    link to preprint
  • González-Peñas, J., Alloza, C., Brouwer, R., Díaz-Caneja, C. M., Costas, J., González-Lois, N., Gallego, A. G., De Hoyos, L., Gurriarán, X., Andreu-Bernabeu, Á., Romero-García, R., Fañanas, L., Bobes, J., Pinto, A. G., Crespo-Facorro, B., Martorell, L., Arrojo, M., Vilella, E., Guitiérrez-Zotes, A., Perez-Rando, M., Moltó, M. D., CIBERSAM group, Buimer, E., Van Haren, N., Cahn, W., O’Donovan, M., Kahn, R. S., Arango, C., Hulshoff Pol, H., Janssen, J., & Schnack, H. (2024). Accelerated cortical thinning in schizophrenia is associated with rare and common predisposing variation to schizophrenia and neurodevelopmental disorders. Biological Psychiatry. Advance online publication. doi:10.1016/j.biopsych.2024.03.011.

    Abstract

    Background

    Schizophrenia is a highly heritable disorder characterized by increased cortical thinning throughout the lifespan. Studies have reported a shared genetic basis between schizophrenia and cortical thickness. However, no genes whose expression is related to abnormal cortical thinning in schizophrenia have been identified.

    Methods

    We conducted linear mixed models to estimate the rates of accelerated cortical thinning across 68 regions from the Desikan-Killiany atlas in individuals with schizophrenia compared to healthy controls from a large longitudinal sample (NCases = 169 and NControls = 298, aged 16-70 years). We studied the correlation between gene expression data from the Allen Human Brain Atlas and accelerated thinning estimates across cortical regions. We finally explored the functional and genetic underpinnings of the genes most contributing to accelerated thinning.

    Results

    We described a global pattern of accelerated cortical thinning in individuals with schizophrenia compared to healthy controls. Genes underexpressed in cortical regions exhibiting this accelerated thinning were downregulated in several psychiatric disorders and were enriched for both common and rare disrupting variation for schizophrenia and neurodevelopmental disorders. In contrast, none of these enrichments were observed for baseline cross-sectional cortical thickness differences.

    Conclusions

    Our findings suggest that accelerated cortical thinning, rather than cortical thickness alone, serves as an informative phenotype for neurodevelopmental disruptions in schizophrenia. We highlight the genetic and transcriptomic correlates of this accelerated cortical thinning, emphasizing the need for future longitudinal studies to elucidate the role of genetic variation and the temporal-spatial dynamics of gene expression in brain development and aging in schizophrenia.

    Additional information

    supplementary materials
  • Goral, M., Antolovic, K., Hejazi, Z., & Schulz, F. M. (2024). Using a translanguaging framework to examine language production in a trilingual person with aphasia. Clinical Linguistics & Phonetics. Advance online publication. doi:10.1080/02699206.2024.2328240.

    Abstract

    When language abilities in aphasia are assessed in clinical and research settings, the standard practice is to examine each language of a multilingual person separately. But many multilingual individuals, with and without aphasia, mix their languages regularly when they communicate with other speakers who share their languages. We applied a novel approach to scoring language production of a multilingual person with aphasia. Our aim was to discover whether the assessment outcome would differ meaningfully when we count accurate responses in only the target language of the assessment session versus when we apply a translanguaging framework, that is, count all accurate responses, regardless of the language in which they were produced. The participant is a Farsi-German-English speaking woman with chronic moderate aphasia. We examined the participant’s performance on two picture-naming tasks, an answering wh-question task, and an elicited narrative task. The results demonstrated that scores in English, the participant’s third-learned and least-impaired language, did not differ between the two scoring methods. Performance in German, the participant’s moderately impaired second language, benefited from translanguaging-based scoring across the board. In Farsi, her weakest language post-CVA, the participant’s scores were higher under the translanguaging-based scoring approach in some but not all of the tasks. Our findings suggest that whether a translanguaging-based scoring makes a difference in the results obtained depends on relative language abilities and on pragmatic constraints, with additional influence of the linguistic distances between the languages in question.
  • Goudbeek, M., Cutler, A., & Smits, R. (2008). Supervised and unsupervised learning of multidimensionally varying nonnative speech categories. Speech Communication, 50(2), 109-125. doi:10.1016/j.specom.2007.07.003.

    Abstract

    The acquisition of novel phonetic categories is hypothesized to be affected by the distributional properties of the input, the relation of the new categories to the native phonology, and the availability of supervision (feedback). These factors were examined in four experiments in which listeners were presented with novel categories based on vowels of Dutch. Distribution was varied such that the categorization depended on the single dimension duration, the single dimension frequency, or both dimensions at once. Listeners were clearly sensitive to the distributional information, but unidimensional contrasts proved easier to learn than multidimensional. The native phonology was varied by comparing Spanish versus American English listeners. Spanish listeners found categorization by frequency easier than categorization by duration, but this was not true of American listeners, whose native vowel system makes more use of duration-based distinctions. Finally, feedback was either available or not; this comparison showed supervised learning to be significantly superior to unsupervised learning.
  • Gray, R., & Jordan, F. (2000). Language trees support the express-train sequence of Austronesian expansion. Nature, 405, 1052-1055. doi:10.1038/35016575.

    Abstract

    Languages, like molecules, document evolutionary history. Darwin (1) observed that evolutionary change in languages greatly resembled the processes of biological evolution: inheritance from a common ancestor and convergent evolution operate in both. Despite many suggestions (2-4), few attempts have been made to apply the phylogenetic methods used in biology to linguistic data. Here we report a parsimony analysis of a large language data set. We use this analysis to test competing hypotheses - the "express-train" (5) and the "entangled-bank" (6, 7) models - for the colonization of the Pacific by Austronesian-speaking peoples. The parsimony analysis of a matrix of 77 Austronesian languages with 5,185 lexical items produced a single most-parsimonious tree. The express-train model was converted into an ordered geographical character and mapped onto the language tree. We found that the topology of the language tree was highly compatible with the express-train model.
  • Griffin, Z. M., & Bock, K. (2000). What the eyes say about speaking. Psychological Science, 11(4), 274-279. doi:10.1111/1467-9280.00255.

    Abstract

    To study the time course of sentence formulation, we monitored the eye movements of speakers as they described simple events. The similarity between speakers' initial eye movements and those of observers performing a nonverbal event-comprehension task suggested that response-relevant information was rapidly extracted from scenes, allowing speakers to select grammatical subjects based on comprehended events rather than salience. When speaking extemporaneously, speakers began fixating pictured elements less than a second before naming them within their descriptions, a finding consistent with incremental lexical encoding. Eye movements anticipated the order of mention despite changes in picture orientation, in who-did-what-to-whom, and in sentence structure. The results support Wundt's theory of sentence production.

  • Groszer, M., Keays, D. A., Deacon, R. M. J., De Bono, J. P., Prasad-Mulcare, S., Gaub, S., Baum, M. G., French, C. A., Nicod, J., Coventry, J. A., Enard, W., Fray, M., Brown, S. D. M., Nolan, P. M., Pääbo, S., Channon, K. M., Costa, R. M., Eilers, J., Ehret, G., Rawlins, J. N. P., & Fisher, S. E. (2008). Impaired synaptic plasticity and motor learning in mice with a point mutation implicated in human speech deficits. Current Biology, 18(5), 354-362. doi:10.1016/j.cub.2008.01.060.

    Abstract

    The most well-described example of an inherited speech and language disorder is that observed in the multigenerational KE family, caused by a heterozygous missense mutation in the FOXP2 gene. Affected individuals are characterized by deficits in the learning and production of complex orofacial motor sequences underlying fluent speech and display impaired linguistic processing for both spoken and written language. The FOXP2 transcription factor is highly similar in many vertebrate species, with conserved expression in neural circuits related to sensorimotor integration and motor learning. In this study, we generated mice carrying an identical point mutation to that of the KE family, yielding the equivalent arginine-to-histidine substitution in the Foxp2 DNA-binding domain. Homozygous R552H mice show severe reductions in cerebellar growth and postnatal weight gain but are able to produce complex innate ultrasonic vocalizations. Heterozygous R552H mice are overtly normal in brain structure and development. Crucially, although their baseline motor abilities appear to be identical to wild-type littermates, R552H heterozygotes display significant deficits in species-typical motor-skill learning, accompanied by abnormal synaptic plasticity in striatal and cerebellar neural circuits.

    Additional information

    mmc1.pdf
  • Le Guen, O. (2003). Quand les morts reviennent, réflexion sur l'ancestralité chez les Mayas des Basses Terres. Journal de la Société des Américanistes, 89(2), 171-205.

    Abstract

    When the dead come home… Remarks on ancestor worship among the Lowland Mayas. In Amerindian ethnographical literature, ancestor worship is often mentioned but evidence of its existence is lacking. This article will try to demonstrate that some Lowland Maya do worship ancestors; it will use precise criteria taken from ethnological studies of societies where ancestor worship is common, compared to Maya beliefs and practices. All Souls’ Day, or hanal pixan, seems to be the most significant manifestation of this cult. Our approach will be comparative, through time – using colonial and ethnographical data of the twentieth century – and space – considering the uses and beliefs of two Maya groups, the Yucatec and the Lacandon Maya.
  • Le Guen, O. (2008). Ubèel pixan: El camino de las almas. Ancestros familiares y colectivos entre los Mayas Yucatecos. Península, 3(1), 83-120. Retrieved from http://www.revistas.unam.mx/index.php/peninsula/article/viewFile/44354/40086.

    Abstract

    The aim of this article is to analyze the funerary customs and ritual for the souls among contemporary Yucatec Maya in order to better understand their relations with pre-Hispanic burial patterns. It is suggested that the souls of the dead are considered ancestors, and that family and collective ancestors can be distinguished according to several criteria: the place of burial, the place of ritual performance and the ritual treatment. In this proposition, funerary practices, as well as ritual categories of ancestors (family or collective), are considered reminiscences of ancient practices whose traces can be found throughout historical sources. Through an analysis of current funerary practices and their variations, this article aims to demonstrate that over time, and despite socio-economic changes, ancient funerary practices (specifically from the post-classic period) have kept some homogeneity, preserving essential characteristics that can still be observed today.
  • Gullberg, M., & Indefrey, P. (2008). Cognitive and neural prerequisites for time in language: Any answers? Language Learning, 58(suppl. 1), 207-216. doi:10.1111/j.1467-9922.2008.00472.x.
  • Gullberg, M., De Bot, K., & Volterra, V. (2008). Gestures and some key issues in the study of language development. Gesture, 8(2), 149-179. doi:10.1075/gest.8.2.03gul.

    Abstract

    The purpose of the current paper is to outline how gestures can contribute to the study of some key issues in language development. Specifically, we (1) briefly summarise what is already known about gesture in the domains of first and second language development, and development or changes over the life span more generally; (2) highlight theoretical and empirical issues in these domains where gestures can contribute in important ways to further our understanding; and (3) summarise some common themes in all strands of research on language development that could be the target of concentrated research efforts.
  • Gullberg, M., & De Bot, K. (Eds.). (2008). Gestures in language development [Special Issue]. Gesture, 8(2).
  • Gullberg, M., & McCafferty, S. G. (2008). Introduction to gesture and SLA: Toward an integrated approach. Studies in Second Language Acquisition, 30(2), 133-146. doi:10.1017/S0272263108080285.

    Abstract

    The title of this special issue, Gesture and SLA: Toward an Integrated Approach, stems in large part from the idea known as integrationism, principally set forth by Harris (2003, 2005), which posits that it is time to “demythologize” linguistics, moving away from the “orthodox exponents” that have idealized the notion of language. The integrationist approach intends a view that focuses on communication—that is, language in use, language as a “fact of life” (Harris, 2003, p. 50). Although not all gesture studies embrace an integrationist view—indeed, the field applies numerous theories across various disciplines—it is nonetheless true that to study gesture is to study what has traditionally been called paralinguistic modes of interaction, with the paralinguistic label given on the assumption that gesture is not part of the core meaning of what is rendered linguistically. However, arguably, most researchers within gesture studies would maintain just the opposite: The studies presented in this special issue reflect a view whereby gesture is regarded as a central aspect of language in use, integral to how we communicate (make meaning) both with each other and with ourselves.
  • Gullberg, M., Hendriks, H., & Hickmann, M. (2008). Learning to talk and gesture about motion in French. First Language, 28(2), 200-236. doi:10.1177/0142723707088074.

    Abstract

    This study explores how French adults and children aged four and six years talk and gesture about voluntary motion, examining (1) how they encode path and manner in speech, (2) how they encode this information in accompanying gestures; and (3) whether gestures are co-expressive with speech or express other information. When path and manner are equally relevant, children’s and adults’ speech and gestures both focus on path, rather than on manner. Moreover, gestures are predominantly co-expressive with speech at all ages. However, when they are non-redundant, adults tend to gesture about path while talking about manner, whereas children gesture about both path and manner while talking about path. The discussion highlights implications for our understanding of speakers’ representations and their development.
  • Gumperz, J. J., & Levinson, S. C. (1991). Rethinking linguistic relativity. Current Anthropology, 32(5), 613-623. Retrieved from http://www.jstor.org/stable/2743696.
  • Gussenhoven, C., & Chen, A. (2000). Universal and language-specific effects in the perception of question intonation. In B. Yuan, T. Huang, & X. Tang (Eds.), Proceedings of the 6th International Conference on Spoken Language Processing (ICSLP) (pp. 91-94). Beijing: China Military Friendship Publish.

    Abstract

    Three groups of monolingual listeners, with Standard Chinese, Dutch and Hungarian as their native language, judged pairs of trisyllabic stimuli which differed only in their pitch pattern. The segmental structure of the stimuli was made up by the experimenters and presented to subjects as being taken from a little-known language spoken on a South Pacific island. Pitch patterns consisted of a single rise-fall located on or near the second syllable. By and large, listeners selected the stimulus with the higher peak, the later peak, and the higher end rise as the one that signalled a question, regardless of language group. The result is argued to reflect innate, non-linguistic knowledge of the meaning of pitch variation, notably Ohala’s Frequency Code. A significant difference between groups is explained as due to the influence of the mother tongue.
  • Guzmán Chacón, E., Ovando-Tellez, M., Thiebaut de Schotten, M., & Forkel, S. J. (2024). Embracing digital innovation in neuroscience: 2023 in review at NEUROCCINO. Brain Structure & Function, 229, 251-255. doi:10.1007/s00429-024-02768-6.
  • Hagoort, P., Wassenaar, M., & Brown, C. M. (2003). Syntax-related ERP-effects in Dutch. Cognitive Brain Research, 16(1), 38-50. doi:10.1016/S0926-6410(02)00208-2.

    Abstract

    In two studies subjects were required to read Dutch sentences that in some cases contained a syntactic violation, in other cases a semantic violation. All syntactic violations were word category violations. The design excluded differential contributions of expectancy to influence the syntactic violation effects. The syntactic violations elicited an Anterior Negativity between 300 and 500 ms. This negativity was bilateral and had a frontal distribution. Over posterior sites the same violations elicited a P600/SPS starting at about 600 ms. The semantic violations elicited an N400 effect. The topographic distribution of the AN was more frontal than the distribution of the classical N400 effect, indicating that the underlying generators of the AN and the N400 are, at least to a certain extent, non-overlapping. Experiment 2 partly replicated the design of Experiment 1, but with differences in rate of presentation and in the distribution of items over subjects, and without semantic violations. The word category violations resulted in the same effects as were observed in Experiment 1, showing that they were independent of some of the specific parameters of Experiment 1. The discussion presents a tentative account of the functional differences in the triggering conditions of the AN and the P600/SPS.
  • Hagoort, P. (2008). Should psychology ignore the language of the brain? Current Directions in Psychological Science, 17(2), 96-101. doi:10.1111/j.1467-8721.2008.00556.x.

    Abstract

    Claims that neuroscientific data do not contribute to our understanding of psychological functions have been made recently. Here I argue that these criticisms are solely based on an analysis of functional magnetic resonance imaging (fMRI) studies. However, fMRI is only one of the methods in the toolkit of cognitive neuroscience. I provide examples from research on event-related brain potentials (ERPs) that have contributed to our understanding of the cognitive architecture of human language functions. In addition, I provide evidence of (possible) contributions from fMRI measurements to our understanding of the functional architecture of language processing. Finally, I argue that a neurobiology of human language that integrates information about the necessary genetic and neural infrastructures will allow us to answer certain questions that are not answerable if all we have is evidence from behavior.
  • Hagoort, P., Wassenaar, M., & Brown, C. M. (2003). Real-time semantic compensation in patients with agrammatic comprehension: Electrophysiological evidence for multiple-route plasticity. Proceedings of the National Academy of Sciences of the United States of America, 100(7), 4340-4345. doi:10.1073/pnas.0230613100.

    Abstract

    To understand spoken language requires that the brain provides rapid access to different kinds of knowledge, including the sounds and meanings of words, and syntax. Syntax specifies constraints on combining words in a grammatically well formed manner. Agrammatic patients are deficient in their ability to use these constraints, due to a lesion in the perisylvian area of the language-dominant hemisphere. We report a study on real-time auditory sentence processing in agrammatic comprehenders, examining their ability to accommodate damage to the language system. We recorded event-related brain potentials (ERPs) in agrammatic comprehenders, nonagrammatic aphasics, and age-matched controls. When listening to sentences with grammatical violations, the agrammatic aphasics did not show the same syntax-related ERP effect as the two other subject groups. Instead, the waveforms of the agrammatic aphasics were dominated by a meaning-related ERP effect, presumably reflecting their attempts to achieve understanding by the use of semantic constraints. These data demonstrate that although agrammatic aphasics are impaired in their ability to exploit syntactic information in real time, they can reduce the consequences of a syntactic deficit by exploiting a semantic route. They thus provide evidence for the compensation of a syntactic deficit by a stronger reliance on another route in mapping sound onto meaning. This is a form of plasticity that we refer to as multiple-route plasticity.
  • Hagoort, P. (2008). The fractionation of spoken language understanding by measuring electrical and magnetic brain signals. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 363, 1055-1069. doi:10.1098/rstb.2007.2159.

    Abstract

    This paper focuses on what electrical and magnetic recordings of human brain activity reveal about spoken language understanding. Based on the high temporal resolution of these recordings, a fine-grained temporal profile of different aspects of spoken language comprehension can be obtained. Crucial aspects of speech comprehension are lexical access, selection and semantic integration. Results show that for words spoken in context, there is no ‘magic moment’ when lexical selection ends and semantic integration begins. Irrespective of whether words have early or late recognition points, semantic integration processing is initiated before words can be identified on the basis of the acoustic information alone. Moreover, for one particular event-related brain potential (ERP) component (the N400), equivalent impact of sentence- and discourse-semantic contexts is observed. This indicates that in comprehension, a spoken word is immediately evaluated relative to the widest interpretive domain available. In addition, this happens very quickly. Findings are discussed that show that often an unfolding word can be mapped onto discourse-level representations well before the end of the word. Overall, the time course of the ERP effects is compatible with the view that the different information types (lexical, syntactic, phonological, pragmatic) are processed in parallel and influence the interpretation process incrementally, that is as soon as the relevant pieces of information are available. This is referred to as the immediacy principle.
  • Hagoort, P. (1997). De rappe prater als gewoontedier [Review of the book Smooth talkers: The linguistic performance of auctioneers and sportscasters, by Koenraad Kuiper]. Psychologie, 16, 22-23.
  • Hagoort, P., & Brown, C. M. (2000). ERP effects of listening to speech compared to reading: the P600/SPS to syntactic violations in spoken sentences and rapid serial visual presentation. Neuropsychologia, 38, 1531-1549.

    Abstract

    In this study, event-related brain potential effects of speech processing are obtained and compared to similar effects in sentence reading. In two experiments sentences were presented that contained three different types of grammatical violations. In one experiment sentences were presented word by word at a rate of four words per second. The grammatical violations elicited a Syntactic Positive Shift (P600/SPS), 500 ms after the onset of the word that rendered the sentence ungrammatical. The P600/SPS consisted of two phases, an early phase with a relatively equal anterior-posterior distribution and a later phase with a strong posterior distribution. We interpret the first phase as an indication of structural integration complexity, and the second phase as an indication of failing parsing operations and/or an attempt at reanalysis. In the second experiment the same syntactic violations were presented in sentences spoken at a normal rate and with normal intonation. These violations elicited a P600/SPS with the same onset as was observed for the reading of these sentences. In addition two of the three violations showed a preceding frontal negativity, most clearly over the left hemisphere.
  • Hagoort, P., & Brown, C. M. (2000). ERP effects of listening to speech: semantic ERP effects. Neuropsychologia, 38, 1518-1530.

    Abstract

    In this study, event-related brain potential effects of speech processing are obtained and compared to similar effects in sentence reading. In two experiments spoken sentences were presented with semantic violations in sentence-final or mid-sentence positions. For these violations N400 effects were obtained that were very similar to N400 effects obtained in reading. However, the N400 effects in speech were preceded by an earlier negativity (N250). This negativity is not commonly observed with written input. The early effect is explained as a manifestation of a mismatch between the word forms expected on the basis of the context, and the actual cohort of activated word candidates that is generated on the basis of the speech signal.
  • Li, X., Hagoort, P., & Yang, Y. (2008). Event-related potential evidence on the influence of accentuation in spoken discourse comprehension in Chinese. Journal of Cognitive Neuroscience, 20(5), 906-915. doi:10.1162/jocn.2008.20512.

    Abstract

    In an event-related potential experiment with Chinese discourses as material, we investigated how and when accentuation influences spoken discourse comprehension in relation to the different information states of the critical words. These words could either provide new or old information. It was shown that variation of accentuation influenced the amplitude of the N400, with a larger amplitude for accented than deaccented words. In addition, there was an interaction between accentuation and information state. The N400 amplitude difference between accented and deaccented new information was smaller than that between accented and deaccented old information. The results demonstrate that, during spoken discourse comprehension, listeners rapidly extract the semantic consequences of accentuation in relation to the previous discourse context. Moreover, our results show that the N400 amplitude can be larger for correct (new, accented words) than incorrect (new, deaccented words) information. This, we argue, proves that the N400 does not react to semantic anomaly per se, but rather to semantic integration load, which is higher for new information.
  • Hagoort, P. (2003). How the brain solves the binding problem for language: A neurocomputational model of syntactic processing. NeuroImage, 20(suppl. 1), S18-S29. doi:10.1016/j.neuroimage.2003.09.013.

    Abstract

    Syntax is one of the components in the architecture of language processing that allows the listener/reader to bind single-word information into a unified interpretation of multiword utterances. This paper discusses ERP effects that have been observed in relation to syntactic processing. The fact that these effects differ from the semantic N400 indicates that the brain honors the distinction between semantic and syntactic binding operations. Two models of syntactic processing attempt to account for syntax-related ERP effects. One type of model is serial, with a first phase that is purely syntactic in nature (syntax-first model). The other type of model is parallel and assumes that information immediately guides the interpretation process once it becomes available. This is referred to as the immediacy model. ERP evidence is presented in support of the latter model. Next, an explicit computational model is proposed to explain the ERP data. This Unification Model assumes that syntactic frames are stored in memory and retrieved on the basis of the spoken or written word form input. The syntactic frames associated with the individual lexical items are unified by a dynamic binding process into a structural representation that spans the whole utterance. On the basis of a meta-analysis of imaging studies on syntax, it is argued that the left posterior inferior frontal cortex is involved in binding syntactic frames together, whereas the left superior temporal cortex is involved in retrieval of the syntactic frames stored in memory. Lesion data that support the involvement of this left frontotemporal network in syntactic processing are discussed.
  • Hagoort, P. (2003). Interplay between syntax and semantics during sentence comprehension: ERP effects of combining syntactic and semantic violations. Journal of Cognitive Neuroscience, 15(6), 883-899. doi:10.1162/089892903322370807.

    Abstract

    This study investigated the effects of combined semantic and syntactic violations in relation to the effects of single semantic and single syntactic violations on language-related event-related brain potential (ERP) effects (N400 and P600/SPS). Syntactic violations consisted of a mismatch in grammatical gender or number features of the definite article and the noun in sentence-internal or sentence-final noun phrases (NPs). Semantic violations consisted of semantically implausible adjective–noun combinations in the same NPs. Combined syntactic and semantic violations were a summation of these two respective violation types. ERPs were recorded while subjects read the sentences with the different types of violations and the correct control sentences. ERP effects were computed relative to ERPs elicited by the sentence-internal or sentence-final nouns. The size of the N400 effect to the semantic violation was increased by an additional syntactic violation (the syntactic boost). In contrast, the size of the P600/SPS to the syntactic violation was not affected by an additional semantic violation. This suggests that in the absence of syntactic ambiguity, the assignment of syntactic structure is independent of semantic context. However, semantic integration is influenced by syntactic processing. In the sentence-final position, additional global processing consequences were obtained as a result of earlier violations in the sentence. The resulting increase in the N400 amplitude to sentence-final words was independent of the nature of the violation. A speeded anomaly detection task revealed that it takes substantially longer to detect semantic than syntactic anomalies. These results are discussed in relation to the latency and processing characteristics of the N400 and P600/SPS effects. Overall, the results reveal an asymmetry in the interplay between syntax and semantics during on-line sentence comprehension.
  • Hagoort, P. (2008). Mijn omweg naar de filosofie. Algemeen Nederlands Tijdschrift voor Wijsbegeerte, 100(4), 303-310.
  • Hagoort, P. (1997). Semantic priming in Broca's aphasics at a short SOA: No support for an automatic access deficit. Brain and Language, 56, 287-300. doi:10.1006/brln.1997.1849.

    Abstract

    This study tests the recent claim that Broca’s aphasics are impaired in automatic lexical access, including the retrieval of word meaning. Subjects are required to perform a lexical decision on visually presented prime target pairs. Half of the word targets are preceded by a related word, half by an unrelated word. Primes and targets are presented with a long stimulus-onset-asynchrony (SOA) of 1400 msec and with a short SOA of 300 msec. Normal priming effects are observed in Broca’s aphasics for both SOAs. This result is discussed in the context of the claim that Broca’s aphasics suffer from an impairment in the automatic access of lexical–semantic information. It is argued that none of the current priming studies provides evidence supporting this claim, since with short SOAs priming effects have been reliably obtained in Broca’s aphasics. The results are more compatible with the claim that in many Broca’s aphasics the functional locus of their comprehension deficit is at the level of postlexical integration processes.
  • Hagoort, P. (2000). What we shall know only tomorrow. Brain and Language, 71, 89-92. doi:10.1006/brln.1999.2221.
  • Hagoort, P. (1997). Valt er nog te lachen zonder de rechter hersenhelft? Psychologie, 16, 52-55.
  • Hagoort, P., & Özyürek, A. (2024). Extending the architecture of language from a multimodal perspective. Topics in Cognitive Science. Advance online publication. doi:10.1111/tops.12728.

    Abstract

    Language is inherently multimodal. In spoken languages, combined spoken and visual signals (e.g., co-speech gestures) are an integral part of linguistic structure and language representation. This requires an extension of the parallel architecture, which needs to include the visual signals concomitant to speech. We present the evidence for the multimodality of language. In addition, we propose that distributional semantics might provide a format for integrating speech and co-speech gestures in a common semantic representation.
  • Hanulikova, A. (2008). Word recognition in possible word contexts. In M. Kokkonidis (Ed.), Proceedings of LingO 2007 (pp. 92-99). Oxford: Faculty of Linguistics, Philology, and Phonetics, University of Oxford.

    Abstract

    The Possible-Word Constraint (PWC; Norris, McQueen, Cutler, and Butterfield 1997) suggests that segmentation of continuous speech operates with a universal constraint that feasible words should contain a vowel. Single consonants, because they do not constitute syllables, are treated as non-viable residues. Two word-spotting experiments are reported that investigate whether the PWC really is a language-universal principle. According to the PWC, Slovak listeners should, just like Germans, be slower at spotting words in single consonant contexts (not feasible words) as compared to syllable contexts (feasible words)—even if single consonants can be words in Slovak. The results confirm the PWC in German but not in Slovak.
  • Harbusch, K., & Kempen, G. (2000). Complexity of linear order computation in Performance Grammar, TAG and HPSG. In Proceedings of Fifth International Workshop on Tree Adjoining Grammars and Related Formalisms (TAG+5) (pp. 101-106).

    Abstract

    This paper investigates the time and space complexity of word order computation in the psycholinguistically motivated grammar formalism of Performance Grammar (PG). In PG, the first stage of syntax assembly yields an unordered tree ('mobile') consisting of a hierarchy of lexical frames (lexically anchored elementary trees). Associated with each lexical frame is a linearizer—a Finite-State Automaton that locally computes the left-to-right order of the branches of the frame. Linearization takes place after the promotion component may have raised certain constituents (e.g. Wh- or focused phrases) into the domain of lexical frames higher up in the syntactic mobile. We show that the worst-case time and space complexity of analyzing input strings of length n is O(n⁵) and O(n⁴), respectively. This result compares favorably with the time complexity of word-order computations in Tree Adjoining Grammar (TAG). A comparison with Head-Driven Phrase Structure Grammar (HPSG) reveals that PG yields a more declarative linearization method, provided that the FSA is rewritten as an equivalent regular expression.
  • Harbusch, K., Kempen, G., & Vosse, T. (2008). A natural-language paraphrase generator for on-line monitoring and commenting incremental sentence construction by L2 learners of German. In Proceedings of WorldCALL 2008.

    Abstract

    Certain categories of language learners need feedback on the grammatical structure of sentences they wish to produce. In contrast with the usual NLP approach to this problem—parsing student-generated texts—we propose a generation-based approach aiming at preventing errors (“scaffolding”). In our ICALL system, students construct sentences by composing syntactic trees out of lexically anchored “treelets” via a graphical drag&drop user interface. A natural-language generator computes all possible grammatically well-formed sentences entailed by the student-composed tree, and intervenes immediately when the latter tree does not belong to the set of well-formed alternatives. Feedback is based on comparisons between the student-composed tree and the well-formed set. Frequently occurring errors are handled in terms of “malrules.” The system (implemented in JAVA and C++) currently focuses on constituent order in German as L2.
  • Haun, D. B. M. (2003). What's so special about spatial cognition. De Psychonoom, 18, 3-4.
  • Haun, D. B. M., & Call, J. (2008). Imitation recognition in great apes. Current Biology, 18(7), 288-290. doi:10.1016/j.cub.2008.02.031.

    Abstract

    Human infants imitate not only to acquire skill, but also as a fundamental part of social interaction [1], [2] and [3]. They recognise when they are being imitated by showing increased visual attention to imitators (implicit recognition) and by engaging in so-called testing behaviours (explicit recognition). Implicit recognition affords the ability to recognize structural and temporal contingencies between actions across agents, whereas explicit recognition additionally affords the ability to understand the directional impact of one's own actions on others' actions [1], [2] and [3]. Imitation recognition is thought to foster understanding of social causality, intentionality in others and the formation of a concept of self as different from other [3], [4] and [5]. Pigtailed macaques (Macaca nemestrina) implicitly recognize being imitated [6], but unlike chimpanzees [7], they show no sign of explicit imitation recognition. We investigated imitation recognition in 11 individuals from the four species of non-human great apes. We replicated results previously found with a chimpanzee [7] and, critically, have extended them to the other great ape species. Our results show a general prevalence of imitation recognition in all great apes and thereby demonstrate important differences between great apes and monkeys in their understanding of contingent social interactions.
  • Hayano, K. (2008). Talk and body: Negotiating action framework and social relationship in conversation. Studies in English and American Literature, 43, 187-198.
  • Hayano, K. (2003). Self-presentation as a face-threatening act: A comparative study of self-oriented topic introduction in English and Japanese. Veritas, 24, 45-58.
  • Heeschen, C., Ryalls, J., & Hagoort, P. (1988). Psychological stress in Broca's versus Wernicke's aphasia. Clinical Linguistics & Phonetics, 2, 309-316. doi:10.3109/02699208808985262.

    Abstract

    We advance the hypothesis here that the higher-than-average vocal pitch (F0) found for speech of Broca's aphasics in experimental settings is due, in part, to increased psychological stress. Two experiments were conducted which manipulated conversational constraints and the sentence forms to be produced by aphasic patients. Our study revealed significant differences between changes in vocal pitch of agrammatic Broca's aphasics versus those of Wernicke's aphasics and normal controls. It is suggested that the greater psychological stress experienced by the Broca's aphasics, but not by the Wernicke's aphasics, accounts for these observed differences.
  • Hegemann, L., Corfield, E. C., Askelund, A. D., Allegrini, A. G., Askeland, R. B., Ronald, A., Ask, H., St Pourcain, B., Andreassen, O. A., Hannigan, L. J., & Havdahl, A. (2024). Genetic and phenotypic heterogeneity in early neurodevelopmental traits in the Norwegian Mother, Father and Child Cohort Study. Molecular Autism, 15: 25. doi:10.1186/s13229-024-00599-0.

    Abstract

    Background
    Autism and different neurodevelopmental conditions frequently co-occur, as do their symptoms at sub-diagnostic threshold levels. Overlapping traits and shared genetic liability are potential explanations.

    Methods
    In the population-based Norwegian Mother, Father, and Child Cohort study (MoBa), we leverage item-level data to explore the phenotypic factor structure and genetic architecture underlying neurodevelopmental traits at age 3 years (N = 41,708–58,630) using maternal reports on 76 items assessing children’s motor and language development, social functioning, communication, attention, activity regulation, and flexibility of behaviors and interests.

    Results
    We identified 11 latent factors at the phenotypic level. These factors showed associations with diagnoses of autism and other neurodevelopmental conditions. Most shared genetic liabilities with autism, ADHD, and/or schizophrenia. Item-level GWAS revealed trait-specific genetic correlations with autism (items rg range = − 0.27–0.78), ADHD (items rg range = − 0.40–1), and schizophrenia (items rg range = − 0.24–0.34). We find little evidence of common genetic liability across all neurodevelopmental traits but more so for several genetic factors across more specific areas of neurodevelopment, particularly social and communication traits. Some of these factors, such as one capturing prosocial behavior, overlap with factors found in the phenotypic analyses. Other areas, such as motor development, seemed to have more heterogenous etiology, with specific traits showing a less consistent pattern of genetic correlations with each other.

    Conclusions
    These exploratory findings emphasize the etiological complexity of neurodevelopmental traits at this early age. In particular, diverse associations with neurodevelopmental conditions and genetic heterogeneity could inform follow-up work to identify shared and differentiating factors in the early manifestations of neurodevelopmental traits and their relation to autism and other neurodevelopmental conditions. This in turn could have implications for clinical screening tools and programs.
  • Heim, F., Scharff, C., Fisher, S. E., Riebel, K., & Ten Cate, C. (2024). Auditory discrimination learning and acoustic cue weighing in female zebra finches with localized FoxP1 knockdowns. Journal of Neurophysiology, 131, 950-963. doi:10.1152/jn.00228.2023.

    Abstract

    Rare disruptions of the transcription factor FOXP1 are implicated in a human neurodevelopmental disorder characterized by autism and/or intellectual disability with prominent problems in speech and language abilities. Avian orthologues of this transcription factor are evolutionarily conserved and highly expressed in specific regions of songbird brains, including areas associated with vocal production learning and auditory perception. Here, we investigated possible contributions of FoxP1 to song discrimination and auditory perception in juvenile and adult female zebra finches. They received lentiviral knockdowns of FoxP1 in one of two brain areas involved in auditory stimulus processing, HVC (proper name) or CMM (caudomedial mesopallium). Ninety-six females, distributed over different experimental and control groups, were trained to discriminate between two stimulus songs in an operant Go/Nogo paradigm and subsequently tested with an array of stimuli. This made it possible to assess how well they recognized and categorized altered versions of training stimuli and whether localized FoxP1 knockdowns affected the role of different features during discrimination and categorization of song. Although FoxP1 expression was significantly reduced by the knockdowns, neither discrimination of the stimulus songs nor categorization of songs modified in pitch, sequential order of syllables or by reversed playback were affected. Subsequently, we analyzed the full dataset to assess the impact of the different stimulus manipulations for cue weighing in song discrimination. Our findings show that zebra finches rely on multiple parameters for song discrimination, but with relatively more prominent roles for spectral parameters and syllable sequencing as cues for song discrimination.

    NEW & NOTEWORTHY In humans, mutations of the transcription factor FoxP1 are implicated in speech and language problems. In songbirds, FoxP1 has been linked to male song learning and female preference strength. We found that FoxP1 knockdowns in female HVC and caudomedial mesopallium (CMM) did not alter song discrimination or categorization based on spectral and temporal information. However, this large dataset allowed us to validate the greater weighting of spectral over temporal cues in song recognition.
  • Henderson, L., Coltheart, M., Cutler, A., & Vincent, N. (1988). Preface. Linguistics, 26(4), 519-520. doi:10.1515/ling.1988.26.4.519.
  • Hersh, T. A., Ravignani, A., & Whitehead, H. (2024). Cetaceans are the next frontier for vocal rhythm research. PNAS, 121(25): e2313093121. doi:10.1073/pnas.2313093121.

    Abstract

    While rhythm can facilitate and enhance many aspects of behavior, its evolutionary trajectory in vocal communication systems remains enigmatic. We can trace evolutionary processes by investigating rhythmic abilities in different species, but research to date has largely focused on songbirds and primates. We present evidence that cetaceans—whales, dolphins, and porpoises—are a missing piece of the puzzle for understanding why rhythm evolved in vocal communication systems. Cetaceans not only produce rhythmic vocalizations but also exhibit behaviors known or thought to play a role in the evolution of different features of rhythm. These behaviors include vocal learning abilities, advanced breathing control, sexually selected vocal displays, prolonged mother–infant bonds, and behavioral synchronization. The untapped comparative potential of cetaceans is further enhanced by high interspecific diversity, which generates natural ranges of vocal and social complexity for investigating various evolutionary hypotheses. We show that rhythm (particularly isochronous rhythm, when sounds are equally spaced in time) is prevalent in cetacean vocalizations but is used in different contexts by baleen and toothed whales. We also highlight key questions and research areas that will enhance understanding of vocal rhythms across taxa. By coupling an infraorder-level taxonomic assessment of vocal rhythm production with comparisons to other species, we illustrate how broadly comparative research can contribute to a more nuanced understanding of the prevalence, evolution, and possible functions of rhythm in animal communication.

    Additional information

    supporting information
  • Hervais-Adelman, A., Davis, M. H., Johnsrude, I. S., & Carlyon, R. P. (2008). Perceptual learning of noise vocoded words: Effects of feedback and lexicality. Journal of Experimental Psychology: Human Perception and Performance, 34(2), 460-474. doi:10.1037/0096-1523.34.2.460.

    Abstract

    Speech comprehension is resistant to acoustic distortion in the input, reflecting listeners' ability to adjust perceptual processes to match the speech input. This adjustment is reflected in improved comprehension of distorted speech with experience. For noise vocoding, a manipulation that removes spectral detail from speech, listeners' word report showed a significantly greater improvement over trials for listeners that heard clear speech presentations before rather than after hearing distorted speech (clear-then-distorted compared with distorted-then-clear feedback, in Experiment 1). This perceptual learning generalized to untrained words suggesting a sublexical locus for learning and was equivalent for word and nonword training stimuli (Experiment 2). These findings point to the crucial involvement of phonological short-term memory and top-down processes in the perceptual learning of noise-vocoded speech. Similar processes may facilitate comprehension of speech in an unfamiliar accent or following cochlear implantation.
  • Hintz, F., McQueen, J. M., & Meyer, A. S. (2024). Using psychometric network analysis to examine the components of spoken word recognition. Journal of Cognition, 7(1): 10. doi:10.5334/joc.340.

    Abstract

    Using language requires access to domain-specific linguistic representations, but also draws on domain-general cognitive skills. A key issue in current psycholinguistics is to situate linguistic processing in the network of human cognitive abilities. Here, we focused on spoken word recognition and used an individual differences approach to examine the links of scores in word recognition tasks with scores on tasks capturing effects of linguistic experience, general processing speed, working memory, and non-verbal reasoning. 281 young native speakers of Dutch completed an extensive test battery assessing these cognitive skills. We used psychometric network analysis to map out the direct links between the scores, that is, the unique variance between pairs of scores, controlling for variance shared with the other scores. The analysis revealed direct links between word recognition skills and processing speed. We discuss the implications of these results and the potential of psychometric network analysis for studying language processing and its embedding in the broader cognitive system.

    Additional information

    network analysis of dataset A and B
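
    For readers unfamiliar with the method described in the abstract above, the sketch below illustrates the core idea of a partial-correlation (psychometric) network: each edge is the unique association between two task scores after controlling for all other scores. This is a minimal Python illustration with hypothetical data and assumed dimensions; it is not the authors' analysis pipeline.

    ```python
    import numpy as np

    def partial_correlation_network(scores):
        """Edge weights = association between two scores, controlling for all other scores.

        `scores` is a (participants x tasks) array; partial correlations are obtained
        from the standardized, sign-flipped inverse of the correlation matrix.
        """
        corr = np.corrcoef(scores, rowvar=False)      # zero-order correlations between tasks
        prec = np.linalg.pinv(corr)                   # precision (inverse correlation) matrix
        scale = np.sqrt(np.outer(np.diag(prec), np.diag(prec)))
        partial = -prec / scale                       # partial correlations
        np.fill_diagonal(partial, 0.0)                # no self-loops in the network
        return partial

    # Hypothetical example: 281 participants and 8 task scores (dimensions assumed).
    rng = np.random.default_rng(1)
    toy_scores = rng.normal(size=(281, 8))
    print(np.round(partial_correlation_network(toy_scores), 2))
    ```

    In practice, psychometric network analyses typically also apply regularization (for example, a graphical lasso) and bootstrapped stability checks, which this sketch omits.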
  • Hintz, F., & Meyer, A. S. (Eds.). (2024). Individual differences in language skills [Special Issue]. Journal of Cognition, 7(1).
  • Hintz, F., Shkaravska, O., Dijkhuis, M., Van 't Hoff, V., Huijsmans, M., Van Dongen, R. C., Voeteé, L. A., Trilsbeek, P., McQueen, J. M., & Meyer, A. S. (2024). IDLaS-NL – A platform for running customized studies on individual differences in Dutch language skills via the internet. Behavior Research Methods, 56(3), 2422-2436. doi:10.3758/s13428-023-02156-8.

    Abstract

    We introduce the Individual Differences in Language Skills (IDLaS-NL) web platform, which enables users to run studies on individual differences in Dutch language skills via the internet. IDLaS-NL consists of 35 behavioral tests, previously validated in participants aged between 18 and 30 years. The platform provides an intuitive graphical interface for users to select the tests they wish to include in their research, to divide these tests into different sessions, and to determine their order. Moreover, for standardized administration the platform provides an application (an emulated browser) wherein the tests are run. Results can be retrieved by mouse click in the graphical interface and are provided as CSV-file output via email. Similarly, the graphical interface enables researchers to modify and delete their study configurations. IDLaS-NL is intended for researchers, clinicians, educators and, in general, anyone conducting fundamental research into language and general cognitive skills; it is not intended for diagnostic purposes. All platform services are free of charge. Here, we provide a description of its workings as well as instructions for using the platform. The IDLaS-NL platform can be accessed at www.mpi.nl/idlas-nl.
  • Holler, J., & Beattie, G. (2003). How iconic gestures and speech interact in the representation of meaning: are both aspects really integral to the process? Semiotica, 146, 81-116.
  • Holler, J., & Beattie, G. (2003). Pragmatic aspects of representational gestures: Do speakers use them to clarify verbal ambiguity for the listener? Gesture, 3, 127-154.
  • Hope, T. M. H., Neville, D., Talozzi, L., Foulon, C., Forkel, S. J., Thiebaut de Schotten, M., & Price, C. J. (2024). Testing the disconnectome symptom discoverer model on out-of-sample post-stroke language outcomes. Brain, 147(2), e11-e13. doi:10.1093/brain/awad352.

    Abstract

    Stroke is common, and its consequent brain damage can cause various cognitive impairments. Associations between where and how much brain lesion damage a patient has suffered, and the particular impairments that injury has caused (lesion-symptom associations) offer potentially compelling insights into how the brain implements cognition.1 A better understanding of those associations can also fill a gap in current stroke medicine by helping us to predict how individual patients might recover from post-stroke impairments.2 Most recent work in this area employs machine learning models trained with data from stroke patients whose mid-to-long-term outcomes are known.2-4 These machine learning models are tested by predicting new outcomes—typically scores on standardized tests of post-stroke impairment—for patients whose data were not used to train the model. Traditionally, these validation results have been shared in peer-reviewed publications describing the model and its training. But recently, and for the first time in this field (as far as we know), one of these pre-trained models has been made public—the Disconnectome Symptom Discoverer model (DSD), which draws its predictors from structural disconnection information inferred from stroke patients’ brain MRI.5

    Here, we test the DSD model on wholly independent data, never seen by the model authors before they published it. Specifically, we test whether its predictive performance is just as accurate as (i.e. not significantly worse than) that reported in the original (Washington University) dataset, when predicting new patients’ outcomes at a similar time post-stroke (∼1 year post-stroke) and also in another independent sample tested later (5+ years) post-stroke. A failure to generalize the DSD model occurs if it performs significantly better in the Washington data than in our data from patients tested at a similar time point (∼1 year post-stroke). In addition, a significant decrease in predictive performance for the more chronic sample would be evidence that lesion-symptom associations differ at ∼1 year post-stroke and >5 years post-stroke.
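
    As a rough illustration of this kind of out-of-sample check (not the authors' exact procedure, which is described in the letter itself), one might compare the correlation between predicted and observed scores in a new sample against the correlation reported for the original sample, for example with a Fisher z test. All names and numbers below are placeholders.

    ```python
    import numpy as np
    from scipy import stats

    def fisher_z_compare(r1, n1, r2, n2):
        """Two-sided test of whether two independent Pearson correlations differ."""
        z1, z2 = np.arctanh(r1), np.arctanh(r2)            # Fisher r-to-z transform
        se = np.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))      # standard error of the difference
        z = (z1 - z2) / se
        p = 2 * stats.norm.sf(abs(z))
        return z, p

    # Placeholder numbers only: reported performance vs. performance in a new sample.
    z, p = fisher_z_compare(r1=0.60, n1=300, r2=0.52, n2=80)
    print(f"z = {z:.2f}, p = {p:.3f}")
    ```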
  • Houston, D. M., Jusczyk, P. W., Kuijpers, C., Coolen, R., & Cutler, A. (2000). Cross-language word segmentation by 9-month-olds. Psychonomic Bulletin & Review, 7, 504-509.

    Abstract

    Dutch-learning and English-learning 9-month-olds were tested, using the Headturn Preference Procedure, for their ability to segment Dutch words with strong/weak stress patterns from fluent Dutch speech. This prosodic pattern is highly typical for words of both languages. The infants were familiarized with pairs of words and then tested on four passages, two that included the familiarized words and two that did not. Both the Dutch- and the English-learning infants gave evidence of segmenting the targets from the passages, to an equivalent degree. Thus, English-learning infants are able to extract words from fluent speech in a language that is phonetically different from English. We discuss the possibility that this cross-language segmentation ability is aided by the similarity of the typical rhythmic structure of Dutch and English words.
  • De Hoyos, L., Barendse, M. T., Schlag, F., Van Donkelaar, M. M. J., Verhoef, E., Shapland, C. Y., Klassmann, A., Buitelaar, J., Verhulst, B., Fisher, S. E., Rai, D., & St Pourcain, B. (2024). Structural models of genome-wide covariance identify multiple common dimensions in autism. Nature Communications, 15: 1770. doi:10.1038/s41467-024-46128-8.

    Abstract

    Common genetic variation has been associated with multiple symptoms in Autism Spectrum Disorder (ASD). However, our knowledge of shared genetic factor structures contributing to this highly heterogeneous neurodevelopmental condition is limited. Here, we developed a structural equation modelling framework to directly model genome-wide covariance across core and non-core ASD phenotypes, studying autistic individuals of European descent using a case-only design. We identified three independent genetic factors most strongly linked to language/cognition, behaviour and motor development, respectively, when studying a population-representative sample (N=5,331). These analyses revealed novel associations. For example, developmental delay in acquiring personal-social skills was inversely related to language, while developmental motor delay was linked to self-injurious behaviour. We largely confirmed the three-factorial structure in independent ASD-simplex families (N=1,946), but uncovered simplex-specific genetic overlap between behaviour and language phenotypes. Thus, the common genetic architecture in ASD is multi-dimensional and contributes, in combination with ascertainment-specific patterns, to phenotypic heterogeneity.
  • Huettig, F., & Hartsuiker, R. J. (2008). When you name the pizza you look at the coin and the bread: Eye movements reveal semantic activation during word production. Memory & Cognition, 36(2), 341-360. doi:10.3758/MC.36.2.341.

    Abstract

    Two eyetracking experiments tested for activation of category coordinate and perceptually related concepts when speakers prepare the name of an object. Speakers saw four visual objects in a 2 × 2 array and identified and named a target picture on the basis of either category (e.g., "What is the name of the musical instrument?") or visual-form (e.g., "What is the name of the circular object?") instructions. There were more fixations on visual-form competitors and category coordinate competitors than on unrelated objects during name preparation, but the increased overt attention did not affect naming latencies. The data demonstrate that eye movements are a sensitive measure of the overlap between the conceptual (including visual-form) information that is accessed in preparation for word production and the conceptual knowledge associated with visual objects. Furthermore, these results suggest that semantic activation of competitor concepts does not necessarily affect lexical selection, contrary to the predictions of lexical-selection-by-competition accounts (e.g., Levelt, Roelofs, & Meyer, 1999).
  • Huettig, F., & Hulstijn, J. (2024). The Enhanced Literate Mind Hypothesis. Topics in Cognitive Science. Advance online publication. doi:10.1111/tops.12731.

    Abstract

    In the present paper we describe the Enhanced Literate Mind (ELM) hypothesis. As individuals learn to read and write, they are, from then on, exposed to extensive written-language input and become literate. We propose that acquisition and proficient processing of written language (‘literacy’) leads to, both, increased language knowledge as well as enhanced language and non-language (perceptual and cognitive) skills. We also suggest that all neurotypical native language users, including illiterate, low literate, and high literate individuals, share a Basic Language Cognition (BLC) in the domain of oral informal language. Finally, we discuss the possibility that the acquisition of ELM leads to some degree of ‘knowledge parallelism’ between BLC and ELM in literate language users, which has implications for empirical research on individual and situational differences in spoken language processing.
  • Hunley, K., Dunn, M., Lindström, E., Reesink, G., Terrill, A., Healy, M. E., Koki, G., Friedlaender, F. R., & Friedlaender, J. S. (2008). Genetic and linguistic coevolution in Northern Island Melanesia. PLoS Genetics, 4(10): e1000239. doi:10.1371/journal.pgen.1000239.

    Abstract

    Recent studies have detailed a remarkable degree of genetic and linguistic diversity in Northern Island Melanesia. Here we utilize that diversity to examine two models of genetic and linguistic coevolution. The first model predicts that genetic and linguistic correspondences formed following population splits and isolation at the time of early range expansions into the region. The second is analogous to the genetic model of isolation by distance, and it predicts that genetic and linguistic correspondences formed through continuing genetic and linguistic exchange between neighboring populations. We tested the predictions of the two models by comparing observed and simulated patterns of genetic variation, genetic and linguistic trees, and matrices of genetic, linguistic, and geographic distances. The data consist of 751 autosomal microsatellites and 108 structural linguistic features collected from 33 Northern Island Melanesian populations. The results of the tests indicate that linguistic and genetic exchange have erased any evidence of a splitting and isolation process that might have occurred early in the settlement history of the region. The correlation patterns are also inconsistent with the predictions of the isolation by distance coevolutionary process in the larger Northern Island Melanesian region, but there is strong evidence for the process in the rugged interior of the largest island in the region (New Britain). There we found some of the strongest recorded correlations between genetic, linguistic, and geographic distances. We also found that, throughout the region, linguistic features have generally been less likely to diffuse across population boundaries than genes. The results from our study, based on exceptionally fine-grained data, show that local genetic and linguistic exchange are likely to obscure evidence of the early history of a region, and that language barriers do not particularly hinder genetic exchange. In contrast, global patterns may emphasize more ancient demographic events, including population splits associated with the early colonization of major world regions.
  • Indefrey, P., & Gullberg, M. (Eds.). (2008). Time to speak: Cognitive and neural prerequisites for time in language [Special Issue]. Language Learning, 58(suppl. 1).

    Abstract

    Time is a fundamental aspect of human cognition and action. All languages have developed rich means to express various facets of time, such as bare time spans, their position on the time line, or their duration. The articles in this volume give an overview of what we know about the neural and cognitive representations of time that speakers can draw on in language. Starting with an overview of the main devices used to encode time in natural language, such as lexical elements, tense and aspect, the research presented in this volume addresses the relationship between temporal language, culture, and thought, the relationship between verb aspect and mental simulations of events, the development of temporal concepts, time perception, the storage and retrieval of temporal information in autobiographical memory, and neural correlates of tense processing and sequence planning. The psychological and neurobiological findings presented here will provide important insights to inform and extend current studies of time in language and in language acquisition.
  • Indefrey, P., Kleinschmidt, A., Merboldt, K.-D., Krüger, G., Brown, C. M., Hagoort, P., & Frahm, J. (1997). Equivalent responses to lexical and nonlexical visual stimuli in occipital cortex: a functional magnetic resonance imaging study. Neuroimage, 5, 78-81. doi:10.1006/nimg.1996.0232.

    Abstract

    Stimulus-related changes in cerebral blood oxygenation were measured using high-resolution functional magnetic resonance imaging sequentially covering visual occipital areas in contiguous sections. During dynamic imaging, healthy subjects silently viewed pseudowords, single false fonts, or length-matched strings of the same false fonts. The paradigm consisted of a sixfold alternation of an activation and a control task. With pseudowords as activation vs single false fonts as control, responses were seen mainly in medial occipital cortex. These responses disappeared when pseudowords were alternated with false font strings as the control and reappeared when false font strings instead of pseudowords served as activation and were alternated with single false fonts. The string-length contrast alone, therefore, is sufficient to account for the activation pattern observed in medial visual cortex when word-like stimuli are contrasted with single characters.
  • Isaac, A., Matthezing, H., Van der Meij, L., Schlobach, S., Wang, S., & Zinn, C. (2008). Putting ontology alignment in context: Usage, scenarios, deployment and evaluation in a library case. In S. Bechhofer, M. Hauswirth, J. Hoffmann, & M. Koubarakis (Eds.), The semantic web: Research and applications (pp. 402-417). Berlin: Springer.

    Abstract

    Thesaurus alignment plays an important role in realising efficient access to heterogeneous Cultural Heritage data. Current ontology alignment techniques, however, provide only limited value for such access as they consider little if any requirements from realistic use cases or application scenarios. In this paper, we focus on two real-world scenarios in a library context: thesaurus merging and book re-indexing. We identify their particular requirements and describe our approach of deploying and evaluating thesaurus alignment techniques in this context. We have applied our approach for the Ontology Alignment Evaluation Initiative, and report on the performance evaluation of participants’ tools with respect to the application scenario at hand. It shows that the evaluation of tools requires significant effort but, when done carefully, brings many benefits.
  • Isaac, A., Schlobach, S., Matthezing, H., & Zinn, C. (2008). Integrated access to cultural heritage resources through representation and alignment of controlled vocabularies. Library Review, 57(3), 187-199.
  • Jadoul, Y., De Boer, B., & Ravignani, A. (2024). Parselmouth for bioacoustics: Automated acoustic analysis in Python. Bioacoustics, 33(1), 1-19. doi:10.1080/09524622.2023.2259327.

    Abstract

    Bioacoustics increasingly relies on large datasets and computational methods. The need to batch-process large amounts of data and the increased focus on algorithmic processing require software tools. To optimally assist in a bioacoustician’s workflow, software tools need to be as simple and effective as possible. Five years ago, the Python package Parselmouth was released to provide easy and intuitive access to all functionality in the Praat software. Whereas Praat is principally designed for phonetics and speech processing, plenty of bioacoustics studies have used its advanced acoustic algorithms. Here, we evaluate existing usage of Parselmouth and discuss in detail several studies which used the software library. We argue that Parselmouth has the potential to be used even more in bioacoustics research, and suggest future directions to be pursued with the help of Parselmouth.
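
    As an illustration of the kind of batch processing the abstract describes, here is a minimal Parselmouth sketch that loops over a folder of recordings and reports a mean fundamental frequency per file. The folder name and the choice of pitch as the measured parameter are assumptions for illustration, not taken from the paper.

    ```python
    from pathlib import Path

    import parselmouth  # Python interface to Praat's acoustic analyses

    # Hypothetical input folder; point this at your own recordings.
    for wav_path in sorted(Path("recordings").glob("*.wav")):
        sound = parselmouth.Sound(str(wav_path))        # load the audio file via Praat
        pitch = sound.to_pitch()                        # run Praat's pitch (F0) analysis
        f0 = pitch.selected_array["frequency"]          # F0 contour in Hz; 0 marks unvoiced frames
        voiced = f0[f0 > 0]
        if voiced.size > 0:
            print(f"{wav_path.name}: mean F0 = {voiced.mean():.1f} Hz")
    ```

    The same loop structure extends to other Praat analyses exposed by Parselmouth (intensity, formants, spectrograms), which is what makes it convenient for batch bioacoustic workflows.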
  • Janse, E., Sennema, A., & Slis, A. (2000). Fast speech timing in Dutch: The durational correlates of lexical stress and pitch accent. In Proceedings of the VIth International Conference on Spoken Language Processing, Vol. III (pp. 251-254).

    Abstract

    In this study we investigated the durational correlates of lexical stress and pitch accent at normal and fast speech rate in Dutch. Previous literature on English shows that durations of lexically unstressed vowels are reduced more than those of stressed vowels when speakers increase their speech rate. We found that the same holds for Dutch, irrespective of whether the unstressed vowel is schwa or a "full" vowel. Along the same lines, we expected that vowels in words without a pitch accent would be shortened relatively more than vowels in words with a pitch accent. This was not the case: if anything, the accented vowels were shortened relatively more than the unaccented vowels. We conclude that duration is an important cue for lexical stress, but not for pitch accent.
  • Janse, E. (2000). Intelligibility of time-compressed speech: Three ways of time-compression. In Proceedings of the VIth International Conference on Spoken Language Processing, vol. III (pp. 786-789).

    Abstract

    Studies on fast speech have shown that word-level timing of fast speech differs from that of normal rate speech in that unstressed syllables are shortened more than stressed syllables as speech rate increases. An earlier experiment showed that the intelligibility of time-compressed speech could not be improved by making its temporal organisation closer to natural fast speech. To test the hypothesis that segmental intelligibility is more important than prosodic timing in listening to time-compressed speech, the intelligibility of bisyllabic words was tested in three time-compression conditions: either stressed and unstressed syllables were compressed to the same degree, or the stressed syllable was compressed more than the unstressed syllable, or the reverse. As was found before, imitating word-level timing of fast speech did not improve intelligibility over linear compression. However, the results did not confirm the hypothesis either: there was no difference in intelligibility between the three compression conditions. We conclude that segmental intelligibility plays an important role, but further research is necessary to decide between the contributions of prosody and segmental intelligibility to the word-level intelligibility of time-compressed speech.
  • Janse, E. (2008). Spoken-word processing in aphasia: Effects of item overlap and item repetition. Brain and Language, 105, 185-198. doi:10.1016/j.bandl.2007.10.002.

    Abstract

    Two studies were carried out to investigate the effects of presentation of primes showing partial (word-initial) or full overlap on processing of spoken target words. The first study investigated whether time compression would interfere with lexical processing so as to elicit aphasic-like performance in non-brain-damaged subjects. The second study was designed to compare effects of item overlap and item repetition in aphasic patients of different diagnostic types. Time compression did not interfere with lexical deactivation for the non-brain-damaged subjects. Furthermore, all aphasic patients showed immediate inhibition of co-activated candidates. These combined results show that deactivation is a fast process. Repetition effects, however, seem to arise only at the longer term in aphasic patients. Importantly, poor performance on diagnostic verbal STM tasks was shown to be related to lexical decision performance in both overlap and repetition conditions, which suggests a common underlying deficit.
  • Janse, E. (2003). Word perception in natural-fast and artificially time-compressed speech. In M. SolÉ, D. Recasens, & J. Romero (Eds.), Proceedings of the 15th International Congress of the Phonetic Sciences (pp. 3001-3004).
  • Janse, E., Nooteboom, S. G., & Quené, H. (2003). Word-level intelligibility of time-compressed speech: Prosodic and segmental factors. Speech Communication, 41, 287-301. doi:10.1016/S0167-6393(02)00130-9.

    Abstract

    In this study we investigate whether speakers, in line with the predictions of the Hyper- and Hypospeech theory, speed up most during the least informative parts and less during the more informative parts, when they are asked to speak faster. We expected listeners to benefit from these changes in timing, and our main goal was to find out whether making the temporal organisation of artificially time-compressed speech more like that of natural fast speech would improve intelligibility over linear time compression. Our production study showed that speakers reduce unstressed syllables more than stressed syllables, thereby making the prosodic pattern more pronounced. We extrapolated fast speech timing to even faster rates because we expected that the more salient prosodic pattern could be exploited in difficult listening situations. However, at very fast speech rates, applying fast speech timing worsens intelligibility. We argue that the non-uniform way of speeding up may not be due to an underlying communicative principle, but may result from speakers’ inability to speed up otherwise. As both prosodic and segmental information contribute to word recognition, we conclude that extrapolating fast speech timing to extremely fast rates distorts this balance between prosodic and segmental information.
  • Janzen, G., Jansen, C., & Van Turennout, M. (2008). Memory consolidation of landmarks in good navigators. Hippocampus, 18, 40-47.

    Abstract

    Landmarks play an important role in successful navigation. To successfully find your way around an environment, navigationally relevant information needs to be stored and become available at later moments in time. Evidence from functional magnetic resonance imaging (fMRI) studies shows that the human parahippocampal gyrus encodes the navigational relevance of landmarks. In the present event-related fMRI experiment, we investigated memory consolidation of navigationally relevant landmarks in the medial temporal lobe after route learning. Sixteen right-handed volunteers viewed two film sequences through a virtual museum with objects placed at locations relevant (decision points) or irrelevant (nondecision points) for navigation. To investigate consolidation effects, one film sequence was seen in the evening before scanning, the other one was seen the following morning, directly before scanning. Event-related fMRI data were acquired during an object recognition task. Participants decided whether they had seen the objects in the previously shown films. After scanning, participants answered standardized questions about their navigational skills, and were divided into groups of good and bad navigators, based on their scores. An effect of memory consolidation was obtained in the hippocampus: Objects that were seen the evening before scanning (remote objects) elicited more activity than objects seen directly before scanning (recent objects). This increase in activity in bilateral hippocampus for remote objects was observed in good navigators only. In addition, a spatial-specific effect of memory consolidation for navigationally relevant objects was observed in the parahippocampal gyrus. Remote decision point objects induced increased activity as compared with recent decision point objects, again in good navigators only. The results provide initial evidence for a connection between memory consolidation and navigational ability that can provide a basis for successful navigation.
  • Jescheniak, J. D., Levelt, W. J. M., & Meyer, A. S. (2003). Specific word frequency is not all that counts in speech production: Comments on Caramazza, Costa, et al. (2001) and new experimental data. Journal of Experimental Psychology: Learning, Memory, & Cognition, 29(3), 432-438. doi:10.1037/0278-7393.29.3.432.

    Abstract

    A. Caramazza, A. Costa, M. Miozzo, and Y. Bi (2001) reported a series of experiments demonstrating that the ease of producing a word depends only on the frequency of that specific word but not on the frequency of a homophone twin. A. Caramazza, A. Costa, et al. concluded that homophones have separate word form representations and that the absence of frequency-inheritance effects for homophones undermines an important argument in support of 2-stage models of lexical access, which assume that syntactic (lemma) representations mediate between conceptual and phonological representations. The authors of this article evaluate the empirical basis of this conclusion, report 2 experiments demonstrating a frequency-inheritance effect, and discuss other recent evidence. It is concluded that homophones share a common word form and that the distinction between lemmas and word forms should be upheld.
  • Jesse, A., & Johnson, E. K. (2008). Audiovisual alignment in child-directed speech facilitates word learning. In Proceedings of the International Conference on Auditory-Visual Speech Processing (pp. 101-106). Adelaide, Aust: Causal Productions.

    Abstract

    Adult-to-child interactions are often characterized by prosodically-exaggerated speech accompanied by visually captivating co-speech gestures. In a series of adult studies, we have shown that these gestures are linked in a sophisticated manner to the prosodic structure of adults' utterances. In the current study, we use the Preferential Looking Paradigm to demonstrate that two-year-olds can use the alignment of these gestures to speech to deduce the meaning of words.
  • Jesse, A., Vrignaud, N., Cohen, M. M., & Massaro, D. W. (2000). The processing of information from multiple sources in simultaneous interpreting. Interpreting, 5(2), 95-115. doi:10.1075/intp.5.2.04jes.

    Abstract

    Language processing is influenced by multiple sources of information. We examined whether the performance in simultaneous interpreting would be improved when providing two sources of information, the auditory speech as well as corresponding lip-movements, in comparison to presenting the auditory speech alone. Although there was an improvement in sentence recognition when presented with visible speech, there was no difference in performance between these two presentation conditions when bilinguals simultaneously interpreted from English to German or from English to Spanish. The reason why visual speech did not contribute to performance could be the presentation of the auditory signal without noise (Massaro, 1998). This hypothesis should be tested in the future. Furthermore, it should be investigated if an effect of visible speech can be found for other contexts, when visual information could provide cues for emotions, prosody, or syntax.
  • Johnson, E. K. (2003). Speaker intent influences infants' segmentation of potentially ambiguous utterances. In Proceedings of the 15th International Congress of Phonetic Sciences (PCPhS 2003) (pp. 1995-1998). Adelaide: Causal Productions.
  • Johnson, E. K., & Seidl, A. (2008). Clause segmentation by 6-month-olds: A crosslinguistic perspective. Infancy, 13, 440-455. doi:10.1080/15250000802329321.

    Abstract

    Each clause and phrase boundary necessarily aligns with a word boundary. Thus, infants’ attention to the edges of clauses and phrases may help them learn some of the language-specific cues defining word boundaries. Attention to prosodically wellformed clauses and phrases may also help infants begin to extract information important for learning the grammatical structure of their language. Despite the potentially important role that the perception of large prosodic units may play in early language acquisition, there has been little work investigating the extraction of these units from fluent speech by infants learning languages other than English. We report 2 experiments investigating Dutch learners’ clause segmentation abilities. In these studies, Dutch-learning 6-month-olds readily extract clauses from speech. However, Dutch learners differ from English learners in that they seem to be more reliant on pauses to detect clause boundaries. Two closely related explanations for this finding are considered, both of which stem from the acoustic differences in clause boundary realizations in Dutch versus English.
  • Johnson, E. K., Jusczyk, P. W., Cutler, A., & Norris, D. (2003). Lexical viability constraints on speech segmentation by infants. Cognitive Psychology, 46(1), 65-97. doi:10.1016/S0010-0285(02)00507-8.

    Abstract

    The Possible Word Constraint limits the number of lexical candidates considered in speech recognition by stipulating that input should be parsed into a string of lexically viable chunks. For instance, an isolated single consonant is not a feasible word candidate. Any segmentation containing such a chunk is disfavored. Five experiments using the head-turn preference procedure investigated whether, like adults, 12-month-olds observe this constraint in word recognition. In Experiments 1 and 2, infants were familiarized with target words (e.g., rush), then tested on lists of nonsense items containing these words in “possible” (e.g., “niprush” [nip + rush]) or “impossible” positions (e.g., “prush” [p + rush]). The infants listened significantly longer to targets in “possible” versus “impossible” contexts when targets occurred at the end of nonsense items (rush in “prush”), but not when they occurred at the beginning (tan in “tance”). In Experiments 3 and 4, 12-month-olds were similarly familiarized with target words, but test items were real words in sentential contexts (win in “wind” versus “window”). The infants listened significantly longer to words in the “possible” condition regardless of target location. Experiment 5 with targets at the beginning of isolated real words (e.g., win in “wind”) replicated Experiment 2 in showing no evidence of viability effects in beginning position. Taken together, the findings suggest that, in situations in which 12-month-olds are required to rely on their word segmentation abilities, they give evidence of observing lexical viability constraints in the way that they parse fluent speech.
  • Johnson, E. K., Jusczyk, P. W., Cutler, A., & Norris, D. (2000). The development of word recognition: The use of the possible-word constraint by 12-month-olds. In L. Gleitman, & A. Joshi (Eds.), Proceedings of CogSci 2000 (pp. 1034). London: Erlbaum.
  • Jordens, P. (1997). Introducing the basic variety. Second Language Research, 13(4), 289-300. doi:10.1191%2F026765897672176425.
  • Kakimoto, N., Wongratwanich, P., Shimamoto, H., Kitisubkanchana, J., Tsujimoto, T., Shimabukuro, K., Verdonschot, R. G., Hasegawa, Y., & Murakami, S. (2024). Comparison of T2 values of the displaced unilateral disc and retrodiscal tissue of temporomandibular joints and their implications. Scientific Reports, 14: 1705. doi:10.1038/s41598-024-52092-6.

    Abstract

    Unilateral anterior disc displacement (uADD) has been shown to affect the contralateral joints qualitatively. This study aims to assess the quantitative T2 values of the articular disc and retrodiscal tissue of patients with uADD at 1.5 Tesla (T). The study included 65 uADD patients and 17 volunteers. The regions of interest on T2 maps were evaluated. The affected joints demonstrated significantly higher articular disc T2 values (31.5 ± 3.8 ms) than those of the unaffected joints (28.9 ± 4.5 ms) (P < 0.001). For retrodiscal tissue, T2 values of the unaffected (37.8 ± 5.8 ms) and affected joints (41.6 ± 7.1 ms) were significantly longer than those of normal volunteers (34.4 ± 3.2 ms) (P < 0.001). Furthermore, uADD without reduction (WOR) joints (43.3 ± 6.8 ms) showed statistically higher T2 values than the unaffected joints of both uADD with reduction (WR) (33.9 ± 3.8 ms) and uADDWOR (38.9 ± 5.8 ms), and the affected joints of uADDWR (35.8 ± 4.4 ms). The mean T2 value of the unaffected joints of uADDWOR was significantly longer than that of healthy volunteers (P < 0.001). These results provided quantitative evidence for the influence of the affected joints on the contralateral joints.
  • Karaca, F., Brouwer, S., Unsworth, S., & Huettig, F. (2024). Morphosyntactic predictive processing in adult heritage speakers: Effects of cue availability and spoken and written language experience. Language, Cognition and Neuroscience, 39(1), 118-135. doi:10.1080/23273798.2023.2254424.

    Abstract

    We investigated prediction skills of adult heritage speakers and the role of written and spoken language experience on predictive processing. Using visual world eye-tracking, we focused on predictive use of case-marking cues in verb-medial and verb-final sentences in Turkish with adult Turkish heritage speakers (N = 25) and Turkish monolingual speakers (N = 24). Heritage speakers predicted in verb-medial sentences (when verb-semantic and case-marking cues were available), but not in verb-final sentences (when only case-marking cues were available) while monolinguals predicted in both. Prediction skills of heritage speakers were modulated by their spoken language experience in Turkish and written language experience in both languages. Overall, these results strongly suggest that verb-semantic information is needed to scaffold the use of morphosyntactic cues for prediction in heritage speakers. The findings also support the notion that both spoken and written language experience play an important role in predictive spoken language processing.
  • Karadöller, D. Z., Peeters, D., Manhardt, F., Özyürek, A., & Ortega, G. (2024). Iconicity and gesture jointly facilitate learning of second language signs at first exposure in hearing non-signers. Language Learning. Advance online publication. doi:10.1111/lang.12636.

    Abstract

    When learning a spoken second language (L2), words overlapping in form and meaning with one’s native language (L1) help break into the new language. When non-signing speakers learn a sign language as L2, such forms are absent because of the modality differences (L1: speech, L2: sign). In such cases, non-signing speakers might use iconic form-meaning mappings in signs or their own gestural experience as gateways into the to-be-acquired sign language. Here, we investigated how both these factors may contribute jointly to the acquisition of sign language vocabulary by hearing non-signers. Participants were presented with three types of sign in NGT (Sign Language of the Netherlands): arbitrary signs, iconic signs with high gesture overlap, and iconic signs with low gesture overlap. Signs that were both iconic and highly overlapping with gestures boosted learning most at first exposure, and this effect remained the day after. Findings highlight the influence of modality-specific factors supporting the acquisition of a signed lexicon.
  • Karsan, Ç., Ocak, F., & Bulut, T. (2024). Characterization of speech and language phenotype in the 8p23.1 syndrome. European Child & Adolescent Psychiatry. Advance online publication. doi:10.1007/s00787-024-02448-0.

    Abstract

    The 8p23.1 duplication syndrome is a rare genetic condition with an estimated prevalence rate of 1 out of 58,000. Although the syndrome was associated with speech and language delays, a comprehensive assessment of speech and language functions has not been undertaken in this population. To address this issue, the present study reports rigorous speech and language, in addition to oral-facial and developmental, assessment of a 50-month-old Turkish-speaking boy who was diagnosed with the 8p23.1 duplication syndrome. Standardized tests of development, articulation and phonology, receptive and expressive language and a language sample analysis were administered to characterize speech and language skills in the patient. The language sample was obtained in an ecologically valid, free play and conversation context. The language sample was then analyzed and compared to a database of age-matched typically-developing children (n = 33) in terms of intelligibility, morphosyntax, semantics/vocabulary, discourse, verbal facility and percentage of errors at word and utterance levels. The results revealed mild to severe problems in articulation and phonology, receptive and expressive language skills, and morphosyntax (mean length of utterance in morphemes). Future research with larger sample sizes and employing detailed speech and language assessment is needed to delineate the speech and language profile in individuals with the 8p23.1 duplication syndrome, which will guide targeted speech and language interventions.
  • Kempen, G., & Harbusch, K. (2003). A corpus study into word order variation in German subordinate clauses: Animacy affects linearization independently of function assignment. In Proceedings of AMLaP 2003 (pp. 153-154). Glasgow: Glasgow University.
  • Kempen, G. (1991). Conjunction reduction and gapping in clause-level coordination: An inheritance-based approach. Computational Intelligence, 7, 357-360. doi:10.1111/j.1467-8640.1991.tb00406.x.
  • Kempen, G. (1988). De netwerker: Spin in het web of rat in een doolhof? In SURF in theorie en praktijk: Van personal tot supercomputer (pp. 59-61). Amsterdam: Elsevier Science Publishers.
  • Kempen, G. (1997). De ontdubbelde taalgebruiker: Maken taalproductie en taalperceptie gebruik van één en dezelfde syntactische processor? [Abstract]. In 6e Winter Congres NvP. Programma and abstracts (pp. 31-32). Nederlandse Vereniging voor Psychonomie.
  • Kempen, G., Kooij, A., & Van Leeuwen, T. (1997). Do skilled readers exploit inflectional spelling cues that do not mirror pronunciation? An eye movement study of morpho-syntactic parsing in Dutch. In Abstracts of the Orthography Workshop "What spelling changes". Nijmegen: Max Planck Institute for Psycholinguistics.
  • Kempen, G. (2000). Could grammatical encoding and grammatical decoding be subserved by the same processing module? Behavioral and Brain Sciences, 23, 38-39.
  • Kempen, G., & Harbusch, K. (2003). An artificial opposition between grammaticality and frequency: Comment on Bornkessel, Schlesewsky & Friederici (2002). Cognition, 90(2), 205-210 [Rectification on p. 215]. doi:10.1016/S0010-0277(03)00145-8.

    Abstract

    In a recent Cognition paper (Cognition 85 (2002) B21), Bornkessel, Schlesewsky, and Friederici report ERP data that they claim “show that online processing difficulties induced by word order variations in German cannot be attributed to the relative infrequency of the constructions in question, but rather appear to reflect the application of grammatical principles during parsing” (p. B21). In this commentary we demonstrate that the posited contrast between grammatical principles and construction (in)frequency as sources of parsing problems is artificial because it is based on factually incorrect assumptions about the grammar of German and on inaccurate corpus frequency data concerning the German constructions involved.
  • Kempen, G., & Kolk, H. (1986). Het voortbrengen van normale en agrammatische taal. Van Horen Zeggen, 27(2), 36-40.
  • Kempen, G. (1986). RIKS: Kennistechnologisch centrum voor bedrijfsleven en wetenschap. Informatie, 28, 122-125.
  • Kempen, G. (1988). Preface. Acta Psychologica, 69(3), 205-206. doi:10.1016/0001-6918(88)90032-7.
  • Kempen, G. (1997). Van taalbarrières naar linguïstische snelwegen: Inrichting van een technische taalinfrastructuur voor het Nederlands. Grenzen aan veeltaligheid: Taalgebruik en bestuurlijke doeltreffendheid in de instellingen van de Europese Unie, 43-48.
  • Kemps-Snijders, M., Klassmann, A., Zinn, C., Berck, P., Russel, A., & Wittenburg, P. (2008). Exploring and enriching a language resource archive via the web. In Proceedings of the 6th International Conference on Language Resources and Evaluation (LREC 2008).

    Abstract

    The “download first, then process” paradigm is still the predominant working method amongst the research community. The web-based paradigm, however, offers many advantages from a tool development and data management perspective, as it allows a quick adaptation to changing research environments. Moreover, new ways of combining tools and data are increasingly becoming available and will eventually enable a true web-based workflow approach, thus challenging the “download first, then process” paradigm. The necessary infrastructure for managing, exploring and enriching language resources via the Web will need to be delivered by projects like CLARIN and DARIAH.
