Publications

  • Guo, Y., Martin, R. C., Hamilton, C., Van Dyke, J., & Tan, Y. (2010). Neural basis of semantic and syntactic interference resolution in sentence comprehension. Procedia - Social and Behavioral Sciences, 6, 88-89. doi:10.1016/j.sbspro.2010.08.045.
  • Guzmán Chacón, E., Ovando-Tellez, M., Thiebaut de Schotten, M., & Forkel, S. J. (2024). Embracing digital innovation in neuroscience: 2023 in review at NEUROCCINO. Brain Structure & Function, 229, 251-255. doi:10.1007/s00429-024-02768-6.
  • Haghani, A., Li, C. Z., Robeck, T. R., Zhang, J., Lu, A. T., Ablaeva, J., Acosta-Rodríguez, V. A., Adams, D. M., Alagaili, A. N., Almunia, J., Aloysius, A., Amor, N. M. S., Ardehali, R., Arneson, A., Baker, C. S., Banks, G., Belov, K., Bennett, N. C., Black, P., Blumstein, D. T., Bors, E. K., Breeze, C. E., Brooke, R. T., Brown, J. L., Carter, G., Caulton, A., Cavin, J. M., Chakrabarti, L., Chatzistamou, I., Chavez, A. S., Chen, H., Cheng, K., Chiavellini, P., Choi, O.-W., Clarke, S., Cook, J. A., Cooper, L. N., Cossette, M.-L., Day, J., DeYoung, J., Dirocco, S., Dold, C., Dunnum, J. L., Ehmke, E. E., Emmons, C. K., Emmrich, S., Erbay, E., Erlacher-Reid, C., Faulkes, C. G., Fei, Z., Ferguson, S. H., Finno, C. J., Flower, J. E., Gaillard, J.-M., Garde, E., Gerber, L., Gladyshev, V. N., Goya, R. G., Grant, M. J., Green, C. B., Hanson, M. B., Hart, D. W., Haulena, M., Herrick, K., Hogan, A. N., Hogg, C. J., Hore, T. A., Huang, T., Izpisua Belmonte, J. C., Jasinska, A. J., Jones, G., Jourdain, E., Kashpur, O., Katcher, H., Katsumata, E., Kaza, V., Kiaris, H., Kobor, M. S., Kordowitzki, P., Koski, W. R., Krützen, M., Kwon, S. B., Larison, B., Lee, S.-G., Lehmann, M., Lemaître, J.-F., Levine, A. J., Li, X., Li, C., Lim, A. R., Lin, D. T. S., Lindemann, D. M., Liphardt, S. W., Little, T. J., Macoretta, N., Maddox, D., Matkin, C. O., Mattison, J. A., McClure, M., Mergl, J., Meudt, J. J., Montano, G. A., Mozhui, K., Munshi-South, J., Murphy, W. J., Naderi, A., Nagy, M., Narayan, P., Nathanielsz, P. W., Nguyen, N. B., Niehrs, C., Nyamsuren, B., O’Brien, J. K., Ginn, P. O., Odom, D. T., Ophir, A. G., Osborn, S., Ostrander, E. A., Parsons, K. M., Paul, K. C., Pedersen, A. B., Pellegrini, M., Peters, K. J., Petersen, J. L., Pietersen, D. W., Pinho, G. M., Plassais, J., Poganik, J. R., Prado, N. A., Reddy, P., Rey, B., Ritz, B. R., Robbins, J., Rodriguez, M., Russell, J., Rydkina, E., Sailer, L. L., Salmon, A. B., Sanghavi, A., Schachtschneider, K. M., Schmitt, D., Schmitt, T., Schomacher, L., Schook, L. B., Sears, K. E., Seifert, A. W., Shafer, A. B. A., Shindyapina, A. V., Simmons, M., Singh, K., Sinha, I., Slone, J., Snell, R. G., Soltanmohammadi, E., Spangler, M. L., Spriggs, M., Staggs, L., Stedman, N., Steinman, K. J., Stewart, D. T., Sugrue, V. J., Szladovits, B., Takahashi, J. S., Takasugi, M., Teeling, E. C., Thompson, M. J., Van Bonn, B., Vernes, S. C., Villar, D., Vinters, H. V., Vu, H., Wallingford, M. C., Wang, N., Wilkinson, G. S., Williams, R. W., Yan, Q., Yao, M., Young, B. G., Zhang, B., Zhang, Z., Zhao, Y., Zhao, P., Zhou, W., Zoller, J. A., Ernst, J., Seluanov, A., Gorbunova, V., Yang, X. W., Raj, K., & Horvath, S. (2023). DNA methylation networks underlying mammalian traits. Science, 381(6658): eabq5693. doi:10.1126/science.abq5693.

    Abstract

    INTRODUCTION
    Comparative epigenomics is an emerging field that combines epigenetic signatures with phylogenetic relationships to elucidate species characteristics such as maximum life span. For this study, we generated cytosine DNA methylation (DNAm) profiles (n = 15,456) from 348 mammalian species using a methylation array platform that targets highly conserved cytosines.
    RATIONALE
    Nature has evolved mammalian species of greatly differing life spans. To resolve the relationship of DNAm with maximum life span and phylogeny, we performed a large-scale cross-species unsupervised analysis. Comparative studies in many species enable the identification of epigenetic correlates of maximum life span and other traits.
    RESULTS
    We first tested whether DNAm levels in highly conserved cytosines captured phylogenetic relationships among species. We constructed phyloepigenetic trees that paralleled the traditional phylogeny. To avoid potential confounding by different tissue types, we generated tissue-specific phyloepigenetic trees. The high phyloepigenetic-phylogenetic congruence is due to differences in methylation levels and is not confounded by sequence conservation.
    We then interrogated the extent to which DNA methylation associates with specific biological traits. We used an unsupervised weighted correlation network analysis (WGCNA) to identify clusters of highly correlated CpGs (comethylation modules). WGCNA identified 55 distinct comethylation modules, of which 30 were significantly associated with traits including maximum life span, adult weight, age, sex, human mortality risk, or perturbations that modulate murine life span.
    Both the epigenome-wide association analysis (EWAS) and eigengene-based analysis identified methylation signatures of maximum life span, and most of these were independent of aging, presumably set at birth, and could be stable predictors of life span at any point in life. Several CpGs that are more highly methylated in long-lived species are located near HOXL subclass homeoboxes and other genes that play a role in morphogenesis and development. Some of these life span–related CpGs are located next to genes that are also implicated in our analysis of upstream regulators (e.g., ASCL1 and SMAD6). CpGs with methylation levels that are inversely related to life span are enriched in transcriptional start site (TSS1) and promoter flanking (PromF4, PromF5) associated chromatin states. Genes located in chromatin state TSS1 are constitutively active and enriched for nucleic acid metabolic processes. This suggests that long-living species evolved mechanisms that maintain low methylation levels in these chromatin states that would favor higher expression levels of genes essential for an organism’s survival.
    The upstream regulator analysis of the EWAS of life span identified the pluripotency transcription factors OCT4, SOX2, and NANOG. Other factors, such as POLII, CTCF, RAD21, YY1, and TAF1, showed the strongest enrichment for negatively life span–related CpGs.
    CONCLUSION
    The phyloepigenetic trees indicate that divergence of DNA methylation profiles closely parallels that of genetics through evolution. Our results demonstrate that DNA methylation is subjected to evolutionary pressures and selection. The publicly available data from our Mammalian Methylation Consortium are a rich source of information for different fields such as evolutionary biology, developmental biology, and aging.
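
    As a concrete illustration of what a comethylation module is, the Python sketch below clusters CpG sites by the similarity of their methylation profiles and summarizes one cluster with a first-principal-component "eigengene". It is a simplified stand-in on invented data; the consortium's actual WGCNA analysis uses soft-thresholded correlations and topological overlap, which are not reproduced here.

        import numpy as np
        from scipy.cluster.hierarchy import linkage, fcluster

        # Invented methylation matrix: rows = samples, columns = CpG sites.
        rng = np.random.default_rng(0)
        n_samples, n_cpgs = 100, 40
        hidden = rng.normal(size=(n_samples, 4))  # four latent "module" signals
        meth = hidden @ rng.normal(size=(4, n_cpgs)) + 0.5 * rng.normal(size=(n_samples, n_cpgs))

        # Correlation-based distance between CpGs: strongly comethylated sites end up close.
        corr = np.corrcoef(meth, rowvar=False)
        dist = 1.0 - np.abs(corr)

        # Hierarchical clustering; cutting the tree yields candidate "comethylation modules".
        tree = linkage(dist[np.triu_indices(n_cpgs, k=1)], method="average")
        modules = fcluster(tree, t=0.6, criterion="distance")
        for m in np.unique(modules):
            print(f"module {m}: {(modules == m).sum()} CpGs")

        # A module "eigengene" is often the first principal component of its CpGs.
        sub = meth[:, modules == 1]
        eigengene = np.linalg.svd(sub - sub.mean(axis=0), full_matrices=False)[0][:, 0]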
  • Hagoort, P. (2005). On Broca, brain, and binding: A new framework. Trends in Cognitive Sciences, 9(9), 416-423. doi:10.1016/j.tics.2005.07.004.

    Abstract

    In speaking and comprehending language, word information is retrieved from memory and combined into larger units (unification). Unification operations take place in parallel at the semantic, syntactic and phonological levels of processing. This article proposes a new framework that connects psycholinguistic models to a neurobiological account of language. According to this proposal the left inferior frontal gyrus (LIFG) plays an important role in unification. Research in other domains of cognition indicates that left prefrontal cortex has the necessary neurobiological characteristics for its involvement in the unification for language. I offer here a psycholinguistic perspective on the nature of language unification and the role of LIFG.
  • Hagoort, P. (2023). The language marker hypothesis. Cognition, 230: 105252. doi:10.1016/j.cognition.2022.105252.

    Abstract

    According to the language marker hypothesis, language has provided Homo sapiens with a rich symbolic system that plays a central role in interpreting signals delivered by our sensory apparatus, in shaping action goals, and in creating a powerful tool for reasoning and inferencing. This view provides an important correction to embodied accounts of language that reduce language to action, perception, emotion and mental simulation. The presence of a language system, however, also has important consequences for perception, action, emotion, and memory. Language stamps signals from perception, action, and emotional systems with rich cognitive markers that transform the role of these signals in the overall cognitive architecture of the human mind. This view does not deny that language is implemented by means of universal principles of neural organization. However, language creates the possibility to generate rich internal models of the world that are shaped and made accessible by the characteristics of a language system. This makes us less dependent on direct action-perception couplings and might sometimes even come at the expense of the veridicality of perception. In cognitive (neuro)science the pendulum has swung from language as the key to understanding the organization of the human mind to the perspective that it is a byproduct of perception and action. It is time that it partly swings back again.
  • Hagoort, P. (1998). De electrofysiologie van taal: Wat hersenpotentialen vertellen over het menselijk taalvermogen [The electrophysiology of language: What brain potentials tell us about the human language faculty]. Neuropraxis, 2, 223-229.
  • Hagoort, P. (1998). De spreker als sprinter [The speaker as sprinter]. Psychologie, 17, 48-49.
  • Hagoort, P. (2005). De talige aap [The linguistic ape]. Linguaan, 26-35.
  • Hagoort, P., Hald, L. A., Bastiaansen, M. C. M., & Petersson, K. M. (2004). Integration of word meaning and world knowledge in language comprehension. Science, 304(5669), 438-441. doi:10.1126/science.1095455.

    Abstract

    Although the sentences that we hear or read have meaning, this does not necessarily mean that they are also true. Relatively little is known about the critical brain structures for, and the relative time course of, establishing the meaning and truth of linguistic expressions. We present electroencephalogram data that show the rapid parallel integration of both semantic and world knowledge during the interpretation of a sentence. Data from functional magnetic resonance imaging revealed that the left inferior prefrontal cortex is involved in the integration of both meaning and world knowledge. Finally, oscillatory brain responses indicate that the brain keeps a record of what makes a sentence hard to interpret.
  • Hagoort, P. (1998). Hersenen en taal in onderzoek en praktijk [Brain and language in research and practice]. Neuropraxis, 6, 204-205.
  • Hagoort, P. (1992). Vertraagde lexicale integratie bij afatisch taalverstaan [Delayed lexical integration in aphasic language comprehension]. Stem, Spraak- en Taalpathologie, 1, 5-23.
  • Hagoort, P., & Özyürek, A. (2024). Extending the architecture of language from a multimodal perspective. Topics in Cognitive Science. Advance online publication. doi:10.1111/tops.12728.

    Abstract

    Language is inherently multimodal. In spoken languages, combined spoken and visual signals (e.g., co-speech gestures) are an integral part of linguistic structure and language representation. This requires an extension of the parallel architecture, which needs to include the visual signals concomitant to speech. We present the evidence for the multimodality of language. In addition, we propose that distributional semantics might provide a format for integrating speech and co-speech gestures in a common semantic representation.
  • Hamilton, A., & Holler, J. (Eds.). (2023). Face2face: Advancing the science of social interaction [Special Issue]. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences. Retrieved from https://royalsocietypublishing.org/toc/rstb/2023/378/1875.

    Abstract

    Face-to-face interaction is fundamental to human sociality but is very complex to study in a scientific fashion. This theme issue brings together cutting-edge approaches to the study of face-to-face interaction and showcases how we can make progress in this area. Researchers are now studying interaction in adult conversation, parent-child relationships, neurodiverse groups, interactions with virtual agents and various animal species. The theme issue reveals how new paradigms are leading to more ecologically grounded and comprehensive insights into what social interaction is. Scientific advances in this area can lead to improvements in education and therapy, better understanding of neurodiversity and more engaging artificial agents.
  • Hamilton, A., & Holler, J. (2023). Face2face: Advancing the science of social interaction. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 378(1875): 20210470. doi:10.1098/rstb.2021.0470.

    Abstract

    Face-to-face interaction is core to human sociality and its evolution, and provides the environment in which most of human communication occurs. Research into the full complexities that define face-to-face interaction requires a multi-disciplinary, multi-level approach, illuminating from different perspectives how we and other species interact. This special issue showcases a wide range of approaches, bringing together detailed studies of naturalistic social-interactional behaviour with larger scale analyses for generalization, and investigations of socially contextualized cognitive and neural processes that underpin the behaviour we observe. We suggest that this integrative approach will allow us to propel forwards the science of face-to-face interaction by leading us to new paradigms and novel, more ecologically grounded and comprehensive insights into how we interact with one another and with artificial agents, how differences in psychological profiles might affect interaction, and how the capacity to socially interact develops and has evolved in humans and other species. This theme issue makes a first step in this direction, with the aim of breaking down disciplinary boundaries and emphasizing the value of illuminating the many facets of face-to-face interaction.
  • Hammarström, H. (2010). A full-scale test of the language farming dispersal hypothesis. Diachronica, 27(2), 197-213. doi:10.1075/dia.27.2.02ham.

    Abstract

    One attempt at explaining why some language families are large (while others are small) is the hypothesis that the families that are now large became large because their ancestral speakers had a technological advantage, most often agriculture. Variants of this idea are referred to as the Language Farming Dispersal Hypothesis. Previously, detailed language family studies have uncovered various supporting examples and counterexamples to this idea. In the present paper I weigh the evidence from ALL attested language families. For each family, I use the number of member languages as a measure of cardinal size, member language coordinates to measure geospatial size and ethnographic evidence to assess subsistence status. This data shows that, although agricultural families tend to be larger in cardinal size, their size is hardly due to the simple presence of farming. If farming were responsible for language family expansions, we would expect a greater east-west geospatial spread of large families than is actually observed. The data, however, is compatible with weaker versions of the farming dispersal hypothesis as well as with models where large families acquire farming because of their size, rather than the other way around.
  • Hammarström, H. (2010). The status of the least documented language families in the world. Language Documentation and Conservation, 4, 177-212. Retrieved from http://hdl.handle.net/10125/4478.

    Abstract

    This paper aims to list all known language families that are not yet extinct and all of whose member languages are very poorly documented, i.e., less than a sketch grammar’s worth of data has been collected. It explains what constitutes a valid family, what amount and kinds of documentary data are sufficient, when a language is considered extinct, and more. It is hoped that the survey will be useful in setting priorities for documentation fieldwork, in particular for those documentation efforts whose underlying goal is to understand linguistic diversity.
  • Hanulikova, A., & Hamann, S. (2010). Illustrations of Slovak IPA. Journal of the International Phonetic Association, 40(3), 373-378. doi:10.1017/S0025100310000162.

    Abstract

    Slovak (sometimes also called Slovakian) is an Indo-European language belonging to the West-Slavic branch, and is most closely related to Czech. Slovak is spoken as a native language by 4.6 million speakers in Slovakia (that is by roughly 85% of the population), and by over two million Slovaks living abroad, most of them in the USA, the Czech Republic, Hungary, Canada and Great Britain (Office for Slovaks Living Abroad 2009).
  • Hanulikova, A., McQueen, J. M., & Mitterer, H. (2010). Possible words and fixed stress in the segmentation of Slovak speech. Quarterly Journal of Experimental Psychology, 63, 555-579. doi:10.1080/17470210903038958.

    Abstract

    The possible-word constraint (PWC; Norris, McQueen, Cutler, & Butterfield, 1997) has been proposed as a language-universal segmentation principle: Lexical candidates are disfavoured if the resulting segmentation of continuous speech leads to vowelless residues in the input—for example, single consonants. Three word-spotting experiments investigated segmentation in Slovak, a language with single-consonant words and fixed stress. In Experiment 1, Slovak listeners detected real words such as ruka “hand” embedded in prepositional-consonant contexts (e.g., /gruka/) faster than those in nonprepositional-consonant contexts (e.g., /truka/) and slowest in syllable contexts (e.g., /dugruka/). The second experiment controlled for effects of stress. Responses were still fastest in prepositional-consonant contexts, but were now slowest in nonprepositional-consonant contexts. In Experiment 3, the lexical and syllabic status of the contexts was manipulated. Responses were again slowest in nonprepositional-consonant contexts but equally fast in prepositional-consonant, prepositional-vowel, and nonprepositional-vowel contexts. These results suggest that Slovak listeners use fixed stress and the PWC to segment speech, but that single consonants that can be words have a special status in Slovak segmentation. Knowledge about what constitutes a phonologically acceptable word in a given language therefore determines whether vowelless stretches of speech are or are not treated as acceptable parts of the lexical parse.
  • Harmon, Z., Barak, L., Shafto, P., Edwards, J., & Feldman, N. H. (2023). The competition-compensation account of developmental language disorder. Developmental Science, 26(4): e13364. doi:10.1111/desc.13364.

    Abstract

    Children with developmental language disorder (DLD) regularly use the bare form of verbs (e.g., dance) instead of inflected forms (e.g., danced). We propose an account of this behavior in which processing difficulties of children with DLD disproportionally affect processing novel inflected verbs in their input. Limited experience with inflection in novel contexts leads the inflection to face stronger competition from alternatives. Competition is resolved through a compensatory behavior that involves producing a more accessible alternative: in English, the bare form. We formalize this hypothesis within a probabilistic model that trades off context-dependent versus independent processing. Results show an over-reliance on preceding stem contexts when retrieving the inflection in a model that has difficulty with processing novel inflected forms. We further show that following the introduction of a bias to store and retrieve forms with preceding contexts, generalization in the typically developing (TD) models remains more or less stable, while the same bias in the DLD models exaggerates difficulties with generalization. Together, the results suggest that inconsistent use of inflectional morphemes by children with DLD could stem from inferences they make on the basis of data containing fewer novel inflected forms. Our account extends these findings to suggest that problems with detecting a form in novel contexts combined with a bias to rely on familiar contexts when retrieving a form could explain sequential planning difficulties in children with DLD.
  • Hartmann, S., Wacewicz, S., Ravignani, A., Valente, D., Rodrigues, E. D., Asano, R., & Jadoul, Y. (2024). Delineating the field of language evolution research: A quantitative analysis of peer-review patterns at the Joint Conference on Language Evolution (JCoLE 2022). Interaction studies, 25(1), 100-117. doi:10.1075/is.00024.har.

    Abstract

    Research on language evolution is an established subject area, yet it is permeated by terminological controversies about which topics should be considered pertinent to the field and which should not. As a consequence, scholars focusing on language evolution struggle to provide precise demarcations of the discipline, where even the very central notions of evolution and language are elusive. We aimed to provide a data-driven characterisation of language evolution as a field of research by relying on quantitative analysis of data drawn from 697 reviews on 255 submissions from the Joint Conference on Language Evolution 2022 (Kanazawa, Japan). Our results delineate a field characterized by a core of main research topics such as iconicity, sign language, and multimodality. Although these topics have been explored within the framework of language evolution research, they have only very recently become popular in linguistics. As a result, language evolution has the potential to emerge as a forefront of linguistic research, bringing innovation to the study of language. We also see the emergence of more recent topics such as rhythm, music, and vocal learning. Furthermore, the community identifies cognitive science, primatology, archaeology, palaeoanthropology, and genetics as key areas, encouraging empirical rather than theoretical work. With new themes, models, and methodologies emerging, our results depict an intrinsically multidisciplinary and evolving research field, likely adapting just as language itself does.
  • Haun, D. B. M., Allen, G. L., & Wedell, D. H. (2005). Bias in spatial memory: A categorical endorsement. Acta Psychologica, 118(1-2), 149-170. doi:10.1016/j.actpsy.2004.10.011.
  • Haun, D. B. M., Jordan, F., Vallortigara, G., & Clayton, N. S. (2010). Origins of spatial, temporal and numerical cognition: Insights from comparative psychology [Review article]. Trends in Cognitive Sciences, 14, 552-560. doi:10.1016/j.tics.2010.09.006.

    Abstract

    Contemporary comparative cognition has a large repertoire of animal models and methods, with concurrent theoretical advances that are providing initial answers to crucial questions about human cognition. What cognitive traits are uniquely human? What are the species-typical inherited predispositions of the human mind? What is the human mind capable of without certain types of specific experiences with the surrounding environment? Here, we review recent findings from the domains of space, time and number cognition. These findings are produced using different comparative methodologies relying on different animal species, namely birds and non-human great apes. The study of these species not only reveals the range of cognitive abilities across vertebrates, but also increases our understanding of human cognition in crucial ways.
  • Hay, J. B., & Baayen, R. H. (2005). Shifting paradigms: Gradient structure in morphology. Trends in Cognitive Sciences, 9(7), 342-348. doi:10.1016/j.tics.2005.04.002.

    Abstract

    Morphology is the study of the internal structure of words. A vigorous ongoing debate surrounds the question of how such internal structure is best accounted for: by means of lexical entries and deterministic symbolic rules, or by means of probabilistic subsymbolic networks implicitly encoding structural similarities in connection weights. In this review, we separate the question of subsymbolic versus symbolic implementation from the question of deterministic versus probabilistic structure. We outline a growing body of evidence, mostly external to the above debate, indicating that morphological structure is indeed intrinsically graded. By allowing probability into the grammar, progress can be made towards solving some long-standing puzzles in morphological theory.
  • Hayano, K. (2004). Kaiwa ni okeru ninshikiteki ken’i no koushou: Shuujoshi yo, ne, odoroki hyouji no bunpu to kinou [Negotiation of Epistemic Authority in Conversation: on the use of final particles yo, ne and surprise markers]. Studies in Pragmatics, 6, 17-28.
  • Hegemann, L., Corfield, E. C., Askelund, A. D., Allegrini, A. G., Askeland, R. B., Ronald, A., Ask, H., St Pourcain, B., Andreassen, O. A., Hannigan, L. J., & Havdahl, A. (2024). Genetic and phenotypic heterogeneity in early neurodevelopmental traits in the Norwegian Mother, Father and Child Cohort Study. Molecular Autism, 15: 25. doi:10.1186/s13229-024-00599-0.

    Abstract

    Background
    Autism and different neurodevelopmental conditions frequently co-occur, as do their symptoms at sub-diagnostic threshold levels. Overlapping traits and shared genetic liability are potential explanations.

    Methods
    In the population-based Norwegian Mother, Father, and Child Cohort study (MoBa), we leverage item-level data to explore the phenotypic factor structure and genetic architecture underlying neurodevelopmental traits at age 3 years (N = 41,708–58,630) using maternal reports on 76 items assessing children’s motor and language development, social functioning, communication, attention, activity regulation, and flexibility of behaviors and interests.

    Results
    We identified 11 latent factors at the phenotypic level. These factors showed associations with diagnoses of autism and other neurodevelopmental conditions. Most shared genetic liabilities with autism, ADHD, and/or schizophrenia. Item-level GWAS revealed trait-specific genetic correlations with autism (items rg range = − 0.27–0.78), ADHD (items rg range = − 0.40–1), and schizophrenia (items rg range = − 0.24–0.34). We find little evidence of common genetic liability across all neurodevelopmental traits but more so for several genetic factors across more specific areas of neurodevelopment, particularly social and communication traits. Some of these factors, such as one capturing prosocial behavior, overlap with factors found in the phenotypic analyses. Other areas, such as motor development, seemed to have more heterogeneous etiology, with specific traits showing a less consistent pattern of genetic correlations with each other.

    Conclusions
    These exploratory findings emphasize the etiological complexity of neurodevelopmental traits at this early age. In particular, diverse associations with neurodevelopmental conditions and genetic heterogeneity could inform follow-up work to identify shared and differentiating factors in the early manifestations of neurodevelopmental traits and their relation to autism and other neurodevelopmental conditions. This in turn could have implications for clinical screening tools and programs.
  • Heid, I. M., Henneman, P., Hicks, A., Coassin, S., Winkler, T., Aulchenko, Y. S., Fuchsberger, C., Song, K., Hivert, M.-F., Waterworth, D. M., Timpson, N. J., Richards, J. B., Perry, J. R. B., Tanaka, T., Amin, N., Kollerits, B., Pichler, I., Oostra, B. A., Thorand, B., Frants, R. R., Illig, T., Dupuis, J., Glaser, B., Spector, T., Guralnik, J., Egan, J. M., Florez, J. C., Evans, D. M., Soranzo, N., Bandinelli, S., Carlson, O. D., Frayling, T. M., Burling, K., Smith, G. D., Mooser, V., Ferrucci, L., Meigs, J. B., Vollenweider, P., Dijk, K. W. v., Pramstaller, P., Kronenberg, F., & van Duijn, C. M. (2010). Clear detection of ADIPOQ locus as the major gene for plasma adiponectin: Results of genome-wide association analyses including 4659 European individuals. Atherosclerosis, 208(2), 412-420. doi:10.1016/j.atherosclerosis.2009.11.035.

    Abstract

    OBJECTIVE: Plasma adiponectin is strongly associated with various components of metabolic syndrome, type 2 diabetes and cardiovascular outcomes. Concentrations are highly heritable and differ between men and women. We therefore aimed to investigate the genetics of plasma adiponectin in men and women. METHODS: We combined genome-wide association scans of three population-based studies including 4659 persons. For the replication stage in 13795 subjects, we selected the 20 top signals of the combined analysis, as well as the 10 top signals with p-values less than 1.0 x 10^-4 for each of the men- and women-specific analyses. We further selected 73 SNPs that were consistently associated with metabolic syndrome parameters in previous genome-wide association studies to check for their association with plasma adiponectin. RESULTS: The ADIPOQ locus showed genome-wide significant p-values in the combined (p = 4.3 x 10^-24) as well as in both women- and men-specific analyses (p = 8.7 x 10^-17 and p = 2.5 x 10^-11, respectively). None of the other 39 top signal SNPs showed evidence for association in the replication analysis. None of 73 SNPs from metabolic syndrome loci exhibited association with plasma adiponectin (p > 0.01). CONCLUSIONS: We demonstrated the ADIPOQ gene as the only major gene for plasma adiponectin, which explains 6.7% of the phenotypic variance. We further found that neither this gene nor any of the metabolic syndrome loci explained the sex differences observed for plasma adiponectin. Larger studies are needed to identify more moderate genetic determinants of plasma adiponectin.
  • Heim, F., Fisher, S. E., Scharff, C., Ten Cate, C., & Riebel, K. (2023). Effects of cortical FoxP1 knockdowns on learned song preference in female zebra finches. eNeuro, 10(3): ENEURO.0328-22.2023. doi:10.1523/ENEURO.0328-22.2023.

    Abstract

    The search for molecular underpinnings of human vocal communication has focused on genes encoding forkhead-box transcription factors, as rare disruptions of FOXP1, FOXP2, and FOXP4 have been linked to disorders involving speech and language deficits. In male songbirds, an animal model for vocal learning, experimentally altered expression levels of these transcription factors impair song production learning. The relative contributions of auditory processing, motor function or auditory-motor integration to the deficits observed after different FoxP manipulations in songbirds are unknown. To examine the potential effects on auditory learning and development, we focused on female zebra finches (Taeniopygia guttata) that do not sing but develop song memories, which can be assayed in operant preference tests. We tested whether the relatively high levels of FoxP1 expression in forebrain areas implicated in female song preference learning are crucial for the development and/or maintenance of this behavior. Juvenile and adult female zebra finches received FoxP1 knockdowns targeted to HVC (proper name) or to the caudomedial mesopallium (CMM). Irrespective of target site and whether the knockdown took place before (juveniles) or after (adults) the sensitive phase for song memorization, all groups preferred their tutor’s song. However, adult females with FoxP1 knockdowns targeted at HVC showed weaker motivation to hear song and weaker song preferences than sham-treated controls, while no such differences were observed after knockdowns in CMM or in juveniles. In summary, FoxP1 knockdowns in the cortical song nucleus HVC were not associated with impaired tutor song memory but reduced motivation to actively request tutor songs.
  • Heim, F., Scharff, C., Fisher, S. E., Riebel, K., & Ten Cate, C. (2024). Auditory discrimination learning and acoustic cue weighing in female zebra finches with localized FoxP1 knockdowns. Journal of Neurophysiology, 131, 950-963. doi:10.1152/jn.00228.2023.

    Abstract

    Rare disruptions of the transcription factor FOXP1 are implicated in a human neurodevelopmental disorder characterized by autism and/or intellectual disability with prominent problems in speech and language abilities. Avian orthologues of this transcription factor are evolutionarily conserved and highly expressed in specific regions of songbird brains, including areas associated with vocal production learning and auditory perception. Here, we investigated possible contributions of FoxP1 to song discrimination and auditory perception in juvenile and adult female zebra finches. They received lentiviral knockdowns of FoxP1 in one of two brain areas involved in auditory stimulus processing, HVC (proper name) or CMM (caudomedial mesopallium). Ninety-six females, distributed over different experimental and control groups, were trained to discriminate between two stimulus songs in an operant Go/Nogo paradigm and subsequently tested with an array of stimuli. This made it possible to assess how well they recognized and categorized altered versions of training stimuli and whether localized FoxP1 knockdowns affected the role of different features during discrimination and categorization of song. Although FoxP1 expression was significantly reduced by the knockdowns, neither discrimination of the stimulus songs nor categorization of songs modified in pitch, sequential order of syllables or by reversed playback was affected. Subsequently, we analyzed the full dataset to assess the impact of the different stimulus manipulations for cue weighing in song discrimination. Our findings show that zebra finches rely on multiple parameters for song discrimination, but with relatively more prominent roles for spectral parameters and syllable sequencing as cues for song discrimination.

    NEW & NOTEWORTHY In humans, mutations of the transcription factor FoxP1 are implicated in speech and language problems. In songbirds, FoxP1 has been linked to male song learning and female preference strength. We found that FoxP1 knockdowns in female HVC and caudomedial mesopallium (CMM) did not alter song discrimination or categorization based on spectral and temporal information. However, this large dataset allowed us to validate the stronger weighting of spectral over temporal cues in song recognition.
  • Heinemann, T. (2010). The question–response system of Danish. Journal of Pragmatics, 42, 2703-2725. doi:10.1016/j.pragma.2010.04.007.

    Abstract

    This paper provides an overview of the question–response system of Danish, based on a collection of 350 questions (and responses) collected from video recordings of naturally occurring face-to-face interactions between native speakers of Danish. The paper identifies the lexico-grammatical options for formulating questions, the range of social actions that can be implemented through questions and the relationship between questions and responses. It further describes features where Danish questions differ from a range of other languages in terms of, for instance, distribution and the relationship between question format and social action. For instance, Danish has a high frequency of interrogatively formatted questions and questions that are negatively formulated, when compared to languages that have the same grammatical options. In terms of action, Danish shows a higher number of questions that are used for making suggestions, offers and requests and does not use repetition as a way of answering a question as often as other languages.
  • Hellwig, B., Allen, S. E. M., Davidson, L., Defina, R., Kelly, B. F., & Kidd, E. (Eds.). (2023). The acquisition sketch project [Special Issue]. Language Documentation and Conservation Special Publication, 28.

    Abstract

    This special publication aims to build a renewed enthusiasm for collecting acquisition data across many languages, including those facing endangerment and loss. It presents a guide for documenting and describing child language and child-directed language in diverse languages and cultures, as well as a collection of acquisition sketches based on this guide. The guide is intended for anyone interested in working across child language and language documentation, including, for example, field linguists and language documenters, community language workers, child language researchers or graduate students.
  • Hellwig, B., Allen, S. E. M., Davidson, L., Defina, R., Kelly, B. F., & Kidd, E. (2023). Introduction: The acquisition sketch project. Language Documentation and Conservation Special Publication, 28, 1-3. Retrieved from https://hdl.handle.net/10125/74718.
  • Henke, L., Lewis, A. G., & Meyer, L. (2023). Fast and slow rhythms of naturalistic reading revealed by combined eye-tracking and electroencephalography. The Journal of Neuroscience, 43(24), 4461-4469. doi:10.1523/JNEUROSCI.1849-22.2023.

    Abstract

    Neural oscillations are thought to support speech and language processing. They may not only inherit acoustic rhythms, but might also impose endogenous rhythms onto processing. In support of this, we here report that human (both male and female) eye movements during naturalistic reading exhibit rhythmic patterns that show frequency-selective coherence with the EEG, in the absence of any stimulation rhythm. Periodicity was observed in two distinct frequency bands: First, word-locked saccades at 4-5 Hz display coherence with whole-head theta-band activity. Second, fixation durations fluctuate rhythmically at ∼1 Hz, in coherence with occipital delta-band activity. This latter effect was additionally phase-locked to sentence endings, suggesting a relationship with the formation of multi-word chunks. Together, eye movements during reading contain rhythmic patterns that occur in synchrony with oscillatory brain activity. This suggests that linguistic processing imposes preferred processing time scales onto reading, largely independent of actual physical rhythms in the stimulus.
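
    As a rough illustration of the frequency-selective coherence analysis described above, the Python sketch below computes magnitude-squared coherence between two synthetic signals that share a common rhythm near 4.5 Hz. The signals, sampling rate and spectral settings are invented for illustration and do not reproduce the authors' combined eye-tracking/EEG pipeline.

        import numpy as np
        from scipy.signal import coherence

        # Synthetic saccade-rate and EEG signals sharing a ~4.5 Hz rhythm plus noise.
        fs = 250.0                        # sampling rate in Hz (invented)
        t = np.arange(0, 120, 1 / fs)     # two minutes of samples
        shared = np.sin(2 * np.pi * 4.5 * t)
        rng = np.random.default_rng(1)
        saccade_rate = shared + rng.normal(size=t.size)
        eeg = 0.8 * shared + rng.normal(size=t.size)

        # Welch-style magnitude-squared coherence in ~0.25 Hz frequency bins.
        freqs, coh = coherence(saccade_rate, eeg, fs=fs, nperseg=1024)
        theta = (freqs >= 4) & (freqs <= 5)
        print(f"mean 4-5 Hz coherence: {coh[theta].mean():.2f}")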
  • Heritage, J., Elliott, M. N., Stivers, T., Richardson, A., & Mangione-Smith, R. (2010). Reducing inappropriate antibiotics prescribing: The role of online commentary on physical examination findings. Patient Education and Counseling, 81, 119-125. doi:10.1016/j.pec.2009.12.005.

    Abstract

    Objective: This study investigates the relationship of ‘online commentary’ (contemporaneous physician comments about physical examination [PE] findings) with (i) parent questioning of the treatment recommendation and (ii) inappropriate antibiotic prescribing. Methods: A nested cross-sectional study of 522 encounters motivated by upper respiratory symptoms in 27 California pediatric practices (38 pediatricians). Physicians completed a post-visit survey regarding physical examination findings, diagnosis, treatment, and whether they perceived the parent as expecting an antibiotic. Taped encounters were coded for ‘problem’ online commentary (PE findings discussed as significant or clearly abnormal) and ‘no problem’ online commentary (PE findings discussed reassuringly as normal or insignificant). Results: Online commentary during the PE occurred in 73% of visits with viral diagnoses (n = 261). Compared to similar cases with ‘no problem’ online commentary, ‘problem’ comments were associated with a 13% greater probability of parents questioning a non-antibiotic treatment plan (95% CI: 0-26%, p = .05) and a 27% (95% CI: 2-52%, p < .05) greater probability of an inappropriate antibiotic prescription. Conclusion: With viral illnesses, problematic online comments are associated with more pediatrician-parent conflict over non-antibiotic treatment recommendations. This may increase inappropriate antibiotic prescribing. Practice implications: In viral cases, physicians should consider avoiding the use of problematic online commentary.
  • Hersh, T. A., Ravignani, A., & Burchardt, L. (2023). Robust rhythm reporting will advance ecological and evolutionary research. Methods in Ecology and Evolution, 14(6), 1398-1407. doi:10.1111/2041-210X.14118.

    Abstract


    Rhythmicity in the millisecond to second range is a fundamental building block of communication and coordinated movement. But how widespread are rhythmic capacities across species, and how did they evolve under different environmental pressures? Comparative research is necessary to answer these questions but has been hindered by limited crosstalk and comparability among results from different study species.
    Most acoustics studies do not explicitly focus on characterising or quantifying rhythm, but many are just a few scrapes away from contributing to and advancing the field of comparative rhythm research. Here, we present an eight-level rhythm reporting framework which details actionable steps researchers can take to report rhythm-relevant metrics. Levels fall into two categories: metric reporting and data sharing. Metric reporting levels include defining rhythm-relevant metrics, providing point estimates of temporal interval variability, reporting interval distributions, and conducting rhythm analyses. Data sharing levels are: sharing audio recordings, sharing interval durations, sharing sound element start and end times, and sharing audio recordings with sound element start/end times.
    Using sounds recorded from a sperm whale as a case study, we demonstrate how each reporting framework level can be implemented on real data. We also highlight existing best practice examples from recent research spanning multiple species. We clearly detail how engagement with our framework can be tailored case-by-case based on how much time and effort researchers are willing to contribute. Finally, we illustrate how reporting at any of the suggested levels will help advance comparative rhythm research.
    This framework will actively facilitate a comparative approach to acoustic rhythms while also promoting cooperation and data sustainability. By quantifying and reporting rhythm metrics more consistently and broadly, new avenues of inquiry and several long-standing, big picture research questions become more tractable. These lines of research can inform not only about the behavioural ecology of animals but also about the evolution of rhythm-relevant phenomena and the behavioural neuroscience of rhythm production and perception. Rhythm is clearly an emergent feature of life; adopting our framework, researchers from different fields and with different study species can help understand why.
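
    To make the metric-reporting levels concrete, the Python sketch below derives inter-onset intervals from a list of sound-element onset times and reports two widely used variability point estimates: the coefficient of variation and the normalized pairwise variability index (nPVI). The onset times are invented, and the framework itself is not limited to these particular metrics.

        import numpy as np

        # Invented onset times (in seconds) of successive sound elements in a recording.
        onsets = np.array([0.00, 0.41, 0.83, 1.22, 1.65, 2.04, 2.47, 2.86])

        # Inter-onset intervals (IOIs) are the raw material for most rhythm metrics.
        iois = np.diff(onsets)

        # Point estimate of temporal variability: coefficient of variation of the IOIs.
        cv = iois.std(ddof=1) / iois.mean()

        # Normalized pairwise variability index over successive interval pairs.
        npvi = 100 / (len(iois) - 1) * sum(
            abs(a - b) / ((a + b) / 2) for a, b in zip(iois[:-1], iois[1:])
        )

        print(f"n = {len(iois)} intervals, mean IOI = {iois.mean():.3f} s")
        print(f"CV = {cv:.3f}, nPVI = {npvi:.1f}")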

  • Hersh, T. A., Ravignani, A., & Whitehead, H. (2024). Cetaceans are the next frontier for vocal rhythm research. Proceedings of the National Academy of Sciences of the United States of America, 121(25): e2313093121. doi:10.1073/pnas.2313093121.

    Abstract

    While rhythm can facilitate and enhance many aspects of behavior, its evolutionary trajectory in vocal communication systems remains enigmatic. We can trace evolutionary processes by investigating rhythmic abilities in different species, but research to date has largely focused on songbirds and primates. We present evidence that cetaceans—whales, dolphins, and porpoises—are a missing piece of the puzzle for understanding why rhythm evolved in vocal communication systems. Cetaceans not only produce rhythmic vocalizations but also exhibit behaviors known or thought to play a role in the evolution of different features of rhythm. These behaviors include vocal learning abilities, advanced breathing control, sexually selected vocal displays, prolonged mother–infant bonds, and behavioral synchronization. The untapped comparative potential of cetaceans is further enhanced by high interspecific diversity, which generates natural ranges of vocal and social complexity for investigating various evolutionary hypotheses. We show that rhythm (particularly isochronous rhythm, when sounds are equally spaced in time) is prevalent in cetacean vocalizations but is used in different contexts by baleen and toothed whales. We also highlight key questions and research areas that will enhance understanding of vocal rhythms across taxa. By coupling an infraorder-level taxonomic assessment of vocal rhythm production with comparisons to other species, we illustrate how broadly comparative research can contribute to a more nuanced understanding of the prevalence, evolution, and possible functions of rhythm in animal communication.

  • Hill, C. (2010). [Review of the book Discourse and Grammar in Australian Languages ed. by Ilana Mushin and Brett Baker]. Studies in Language, 34(1), 215-225. doi:10.1075/sl.34.1.12hil.
  • Hintz, F., Khoe, Y. H., Strauß, A., Psomakas, A. J. A., & Holler, J. (2023). Electrophysiological evidence for the enhancement of gesture-speech integration by linguistic predictability during multimodal discourse comprehension. Cognitive, Affective and Behavioral Neuroscience, 23, 340-353. doi:10.3758/s13415-023-01074-8.

    Abstract

    In face-to-face discourse, listeners exploit cues in the input to generate predictions about upcoming words. Moreover, in addition to speech, speakers produce a multitude of visual signals, such as iconic gestures, which listeners readily integrate with incoming words. Previous studies have shown that processing of target words is facilitated when these are embedded in predictable compared to non-predictable discourses and when accompanied by iconic compared to meaningless gestures. In the present study, we investigated the interaction of both factors. We recorded electroencephalogram from 60 Dutch adults while they were watching videos of an actress producing short discourses. The stimuli consisted of an introductory and a target sentence; the latter contained a target noun. Depending on the preceding discourse, the target noun was either predictable or not. Each target noun was paired with an iconic gesture and a gesture that did not convey meaning. In both conditions, gesture presentation in the video was timed such that the gesture stroke slightly preceded the onset of the spoken target by 130 ms. Our ERP analyses revealed independent facilitatory effects for predictable discourses and iconic gestures. However, the interactive effect of both factors demonstrated that target processing (i.e., gesture-speech integration) was facilitated most when targets were part of predictable discourses and accompanied by an iconic gesture. Our results thus suggest a strong intertwinement of linguistic predictability and non-verbal gesture processing where listeners exploit predictive discourse cues to pre-activate verbal and non-verbal representations of upcoming target words.
  • Hintz, F., Voeten, C. C., & Scharenborg, O. (2023). Recognizing non-native spoken words in background noise increases interference from the native language. Psychonomic Bulletin & Review, 30, 1549-1563. doi:10.3758/s13423-022-02233-7.

    Abstract

    Listeners frequently recognize spoken words in the presence of background noise. Previous research has shown that noise reduces phoneme intelligibility and hampers spoken-word recognition—especially for non-native listeners. In the present study, we investigated how noise influences lexical competition in both the non-native and the native language, reflecting the degree to which both languages are co-activated. We recorded the eye movements of native Dutch participants as they listened to English sentences containing a target word while looking at displays containing four objects. On target-present trials, the visual referent depicting the target word was present, along with three unrelated distractors. On target-absent trials, the target object (e.g., wizard) was absent. Instead, the display contained an English competitor, overlapping with the English target in phonological onset (e.g., window), a Dutch competitor, overlapping with the English target in phonological onset (e.g., wimpel, pennant), and two unrelated distractors. Half of the sentences was masked by speech-shaped noise; the other half was presented in quiet. Compared to speech in quiet, noise delayed fixations to the target objects on target-present trials. For target-absent trials, we observed that the likelihood for fixation biases towards the English and Dutch onset competitors (over the unrelated distractors) was larger in noise than in quiet. Our data thus show that the presence of background noise increases lexical competition in the task-relevant non-native (English) and in the task-irrelevant native (Dutch) language. The latter reflects stronger interference of one’s native language during non-native spoken-word recognition under adverse conditions.

  • Hintz, F., McQueen, J. M., & Meyer, A. S. (2024). Using psychometric network analysis to examine the components of spoken word recognition. Journal of Cognition, 7(1): 10. doi:10.5334/joc.340.

    Abstract

    Using language requires access to domain-specific linguistic representations, but also draws on domain-general cognitive skills. A key issue in current psycholinguistics is to situate linguistic processing in the network of human cognitive abilities. Here, we focused on spoken word recognition and used an individual differences approach to examine the links of scores in word recognition tasks with scores on tasks capturing effects of linguistic experience, general processing speed, working memory, and non-verbal reasoning. 281 young native speakers of Dutch completed an extensive test battery assessing these cognitive skills. We used psychometric network analysis to map out the direct links between the scores, that is, the unique variance between pairs of scores, controlling for variance shared with the other scores. The analysis revealed direct links between word recognition skills and processing speed. We discuss the implications of these results and the potential of psychometric network analysis for studying language processing and its embedding in the broader cognitive system.
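
    As an illustration of the core computation behind a psychometric network, the Python sketch below derives a partial-correlation network from a participants-by-tasks score matrix by inverting the correlation matrix, so that each edge reflects the unique association between two scores after controlling for all others. The data are invented, and the sketch omits the regularization typically used in published psychometric network analyses.

        import numpy as np

        # Invented score matrix: rows = participants, columns = task scores.
        rng = np.random.default_rng(2)
        scores = rng.normal(size=(281, 6))
        scores[:, 1] += 0.6 * scores[:, 0]   # e.g., two related word-recognition tasks
        scores[:, 2] += 0.4 * scores[:, 0]   # e.g., a processing-speed task

        # Partial correlations from the inverse of the correlation matrix:
        # edge (i, j) is the association between tasks i and j controlling for the rest.
        corr = np.corrcoef(scores, rowvar=False)
        prec = np.linalg.inv(corr)
        d = np.sqrt(np.diag(prec))
        partial = -prec / np.outer(d, d)
        np.fill_diagonal(partial, 1.0)

        print(np.round(partial, 2))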

  • Hintz, F., & Meyer, A. S. (Eds.). (2024). Individual differences in language skills [Special Issue]. Journal of Cognition, 7(1).
  • Hintz, F., Shkaravska, O., Dijkhuis, M., Van 't Hoff, V., Huijsmans, M., Van Dongen, R. C., Voeteé, L. A., Trilsbeek, P., McQueen, J. M., & Meyer, A. S. (2024). IDLaS-NL – A platform for running customized studies on individual differences in Dutch language skills via the internet. Behavior Research Methods, 56(3), 2422-2436. doi:10.3758/s13428-023-02156-8.

    Abstract

    We introduce the Individual Differences in Language Skills (IDLaS-NL) web platform, which enables users to run studies on individual differences in Dutch language skills via the internet. IDLaS-NL consists of 35 behavioral tests, previously validated in participants aged between 18 and 30 years. The platform provides an intuitive graphical interface for users to select the tests they wish to include in their research, to divide these tests into different sessions and to determine their order. Moreover, for standardized administration the platform provides an application (an emulated browser) wherein the tests are run. Results can be retrieved by mouse click in the graphical interface and are provided as CSV-file output via email. Similarly, the graphical interface enables researchers to modify and delete their study configurations. IDLaS-NL is intended for researchers, clinicians, educators and in general anyone conducting fundamental research into language and general cognitive skills; it is not intended for diagnostic purposes. All platform services are free of charge. Here, we provide a description of its workings as well as instructions for using the platform. The IDLaS-NL platform can be accessed at www.mpi.nl/idlas-nl.
  • Hintz, F., Voeten, C. C., Dobó, D., Lukics, K. S., & Lukács, Á. (2024). The role of general cognitive skills in integrating visual and linguistic information during sentence comprehension: Individual differences across the lifespan. Scientific Reports, 14: 17797. doi:10.1038/s41598-024-68674-3.

    Abstract

    Individuals exhibit massive variability in general cognitive skills that affect language processing. This variability is partly developmental. Here, we recruited a large sample of participants (N = 487), ranging from 9 to 90 years of age, and examined the involvement of nonverbal processing speed (assessed using visual and auditory reaction time tasks) and working memory (assessed using forward and backward Digit Span tasks) in a visual world task. Participants saw two objects on the screen and heard a sentence that referred to one of them. In half of the sentences, the target object could be predicted based on verb-selectional restrictions. We observed evidence for anticipatory processing on predictable compared to non-predictable trials. Visual and auditory processing speed had main effects on sentence comprehension and facilitated predictive processing, as evidenced by an interaction. We observed only weak evidence for the involvement of working memory in predictive sentence comprehension. Age had a nonlinear main effect (younger adults responded faster than children and older adults), but it did not differentially modulate predictive and non-predictive processing, nor did it modulate the involvement of processing speed and working memory. Our results contribute to delineating the cognitive skills that are involved in language-vision interactions.

  • De Hoop, H., Levshina, N., & Segers, M. (2023). The effect of the use of T and V pronouns in Dutch HR communication. Journal of Pragmatics, 203, 96-109. doi:10.1016/j.pragma.2022.11.017.

    Abstract

    In an online experiment among native speakers of Dutch we measured addressees' responses to emails written in the informal pronoun T or the formal pronoun V in HR communication. 172 participants (61 male, mean age 37 years) read either the V-versions or the T-versions of two invitation emails and two rejection emails by four different fictitious recruiters. After each email, participants had to score their appreciation of the company and the recruiter on five different scales each, such as The recruiter who wrote this email seems … [scale from friendly to unfriendly]. We hypothesized that (i) the V-pronoun would be more appreciated in letters of rejection, and the T-pronoun in letters of invitation, and (ii) older people would appreciate the V-pronoun more than the T-pronoun, and the other way around for younger people. Although neither of these hypotheses was supported, we did find a small effect of pronoun: Emails written in V were more highly appreciated than emails in T, irrespective of type of email (invitation or rejection), and irrespective of the participant's age, gender, and level of education. At the same time, we observed differences in the strength of this effect across different scales.
  • Hope, T. M. H., Neville, D., Talozzi, L., Foulon, C., Forkel, S. J., Thiebaut de Schotten, M., & Price, C. J. (2024). Testing the disconnectome symptom discoverer model on out-of-sample post-stroke language outcomes. Brain, 147(2), e11-e13. doi:10.1093/brain/awad352.

    Abstract

    Stroke is common, and its consequent brain damage can cause various cognitive impairments. Associations between where and how much brain lesion damage a patient has suffered, and the particular impairments that injury has caused (lesion-symptom associations) offer potentially compelling insights into how the brain implements cognition.1 A better understanding of those associations can also fill a gap in current stroke medicine by helping us to predict how individual patients might recover from post-stroke impairments.2 Most recent work in this area employs machine learning models trained with data from stroke patients whose mid-to-long-term outcomes are known.2-4 These machine learning models are tested by predicting new outcomes—typically scores on standardized tests of post-stroke impairment—for patients whose data were not used to train the model. Traditionally, these validation results have been shared in peer-reviewed publications describing the model and its training. But recently, and for the first time in this field (as far as we know), one of these pre-trained models has been made public—the Disconnectome Symptom Discoverer model (DSD), which draws its predictors from structural disconnection information inferred from stroke patients’ brain MRI.5

    Here, we test the DSD model on wholly independent data, never seen by the model authors, before they published it. Specifically, we test whether its predictive performance is just as accurate as (i.e. not significantly worse than) that reported in the original (Washington University) dataset, when predicting new patients’ outcomes at a similar time post-stroke (∼1 year post-stroke) and also in another independent sample tested later (5+ years) post-stroke. A failure to generalize the DSD model occurs if it performs significantly better in the Washington data than in our data from patients tested at a similar time point (∼1 year post-stroke). In addition, a significant decrease in predictive performance for the more chronic sample would be evidence that lesion-symptom associations differ at ∼1 year post-stroke and >5 years post-stroke.
  • Horemans, I., & Schiller, N. O. (2004). Form-priming effects in nonword naming. Brain and Language, 90(1-3), 465-469. doi:10.1016/S0093-934X(03)00457-7.

    Abstract

    Form-priming effects from sublexical (syllabic or segmental) primes in masked priming can be accounted for in two ways. One is the sublexical pre-activation view according to which segments are pre-activated by the prime, and at the time the form-related target is to be produced, retrieval/assembly of those pre-activated segments is faster compared to an unrelated situation. However, it has also been argued that form-priming effects from sublexical primes might be due to lexical pre-activation. When the sublexical prime is presented, it activates all form-related words (i.e., cohorts) in the lexicon, necessarily including the form-related target, which—as a consequence—is produced faster than in the unrelated case. Note, however, that this lexical pre-activation account makes previous pre-lexical activation of segments necessary. This study reports a nonword naming experiment to investigate whether or not sublexical pre-activation is involved in masked form priming with sublexical primes. The results demonstrated a priming effect suggesting a nonlexical effect. However, this does not exclude an additional lexical component in form priming.
  • Horton, S., Jackson, V., Boyce, J., Franken, M.-C., Siemers, S., St John, M., Hearps, S., Van Reyk, O., Braden, R., Parker, R., Vogel, A. P., Eising, E., Amor, D. J., Irvine, J., Fisher, S. E., Martin, N. G., Reilly, S., Bahlo, M., Scheffer, I., & Morgan, A. (2023). Self-reported stuttering severity is accurate: Informing methods for large-scale data collection in stuttering. Journal of Speech, Language, and Hearing Research. Advance online publication. doi:10.1044/2023_JSLHR-23-00081.

    Abstract

    Purpose:
    To our knowledge, there are no data examining the agreement between self-reported and clinician-rated stuttering severity. In the era of big data, self-reported ratings have great potential utility for large-scale data collection, where cost and time preclude in-depth assessment by a clinician. Equally, there is increasing emphasis on the need to recognize an individual's experience of their own condition. Here, we examined the agreement between self-reported stuttering severity compared to clinician ratings during a speech assessment. As a secondary objective, we determined whether self-reported stuttering severity correlated with an individual's subjective impact of stuttering.

    Method:
    Speech-language pathologists conducted face-to-face speech assessments with 195 participants (137 males) aged 5–84 years, recruited from a cohort of people with self-reported stuttering. Stuttering severity was rated on a 10-point scale by the participant and by two speech-language pathologists. Participants also completed the Overall Assessment of the Subjective Experience of Stuttering (OASES). Clinician and participant ratings were compared. The association between stuttering severity and the OASES scores was examined.

    Results:
    There was a strong positive correlation between speech-language pathologist and participant-reported ratings of stuttering severity. Participant-reported stuttering severity correlated weakly with the four OASES domains and with the OASES overall impact score.

    Conclusions:
    Participants were able to accurately rate their stuttering severity during a speech assessment using a simple one-item question. This finding indicates that self-report stuttering severity is a suitable method for large-scale data collection. Findings also support the collection of self-report subjective experience data using questionnaires, such as the OASES, which add vital information about the participants' experience of stuttering that is not captured by overt speech severity ratings alone.
  • Howarth, H., Sommer, V., & Jordan, F. (2010). Visual depictions of female genitalia differ depending on source. Medical Humanities, 36, 75-79. doi:10.1136/jmh.2009.003707.

    Abstract

    Very little research has attempted to describe normal human variation in female genitalia, and no studies have compared the visual images that women might use in constructing their ideas of average and acceptable genital morphology to see if there are any systematic differences. Our objective was to determine if visual depictions of the vulva differed according to their source so as to alert medical professionals and their patients to how these depictions might capture variation and thus influence perceptions of "normality". We conducted a comparative analysis by measuring (a) published visual materials from human anatomy textbooks in a university library, (b) feminist publications (both print and online) depicting vulval morphology, and (c) online pornography, focusing on the most visited and freely accessible sites in the UK. Post-hoc tests showed that labial protuberance was significantly less (p < .001, equivalent to approximately 7 mm) in images from online pornography compared to feminist publications. All five measures taken of vulval features were significantly correlated (p < .001) in the online pornography sample, indicating a less varied range of differences in organ proportions than the other sources where not all measures were correlated. Women and health professionals should be aware that specific sources of imagery may depict different types of genital morphology and may not accurately reflect true variation in the population, and consultations for genital surgeries should include discussion about the actual and perceived range of variation in female genital morphology.
  • Hoymann, G. (2004). [Review of the book Botswana: The future of the minority languages ed. by Herman M. Batibo and Birgit Smieja]. Journal of African Languages and Linguistics, 25(2), 171-173. doi:10.1515/jall.2004.25.2.171.
  • Hoymann, G. (2010). Questions and responses in ǂĀkhoe Hai||om. Journal of Pragmatics, 42(10), 2726-2740. doi:10.1016/j.pragma.2010.04.008.

    Abstract

    This paper examines ǂĀkhoe Hai||om, a Khoe language of the Khoisan family spoken in Northern Namibia. I document the way questions are posed in natural conversation, the actions the questions are used for and the manner in which they are responded to. I show that in this language speakers rely most heavily on content questions. I also find that speakers of ǂĀkhoe Hai||om address fewer questions to a specific individual than would be expected from prior research on Indo-European languages. Finally, I discuss some possible explanations for these findings.
  • De Hoyos, L., Barendse, M. T., Schlag, F., Van Donkelaar, M. M. J., Verhoef, E., Shapland, C. Y., Klassmann, A., Buitelaar, J., Verhulst, B., Fisher, S. E., Rai, D., & St Pourcain, B. (2024). Structural models of genome-wide covariance identify multiple common dimensions in autism. Nature Communications, 15: 1770. doi:10.1038/s41467-024-46128-8.

    Abstract

    Common genetic variation has been associated with multiple symptoms in Autism Spectrum Disorder (ASD). However, our knowledge of shared genetic factor structures contributing to this highly heterogeneous neurodevelopmental condition is limited. Here, we developed a structural equation modelling framework to directly model genome-wide covariance across core and non-core ASD phenotypes, studying autistic individuals of European descent using a case-only design. We identified three independent genetic factors most strongly linked to language/cognition, behaviour and motor development, respectively, when studying a population-representative sample (N=5,331). These analyses revealed novel associations. For example, developmental delay in acquiring personal-social skills was inversely related to language, while developmental motor delay was linked to self-injurious behaviour. We largely confirmed the three-factorial structure in independent ASD-simplex families (N=1,946), but uncovered simplex-specific genetic overlap between behaviour and language phenotypes. Thus, the common genetic architecture in ASD is multi-dimensional and contributes, in combination with ascertainment-specific patterns, to phenotypic heterogeneity.
  • Huettig, F., & Altmann, G. T. M. (2005). Word meaning and the control of eye fixation: Semantic competitor effects and the visual world paradigm. Cognition, 96(1), B23-B32. doi:10.1016/j.cognition.2004.10.003.

    Abstract

    When participants are presented simultaneously with spoken language and a visual display depicting objects to which that language refers, participants spontaneously fixate the visual referents of the words being heard [Cooper, R. M. (1974). The control of eye fixation by the meaning of spoken language: A new methodology for the real-time investigation of speech perception, memory, and language processing. Cognitive Psychology, 6(1), 84–107; Tanenhaus, M. K., Spivey-Knowlton, M. J., Eberhard, K. M., & Sedivy, J. C. (1995). Integration of visual and linguistic information in spoken language comprehension. Science, 268(5217), 1632–1634]. We demonstrate here that such spontaneous fixation can be driven by partial semantic overlap between a word and a visual object. Participants heard the word ‘piano’ when (a) a piano was depicted amongst unrelated distractors; (b) a trumpet was depicted amongst those same distractors; and (c), both the piano and trumpet were depicted. The probability of fixating the piano and the trumpet in the first two conditions rose as the word ‘piano’ unfolded. In the final condition, only fixations to the piano rose, although the trumpet was fixated more than the distractors. We conclude that eye movements are driven by the degree of match, along various dimensions that go beyond simple visual form, between a word and the mental representations of objects in the concurrent visual field.
  • Huettig, F., Chen, J., Bowerman, M., & Majid, A. (2010). Do language-specific categories shape conceptual processing? Mandarin classifier distinctions influence eye gaze behavior, but only during linguistic processing. Journal of Cognition and Culture, 10(1/2), 39-58. doi:10.1163/156853710X497167.

    Abstract

    In two eye-tracking studies we investigated the influence of Mandarin numeral classifiers - a grammatical category in the language - on online overt attention. Mandarin speakers were presented with simple sentences through headphones while their eye-movements to objects presented on a computer screen were monitored. The crucial question is what participants look at while listening to a pre-specified target noun. If classifier categories influence Mandarin speakers' general conceptual processing, then on hearing the target noun they should look at objects that are members of the same classifier category - even when the classifier is not explicitly present (cf. Huettig & Altmann, 2005). The data show that when participants heard a classifier (e.g., ba3, Experiment 1) they shifted overt attention significantly more to classifier-match objects (e.g., chair) than to distractor objects. But when the classifier was not explicitly presented in speech, overt attention to classifier-match objects and distractor objects did not differ (Experiment 2). This suggests that although classifier distinctions do influence eye-gaze behavior, they do so only during linguistic processing of that distinction and not in moment-to-moment general conceptual processing.
  • Huettig, F., & Hartsuiker, R. J. (2010). Listening to yourself is like listening to others: External, but not internal, verbal self-monitoring is based on speech perception. Language and Cognitive Processes, 3, 347-374. doi:10.1080/01690960903046926.

    Abstract

    Theories of verbal self-monitoring generally assume an internal (pre-articulatory) monitoring channel, but there is debate about whether this channel relies on speech perception or on production-internal mechanisms. Perception-based theories predict that listening to one's own inner speech has similar behavioral consequences as listening to someone else's speech. Our experiment therefore registered eye-movements while speakers named objects accompanied by phonologically related or unrelated written words. The data showed that listening to one's own speech drives eye-movements to phonologically related words, just as listening to someone else's speech does in perception experiments. The time-course of these eye-movements was very similar to that in other-perception (starting 300 ms post-articulation), which demonstrates that these eye-movements were driven by the perception of overt speech, not inner speech. We conclude that external, but not internal monitoring, is based on speech perception.
  • Huettig, F., Voeten, C. C., Pascual, E., Liang, J., & Hintz, F. (2023). Do autistic children differ in language-mediated prediction? Cognition, 239: 105571. doi:10.1016/j.cognition.2023.105571.

    Abstract

    Prediction appears to be an important characteristic of the human mind. It has also been suggested that prediction is a core difference of autistic children. Past research exploring language-mediated anticipatory eye movements in autistic children, however, has been somewhat contradictory, with some studies finding normal anticipatory processing in autistic children with low levels of autistic traits but others observing weaker prediction effects in autistic children with less receptive language skills. Here we investigated language-mediated anticipatory eye movements in young children who differed in the severity of their level of autistic traits and were in professional institutional care in Hangzhou, China. We chose the same spoken sentences (translated into Mandarin Chinese) and visual stimuli as a previous study which observed robust prediction effects in young children (Mani & Huettig, 2012) and included a control group of typically-developing children. Typically developing but not autistic children showed robust prediction effects. Most interestingly, autistic children with lower communication, motor, and (adaptive) behavior scores exhibited both less predictive and non-predictive visual attention behavior. Our results raise the possibility that differences in language-mediated anticipatory eye movements in autistic children with higher levels of autistic traits may be differences in visual attention in disguise, a hypothesis that needs further investigation.
  • Huettig, F., & Ferreira, F. (2023). The myth of normal reading. Perspectives on Psychological Science, 18(4), 863-870. doi:10.1177/17456916221127226.

    Abstract

    We argue that the educational and psychological sciences must embrace the diversity of reading rather than chase the phantom of normal reading behavior. We critically discuss the research practice of asking participants in experiments to read “normally”. We then draw attention to the large cross-cultural and linguistic diversity around the world and consider the enormous diversity of reading situations and goals. Finally, we observe that people bring a huge diversity of brains and experiences to the reading task. This leads to certain implications. First, there are important lessons for how to conduct psycholinguistic experiments. Second, we need to move beyond Anglo-centric reading research and produce models of reading that reflect the large cross-cultural diversity of languages and types of writing systems. Third, we must acknowledge that there are multiple ways of reading and reasons for reading, and none of them is normal or better or a “gold standard”. Finally, we must stop stigmatizing individuals who read differently and for different reasons, and there should be increased focus on teaching the ability to extract information relevant to the person’s goals. What is important is not how well people decode written language and how fast people read but what people comprehend given their own stated goals.
  • Huettig, F., & Hulstijn, J. (2024). The Enhanced Literate Mind Hypothesis. Topics in Cognitive Science. Advance online publication. doi:10.1111/tops.12731.

    Abstract

    In the present paper we describe the Enhanced Literate Mind (ELM) hypothesis. As individuals learn to read and write, they are, from then on, exposed to extensive written-language input and become literate. We propose that acquisition and proficient processing of written language (‘literacy’) leads to both increased language knowledge and enhanced language and non-language (perceptual and cognitive) skills. We also suggest that all neurotypical native language users, including illiterate, low literate, and high literate individuals, share a Basic Language Cognition (BLC) in the domain of oral informal language. Finally, we discuss the possibility that the acquisition of ELM leads to some degree of ‘knowledge parallelism’ between BLC and ELM in literate language users, which has implications for empirical research on individual and situational differences in spoken language processing.
  • Huettig, F., & Christiansen, M. H. (2024). Can large language models counter the recent decline in literacy levels? An important role for cognitive science. Cognitive Science, 48(8): e13487. doi:10.1111/cogs.13487.

    Abstract

    Literacy is in decline in many parts of the world, accompanied by drops in associated cognitive skills (including IQ) and an increasing susceptibility to fake news. It is possible that the recent explosive growth and widespread deployment of Large Language Models (LLMs) might exacerbate this trend, but there is also a chance that LLMs can help turn things around. We argue that cognitive science is ideally suited to help steer future literacy development in the right direction by challenging and informing current educational practices and policy. Cognitive scientists have the right interdisciplinary skills to study, analyze, evaluate, and change LLMs to facilitate their critical use, to encourage turn-taking that promotes rather than hinders literacy, to support literacy acquisition in diverse and equitable ways, and to scaffold potential future changes in what it means to be literate. We urge cognitive scientists to take up this mantle—the future impact of LLMs on human literacy skills is too important to be left to the large, predominately US-based tech companies.
  • Huisman, J. L. A., Van Hout, R., & Majid, A. (2023). Cross-linguistic constraints and lineage-specific developments in the semantics of cutting and breaking in Japonic and Germanic. Linguistic Typology, 27(1), 41-75. doi:10.1515/lingty-2021-2090.

    Abstract

    Semantic variation in the cutting and breaking domain has been shown to be constrained across languages in a previous typological study, but it was unclear whether Japanese was an outlier in this domain. Here we revisit cutting and breaking in the Japonic language area by collecting new naming data for 40 videoclips depicting cutting and breaking events in Standard Japanese, the highly divergent Tohoku dialects, as well as four related Ryukyuan languages (Amami, Okinawa, Miyako and Yaeyama). We find that the Japonic languages recapitulate the same semantic dimensions attested in the previous typological study, confirming that semantic variation in the domain of cutting and breaking is indeed cross-linguistically constrained. We then compare our new Japonic data to previously collected Germanic data and find that, in general, related languages resemble each other more than unrelated languages, and that the Japonic languages resemble each other more than the Germanic languages do. Nevertheless, English resembles all of the Japonic languages more than it resembles Swedish. Together, these findings show that the rate and extent of semantic change can differ between language families, indicating the existence of lineage-specific developments on top of universal cross-linguistic constraints.
  • Huizeling, E., Alday, P. M., Peeters, D., & Hagoort, P. (2023). Combining EEG and 3D-eye-tracking to study the prediction of upcoming speech in naturalistic virtual environments: A proof of principle. Neuropsychologia, 191: 108730. doi:10.1016/j.neuropsychologia.2023.108730.

    Abstract

    EEG and eye-tracking provide complementary information when investigating language comprehension. Evidence that speech processing may be facilitated by speech prediction comes from the observation that a listener's eye gaze moves towards a referent before it is mentioned if the remainder of the spoken sentence is predictable. However, changes to the trajectory of anticipatory fixations could result from a change in prediction or an attention shift. Conversely, N400 amplitudes and concurrent spectral power provide information about the ease of word processing the moment the word is perceived. In a proof-of-principle investigation, we combined EEG and eye-tracking to study linguistic prediction in naturalistic, virtual environments. We observed increased processing, reflected in theta band power, either during verb processing - when the verb was predictive of the noun - or during noun processing - when the verb was not predictive of the noun. Alpha power was higher in response to the predictive verb and unpredictable nouns. We replicated typical effects of noun congruence but not predictability on the N400 in response to the noun. Thus, the rich visual context that accompanied speech in virtual reality influenced language processing compared to previous reports, where the visual context may have facilitated processing of unpredictable nouns. Finally, anticipatory fixations were predictive of spectral power during noun processing and the length of time fixating the target could be predicted by spectral power at verb onset, conditional on the object having been fixated. Overall, we show that combining EEG and eye-tracking provides a promising new method to answer novel research questions about the prediction of upcoming linguistic input, for example, regarding the role of extralinguistic cues in prediction during language comprehension.
  • Hulten, A., Laaksonen, H., Vihla, M., Laine, M., & Salmelin, R. (2010). Modulation of brain activity after learning predicts long-term memory for words. Journal of Neuroscience, 30(45), 15160-15164. doi:10.1523/JNEUROSCI.1278-10.2010.

    Abstract

    The acquisition and maintenance of new language information, such as picking up new words, is a critical human ability that is needed throughout the life span. Most likely you learned the word “blog” quite recently as an adult, whereas the word “kipe,” which in the 1970s denoted stealing, now seems unfamiliar. Brain mechanisms underlying the long-term maintenance of new words have remained unknown, albeit they could provide important clues to the considerable individual differences in the ability to remember words. After successful training of a set of novel object names we tracked, over a period of 10 months, the maintenance of this new vocabulary in 10 human participants by repeated behavioral tests and magnetoencephalography measurements of overt picture naming. When naming-related activation in the left frontal and temporal cortex was enhanced 1 week after training, compared with the level at the end of training, the individual retained a good command of the new vocabulary at 10 months; vice versa, individuals with reduced activation at 1 week posttraining were less successful in recalling the names at 10 months. This finding suggests an individual neural marker for memory, in the context of language. Learning is not over when the acquisition phase has been successfully completed: neural events during the access to recently established word representations appear to be important for the long-term outcome of learning.
  • Hustá, C., Nieuwland, M. S., & Meyer, A. S. (2023). Effects of picture naming and categorization on concurrent comprehension: Evidence from the N400. Collabra: Psychology, 9(1): 88129. doi:10.1525/collabra.88129.

    Abstract

    In conversations, interlocutors concurrently perform two related processes: speech comprehension and speech planning. We investigated effects of speech planning on comprehension using EEG. Dutch speakers listened to sentences that ended with expected or unexpected target words. In addition, a picture was presented two seconds after target onset (Experiment 1) or 50 ms before target onset (Experiment 2). Participants’ task was to name the picture or to stay quiet depending on the picture category. In Experiment 1, we found a strong N400 effect in response to unexpected compared to expected target words. Importantly, this N400 effect was reduced in Experiment 2 compared to Experiment 1. Unexpectedly, the N400 effect was not smaller in the naming compared to categorization condition. This indicates that conceptual preparation or the decision whether to speak (taking place in both task conditions of Experiment 2) rather than processes specific to word planning interfere with comprehension.
  • Indefrey, P., & Levelt, W. J. M. (2004). The spatial and temporal signatures of word production components. Cognition, 92(1-2), 101-144. doi:10.1016/j.cognition.2002.06.001.

    Abstract

    This paper presents the results of a comprehensive meta-analysis of the relevant imaging literature on word production (82 experiments). In addition to the spatial overlap of activated regions, we also analyzed the available data on the time course of activations. The analysis specified regions and time windows of activation for the core processes of word production: lexical selection, phonological code retrieval, syllabification, and phonetic/articulatory preparation. A comparison of the word production results with studies on auditory word/non-word perception and reading showed that the time course of activations in word production is, on the whole, compatible with the temporal constraints that perception processes impose on the production processes they affect in picture/word interference paradigms.
  • Indefrey, P. (1998). De neurale architectuur van taal: Welke hersengebieden zijn betrokken bij het spreken [The neural architecture of language: Which brain regions are involved in speaking]. Neuropraxis, 2(6), 230-237.
  • Indefrey, P., Hellwig, F. M., Herzog, H., Seitz, R. J., & Hagoort, P. (2004). Neural responses to the production and comprehension of syntax in identical utterances. Brain and Language, 89(2), 312-319. doi:10.1016/S0093-934X(03)00352-3.

    Abstract

    Following up on an earlier positron emission tomography (PET) experiment (Indefrey et al., 2001), we used a scene description paradigm to investigate whether a posterior inferior frontal region subserving syntactic encoding for speaking is also involved in syntactic parsing during listening. In the language production part of the experiment, subjects described visually presented scenes using either sentences, sequences of noun phrases, or sequences of syntactically unrelated words. In the language comprehension part of the experiment, subjects were auditorily presented with the same kinds of utterances and judged whether they matched the visual scenes. We were able to replicate the previous finding of a region in caudal Broca's area that is sensitive to the complexity of syntactic encoding in language production. In language comprehension, no hemodynamic activation differences due to syntactic complexity were found. Given that correct performance in the judgment task did not require syntactic processing of the auditory stimuli, the results suggest that the degree to which listeners recruit syntactic processing resources in language comprehension may be a function of the syntactic demands of the task or the stimulus material.
  • Indefrey, P., & Gullberg, M. (2010). Foreword. Language Learning, 60(S2), v. doi:10.1111/j.1467-9922.2010.00596.x.

    Abstract

    The articles in this volume are the result of an invited conference entitled "The Earliest Stages of Language Learning" held at the Max Planck Institute for Psycholinguistics in Nijmegen, The Netherlands, in October 2009.
  • Indefrey, P., Gruber, O., Brown, C. M., Hagoort, P., Posse, S., & Kleinschmidt, A. (1998). Lexicality and not syllable frequency determine lateralized premotor activation during the pronunciation of word-like stimuli: An fMRI study. NeuroImage, 7, S4.
  • Indefrey, P., & Gullberg, M. (2010). The earliest stages of language learning: Introduction. Language Learning, 60(S2), 1-4. doi:10.1111/j.1467-9922.2010.00597.x.
  • Ingason, A., Giegling, I., Cichon, S., Hansen, T., Rasmussen, H. B., Nielsen, J., Jurgens, G., Muglia, P., Hartmann, A. M., Strengman, E., Vasilescu, C., Muhleisen, T. W., Djurovic, S., Melle, I., Lerer, B., Möller, H.-J., Francks, C., Pietilainen, O. P. H., Lonnqvist, J., Suvisaari, J., Tuulio-Henriksson, A., Walshe, M., Vassos, E., Di Forti, M., Murray, R., Bonetto, C., Tosato, S., Cantor, R. M., Rietschel, M., Craddock, N., Owen, M. J., Andreassen, O. A., Nothen, M. M., Peltonen, L., St. Clair, D., Ophoff, R. A., O’Donovan, M. C., Collier, D. A., Werge, T., & Rujescu, D. (2010). A large replication study and meta-analysis in European samples provides further support for association of AHI1 markers with schizophrenia. Human Molecular Genetics, 19(7), 1379-1386. doi:10.1093/hmg/ddq009.

    Abstract

    The Abelson helper integration site 1 (AHI1) gene locus on chromosome 6q23 is among a group of candidate loci for schizophrenia susceptibility that were initially identified by linkage followed by linkage disequilibrium mapping, and subsequent replication of the association in an independent sample. Here, we present results of a replication study of AHI1 locus markers, previously implicated in schizophrenia, in a large European sample (in total 3907 affected and 7429 controls). Furthermore, we perform a meta-analysis of the implicated markers in 4496 affected and 18,920 controls. Both the replication study of new samples and the meta-analysis show evidence for significant overrepresentation of all tested alleles in patients compared with controls (meta-analysis; P = 8.2 x 10^-5 to 1.7 x 10^-3, common OR = 1.09-1.11). The region contains two genes, AHI1 and C6orf217, and both genes, as well as the neighbouring phosphodiesterase 7B (PDE7B), may be considered candidates for involvement in the genetic aetiology of schizophrenia.
  • Ischebeck, A., Indefrey, P., Usui, N., Nose, I., Hellwig, F. M., & Taira, M. (2004). Reading in a regular orthography: An fMRI study investigating the role of visual familiarity. Journal of Cognitive Neuroscience, 16(5), 727-741. doi:10.1162/089892904970708.

    Abstract

    In order to separate the cognitive processes associated with phonological encoding and the use of a visual word form lexicon in reading, it is desirable to compare the processing of words presented in a visually familiar form with words in a visually unfamiliar form. Japanese Kana orthography offers this possibility. Two phonologically equivalent but visually dissimilar syllabaries allow the writing of, for example, foreign loanwords in two ways, only one of which is visually familiar. Familiarly written words, unfamiliarly written words, and pseudowords were presented in both Kana syllabaries (yielding six conditions in total) to participants during an fMRI measurement with a silent articulation task (Experiment 1) and a phonological lexical decision task (Experiment 2) using an event-related design. Consistent over two experimental tasks, the three different stimulus types (familiar, unfamiliar, and pseudoword) were found to activate selectively different brain regions previously associated with phonological encoding and word retrieval or meaning. Compatible with the predictions of the dual-route model for reading, pseudowords and visually unfamiliar words, which have to be read using phonological assembly, caused an increase in brain activity in left inferior frontal regions (BA 44/47), as compared to visually familiar words. Visually familiar and unfamiliar words were found to activate a range of areas associated with lexico-semantic processing more strongly than pseudowords, such as the left and right temporo-parietal region (BA 39/40), a region in the left middle/inferior temporal gyrus (BA 20/21), and the posterior cingulate (BA 31).
  • Jackson, C., & Roberts, L. (2010). Animacy affects the processing of subject–object ambiguities in the second language: Evidence from self-paced reading with German second language learners of Dutch. Applied Psycholinguistics, 31(4), 671-691. doi:10.1017/S0142716410000196.

    Abstract

    The results of a self-paced reading study with German second language (L2) learners of Dutch showed that noun animacy affected the learners' on-line commitments when comprehending relative clauses in their L2. Earlier research has found that German L2 learners of Dutch do not show an on-line preference for subject–object word order in temporarily ambiguous relative clauses when no disambiguating material is available prior to the auxiliary verb. We investigated whether manipulating the animacy of the ambiguous noun phrases would push the learners to make an on-line commitment to either a subject- or object-first analysis. Results showed they performed like Dutch native speakers in that their reading times reflected an interaction between topichood and animacy in the on-line assignment of grammatical roles.
  • Jadoul, Y., & Ravignani, A. (2023). Modelling the emergence of synchrony from decentralized rhythmic interactions in animal communication. Proceedings of the Royal Society B: Biological Sciences, 290(2003). doi:10.1098/rspb.2023.0876.

    Abstract

    To communicate, an animal's strategic timing of rhythmic signals is crucial. Evolutionary, game-theoretical, and dynamical systems models can shed light on the interaction between individuals and the associated costs and benefits of signalling at a specific time. Mathematical models that study rhythmic interactions from a strategic or evolutionary perspective are rare in animal communication research. But new inspiration may come from a recent game theory model of how group synchrony emerges from local interactions of oscillatory neurons. In the study, the authors analyse when the benefit of joint synchronization outweighs the cost of individual neurons sending electrical signals to each other. They postulate there is a benefit for pairs of neurons to fire together and a cost for a neuron to communicate. The resulting model delivers a variant of a classical dynamical system, the Kuramoto model. Here, we present an accessible overview of the Kuramoto model and evolutionary game theory, and of the 'oscillatory neurons' model. We interpret the model's results and discuss the advantages and limitations of using this particular model in the context of animal rhythmic communication. Finally, we sketch potential future directions and discuss the need to further combine evolutionary dynamics, game theory and rhythmic processes in animal communication studies.
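    For readers who want the formal reference point behind this abstract, the classical Kuramoto model mentioned above is conventionally written as below. This is the textbook formulation, added here for orientation; it is not an equation reproduced from the paper.

```latex
% Classical Kuramoto model of N coupled phase oscillators.
% \theta_i is the phase of oscillator i, \omega_i its natural frequency,
% and K the global coupling strength; synchrony emerges as K grows.
\frac{\mathrm{d}\theta_i}{\mathrm{d}t}
  = \omega_i + \frac{K}{N}\sum_{j=1}^{N}\sin\!\left(\theta_j - \theta_i\right),
  \qquad i = 1,\dots,N
```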
  • Jadoul, Y., Düngen, D., & Ravignani, A. (2023). PyGellermann: a Python tool to generate pseudorandom series for human and non-human animal behavioural experiments. BMC Research Notes, 16: 135. doi:10.1186/s13104-023-06396-x.

    Abstract

    Objective

    Researchers in animal cognition, psychophysics, and experimental psychology need to randomise the presentation order of trials in experimental sessions. In many paradigms, for each trial, one of two responses can be correct, and the trials need to be ordered such that the participant’s responses are a fair assessment of their performance. Specifically, in some cases, especially for low numbers of trials, randomised trial orders need to be excluded if they contain simple patterns which a participant could accidentally match and so succeed at the task without learning.
    Results

    We present and distribute a simple Python software package and tool to produce pseudorandom sequences following the Gellermann series. This series has been proposed to pre-empt simple heuristics and avoid inflated performance rates via false positive responses. Our tool allows users to choose the sequence length and outputs a .csv file with newly and randomly generated sequences. This allows behavioural researchers to produce, in a few seconds, a pseudorandom sequence for their specific experiment. PyGellermann is available at https://github.com/YannickJadoul/PyGellermann.
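    The abstract describes the tool's purpose rather than its interface. As a rough sketch of the underlying idea only, the snippet below generates a two-alternative trial order and rejects candidates containing overly simple patterns; it uses a simplified subset of the Gellermann criteria (equal counts of the two alternatives and no runs longer than three) and is not PyGellermann's actual API, which is documented at the GitHub repository linked above.

```python
# Illustrative sketch only (not PyGellermann's API): draw a two-alternative
# trial order and reject candidates that contain overly simple patterns.
import random

def simple_pseudorandom_order(n_trials=20, max_run=3, seed=None):
    """Return a shuffled A/B sequence with equal counts and no run longer than max_run."""
    rng = random.Random(seed)
    while True:
        seq = ["A"] * (n_trials // 2) + ["B"] * (n_trials - n_trials // 2)
        rng.shuffle(seq)  # equal numbers of both alternatives, random order
        # Reject the candidate if any window of max_run + 1 trials is all identical.
        if all(len(set(seq[i:i + max_run + 1])) > 1 for i in range(len(seq) - max_run)):
            return seq

print("".join(simple_pseudorandom_order(seed=42)))
```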
  • Jadoul, Y., De Boer, B., & Ravignani, A. (2024). Parselmouth for bioacoustics: Automated acoustic analysis in Python. Bioacoustics, 33(1), 1-19. doi:10.1080/09524622.2023.2259327.

    Abstract

    Bioacoustics increasingly relies on large datasets and computational methods. The need to batch-process large amounts of data and the increased focus on algorithmic processing require software tools. To optimally assist in a bioacoustician’s workflow, software tools need to be as simple and effective as possible. Five years ago, the Python package Parselmouth was released to provide easy and intuitive access to all functionality in the Praat software. Whereas Praat is principally designed for phonetics and speech processing, plenty of bioacoustics studies have used its advanced acoustic algorithms. Here, we evaluate existing usage of Parselmouth and discuss in detail several studies which used the software library. We argue that Parselmouth has the potential to be used even more in bioacoustics research, and suggest future directions to be pursued with the help of Parselmouth.
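    As a concrete illustration of the batch workflow described above, the sketch below loops over a folder of WAV files and runs Praat's default pitch analysis through Parselmouth. The directory name and the summary statistic are assumptions made for the example; the Sound and to_pitch calls follow Parselmouth's documented interface, but treat this as an illustration rather than code from the paper.

```python
# Minimal sketch of batch acoustic analysis with Parselmouth.
# Assumes a hypothetical local directory "recordings/" containing WAV files.
from pathlib import Path

import parselmouth

for wav in sorted(Path("recordings").glob("*.wav")):
    sound = parselmouth.Sound(str(wav))     # load the recording via Praat's engine
    pitch = sound.to_pitch()                # Praat's default pitch (F0) analysis
    f0 = pitch.selected_array["frequency"]  # F0 per frame; 0.0 marks unvoiced frames
    voiced = f0[f0 > 0]
    mean_f0 = voiced.mean() if voiced.size else float("nan")
    print(f"{wav.name}: mean F0 = {mean_f0:.1f} Hz over {voiced.size} voiced frames")
```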
  • Jago, L. S., Alcock, K., Meints, K., Pine, J. M., & Rowland, C. F. (2023). Language outcomes from the UK-CDI Project: Can risk factors, vocabulary skills and gesture scores in infancy predict later language disorders or concern for language development? Frontiers in Psychology, 14: 1167810. doi:10.3389/fpsyg.2023.1167810.

    Abstract

    At the group level, children exposed to certain health and demographic risk factors, and who have delayed language in early childhood, are more likely to have language problems later in childhood. However, it is unclear whether we can use these risk factors to predict whether an individual child is likely to develop problems with language (e.g., be diagnosed with a developmental language disorder). We tested this in a sample of 146 children who took part in the UK-CDI norming project. When the children were 15–18 months old, 1,210 British parents completed: (a) the UK-CDI (a detailed assessment of vocabulary and gesture use) and (b) the Family Questionnaire (questions about health and demographic risk factors). When the children were between 4 and 6 years, 146 of the same parents completed a short questionnaire that assessed (a) whether children had been diagnosed with a disability that was likely to affect language proficiency (e.g., developmental disability, language disorder, hearing impairment), but (b) also yielded a broader measure: whether the child’s language had raised any concern, either by a parent or professional. Discriminant function analyses were used to assess whether we could use different combinations of 10 risk factors, together with early vocabulary and gesture scores, to identify children (a) who had developed a language-related disability by the age of 4–6 years (20 children, 13.70% of the sample) or (b) for whom concern about language had been expressed (49 children; 33.56%). The overall accuracy of the models and the specificity scores were high, indicating that the measures correctly identified those children without a language-related disability and whose language was not of concern. However, sensitivity scores were low, indicating that the models could not identify those children who were diagnosed with a language-related disability or whose language was of concern. Several exploratory analyses were carried out to analyse these results further. Overall, the results suggest that it is difficult to use parent reports of early risk factors and language in the first 2 years of life to predict which children are likely to be diagnosed with a language-related disability. Possible reasons for this are discussed.

    Additional information

    follow up questionnaire table S1
  • Janse, E., & Klitsch, J. (2004). Auditieve perceptie bij gezonde sprekers en bij sprekers met verworven taalstoornissen [Auditory perception in healthy speakers and in speakers with acquired language disorders]. Afasiologie, 26(1), 2-6.
  • Janse, E., De Bree, E., & Brouwer, S. (2010). Decreased sensitivity to phonemic mismatch in spoken word processing in adult developmental dyslexia. Journal of Psycholinguistic Research, 39(6), 523-539. doi:10.1007/s10936-010-9150-2.

    Abstract

    Initial lexical activation in typical populations is a direct reflection of the goodness of fit between the presented stimulus and the intended target. In this study, lexical activation was investigated upon presentation of polysyllabic pseudowords (such as procodile for crocodile) for the atypical population of dyslexic adults to see to what extent mismatching phonemic information affects lexical activation in the face of overwhelming support for one specific lexical candidate. Results of an auditory lexical decision task showed that sensitivity to phonemic mismatch was less in the dyslexic population, compared to the respective control group. However, the dyslexic participants were outperformed by their controls only for word-initial mismatches. It is argued that a subtle speech decoding deficit affects lexical activation levels and makes spoken word processing less robust against distortion.
  • Janse, E. (2005). Neighbourhood density effects in auditory nonword processing in aphasia. Brain and Language, 95, 24-25. doi:10.1016/j.bandl.2005.07.027.
  • Janse, E. (2010). Spoken word processing and the effect of phonemic mismatch in aphasia. Aphasiology, 24(1), 3-27. doi:10.1080/02687030802339997.

    Abstract

    Background:
    There is evidence that, unlike in typical populations, initial lexical activation upon hearing spoken words in aphasic patients is not a direct reflection of the goodness of fit between the presented stimulus and the intended target. Earlier studies have mainly used short monosyllabic target words. Short words are relatively difficult to recognise because they are not highly redundant: changing one phoneme will often result in a (similar-sounding) different word.

    Aims:
    The present study aimed to investigate sensitivity of the lexical recognition system in aphasia. The focus was on longer words that contain more redundancy, to investigate whether aphasic adults might be impaired in deactivation of strongly activated lexical candidates. This was done by studying lexical activation upon presentation of spoken polysyllabic pseudowords (such as procodile) to see to what extent mismatching phonemic information leads to deactivation in the face of overwhelming support for one specific lexical candidate.

    Methods & Procedures:
    Speeded auditory lexical decision was used to investigate response time and accuracy to pseudowords with a word-initial or word-final phonemic mismatch in 21 aphasic patients and in an age-matched control group.

    Outcomes & Results:
    Results of an auditory lexical decision task showed that aphasic participants were less sensitive to phonemic mismatch if there was strong evidence for one particular lexical candidate, compared to the control group. Classifications of patients as Broca's vs Wernicke's or as fluent vs non-fluent did not reveal differences in sensitivity to mismatch between aphasia types. There was no reliable relationship between measures of auditory verbal short-term memory and lexical decision performance.

    Conclusions:
    It is argued that the aphasic results can best be viewed as lexical “overactivation” and that a verbal short-term memory account is less appropriate.
  • Janse, E. (2004). Word perception in fast speech: Artificially time-compressed vs. naturally produced fast speech. Speech Communication, 42, 155-173. doi:10.1016/j.specom.2003.07.001.

    Abstract

    Natural fast speech differs from normal-rate speech with respect to its temporal pattern. Previous results showed that word intelligibility of heavily artificially time-compressed speech could not be improved by making its temporal pattern more similar to that of natural fast speech. This might have been due to the extrapolation of timing rules for natural fast speech to rates that are much faster than can be attained by human speakers. The present study investigates whether, at a speech rate that human speakers can attain, artificially time-compressed speech is easier to process if its timing pattern is similar to that of naturally produced fast speech. Our first experiment suggests, however, that word processing speed was slowed down, relative to linear compression. In a second experiment, word processing of artificially time-compressed speech was compared with processing of naturally produced fast speech. Even when naturally produced fast speech is perfectly intelligible, its less careful articulation, combined with the changed timing pattern, slows down processing, relative to linearly time-compressed speech. Furthermore, listeners preferred artificially time-compressed speech over naturally produced fast speech. These results suggest that linearly time-compressed speech has both a temporal and a segmental advantage over natural fast speech.
  • Jansen, M. G., Zwiers, M. P., Marques, J. P., Chan, K.-S., Amelink, J., Altgassen, M., Oosterman, J. M., & Norris, D. G. (2024). The Advanced BRain Imaging on ageing and Memory (ABRIM) data collection: Study protocol and rationale. PLOS ONE, 19(6): e0306006. doi:10.1371/journal.pone.0306006.

    Abstract

    To understand the neurocognitive mechanisms that underlie heterogeneity in cognitive ageing, recent scientific efforts have led to a growing public availability of imaging cohort data. The Advanced BRain Imaging on ageing and Memory (ABRIM) project aims to add to these existing datasets by taking an adult lifespan approach to provide a cross-sectional, normative database with a particular focus on connectivity, myelinization and iron content of the brain in concurrence with cognitive functioning, mechanisms of reserve, and sleep-wake rhythms. ABRIM freely shares MRI and behavioural data from 295 participants between 18–80 years, stratified by age decade and sex (median age 52, IQR 36–66, 53.20% females). The ABRIM MRI collection consists of both the raw and pre-processed structural and functional MRI data to facilitate data usage among both expert and non-expert users. The ABRIM behavioural collection includes measures of cognitive functioning (i.e., global cognition, processing speed, executive functions, and memory), proxy measures of cognitive reserve (e.g., educational attainment, verbal intelligence, and occupational complexity), and various self-reported questionnaires (e.g., on depressive symptoms, pain, and the use of memory strategies in daily life and during a memory task). In a sub-sample (n = 120), we recorded sleep-wake rhythms using an actigraphy device (Actiwatch 2, Philips Respironics) for a period of 7 consecutive days. Here, we provide an in-depth description of our study protocol, pre-processing pipelines, and data availability. ABRIM provides a cross-sectional database on healthy participants throughout the adult lifespan, including numerous parameters relevant to improve our understanding of cognitive ageing. Therefore, ABRIM enables researchers to model the advanced imaging parameters and cognitive topologies as a function of age, identify the normal range of values of such parameters, and to further investigate the diverse mechanisms of reserve and resilience.
  • Jansma, B. M., & Schiller, N. O. (2004). Monitoring syllable boundaries during speech production. Brain and Language, 90(1-3), 311-317. doi:10.1016/S0093-934X(03)00443-7.

    Abstract

    This study investigated the encoding of syllable boundary information during speech production in Dutch. Based on Levelt's model of phonological encoding, we hypothesized segments and syllable boundaries to be encoded in an incremental way. In a self-monitoring experiment, decisions about the syllable affiliation (first or second syllable) of a pre-specified consonant, which was the third phoneme in a word, were required (e.g., ka.No 'canoe' vs. kaN.sel 'pulpit'; capital letters indicate pivotal consonants, dots mark syllable boundaries). First syllable responses were faster than second syllable responses, indicating the incremental nature of segmental encoding and syllabification during speech production planning. The results of the experiment are discussed in the context of Levelt's model of phonological encoding.
  • Janssen, D. P., Roelofs, A., & Levelt, W. J. M. (2004). Stem complexity and inflectional encoding in language production. Journal of Psycholinguistic Research, 33(5), 365-381. doi:10.1023/B:JOPR.0000039546.60121.a8.

    Abstract

    Three experiments are reported that examined whether stem complexity plays a role in inflecting polymorphemic words in language production. Experiment 1 showed that preparation effects for words with polymorphemic stems are larger when they are produced among words with constant inflectional structures compared to words with variable inflectional structures and simple stems. This replicates earlier findings for words with monomorphemic stems (Janssen et al., 2002). Experiments 2 and 3 showed that when inflectional structure is held constant, the preparation effects are equally large with simple and compound stems, and with compound and complex adjectival stems. These results indicate that inflectional encoding is blind to the complexity of the stem, which suggests that specific inflectional rather than generic morphological frames guide the generation of inflected forms in speaking words.
  • Janzen, G., & Van Turennout, M. (2004). Selective neural representation of objects relevant for navigation. Nature Neuroscience, 7(6), 673-677. doi:10.1038/nn1257.

    Abstract

    As people find their way through their environment, objects at navigationally relevant locations can serve as crucial landmarks. The parahippocampal gyrus has previously been shown to be involved in object and scene recognition. In the present study, we investigated the neural representation of navigationally relevant locations. Healthy human adults viewed a route through a virtual museum with objects placed at intersections (decision points) or at simple turns (non-decision points). Event-related functional magnetic resonance imaging (fMRI) data were acquired during subsequent recognition of the objects in isolation. Neural activity in the parahippocampal gyrus reflected the navigational relevance of an object's location in the museum. Parahippocampal responses were selectively increased for objects that occurred at decision points, independent of attentional demands. This increase occurred for forgotten as well as remembered objects, showing implicit retrieval of navigational information. The automatic storage of relevant object location in the parahippocampal gyrus provides a part of the neural mechanism underlying successful navigation.
  • Janzen, G., & Hawlik, M. (2005). Orientierung im Raum: Befunde zu Entscheidungspunkten [Orientation in space: Findings on decision points]. Zeitschrift für Psychologie, 213, 179-186.
  • Jara-Ettinger, J., & Rubio-Fernandez, P. (2024). Demonstratives as attention tools: Evidence of mentalistic representations in language. Proceedings of the National Academy of Sciences of the United States of America, 121(32): e2402068121. doi:10.1073/pnas.2402068121.

    Abstract

    Linguistic communication is an intrinsically social activity that enables us to share thoughts across minds. Many complex social uses of language can be captured by domain-general representations of other minds (i.e., mentalistic representations) that externally modulate linguistic meaning through Gricean reasoning. However, here we show that representations of others’ attention are embedded within language itself. Across ten languages, we show that demonstratives—basic grammatical words (e.g., “this”/“that”) which are evolutionarily ancient, learned early in life, and documented in all known languages—are intrinsic attention tools. Beyond their spatial meanings, demonstratives encode both joint attention and the direction in which the listener must turn to establish it. Crucially, the frequency of the spatial and attentional uses of demonstratives varies across languages, suggesting that both spatial and mentalistic representations are part of their conventional meaning. Using computational modeling, we show that mentalistic representations of others’ attention are internally encoded in demonstratives, with their effect further boosted by Gricean reasoning. Yet, speakers are largely unaware of this, incorrectly reporting that they primarily capture spatial representations. Our findings show that representations of other people’s cognitive states (namely, their attention) are embedded in language and suggest that the most basic building blocks of the linguistic system crucially rely on social cognition.

    Additional information

    pnas.2402068121.sapp.pdf
  • Järvikivi, J., Vainio, M., & Aalto, D. (2010). Real-time correlates of phonological quantity reveal unity of tonal and non-tonal languages. PLoS ONE, 5(9): e12603. doi:10.1371/journal.pone.0012603.

    Abstract

    Discrete phonological phenomena form our conscious experience of language: continuous changes in pitch appear as distinct tones to the speakers of tone languages, whereas the speakers of quantity languages experience duration categorically. The categorical nature of our linguistic experience is directly reflected in the traditionally clear-cut linguistic classification of languages into tonal or non-tonal. However, some evidence suggests that duration and pitch are fundamentally interconnected and co-vary in signaling word meaning in non-tonal languages as well. We show that pitch information affects real-time language processing in a (non-tonal) quantity language. The results suggest that there is no unidirectional causal link from a genetically-based perceptual sensitivity towards pitch information to the appearance of a tone language. They further suggest that the contrastive categories tone and quantity may be based on simultaneously co-varying properties of the speech signal and the processing system, even though the conscious experience of the speakers may highlight only one discrete variable at a time.
  • Jesse, A., & Massaro, D. W. (2010). Seeing a singer helps comprehension of the song's lyrics. Psychonomic Bulletin & Review, 17, 323-328.

    Abstract

    When listening to speech, we often benefit when also seeing the speaker talk. If this benefit is not domain-specific for speech, then the recognition of sung lyrics should likewise benefit from seeing the singer. Nevertheless, previous research failed to obtain a substantial improvement in that domain. Our study shows that this failure was not due to inherent differences between singing and speaking but rather to less informative visual presentations. By presenting a professional singer, we found a substantial audiovisual benefit of about 35% improvement for lyrics recognition. This benefit was further robust across participants, phrases, and repetition of the test materials. Our results provide the first evidence that lyrics recognition just like speech and music perception is a multimodal process.
  • Jesse, A., & Massaro, D. W. (2010). The temporal distribution of information in audiovisual spoken-word identification. Attention, Perception & Psychophysics, 72(1), 209-225. doi:10.3758/APP.72.1.209.

    Abstract

    In the present study, we examined the distribution and processing of information over time in auditory and visual speech as it is used in unimodal and bimodal word recognition. English consonant-vowel-consonant words representing all possible initial consonants were presented as auditory, visual, or audiovisual speech in a gating task. The distribution of information over time varied across and within features. Visual speech information was generally fully available early during the phoneme, whereas auditory information was still being accumulated. An audiovisual benefit was therefore already found early during the phoneme. The nature of the audiovisual recognition benefit changed, however, as more of the phoneme was presented: more features benefited at short gates than at longer ones. Visual speech information therefore plays a more important role early during the phoneme than later on. The results of the study show the complex interplay of information across modalities and time, an interplay that is essential in determining the time course of audiovisual spoken-word recognition.
  • Jin, H., Wang, Q., Yang, Y.-F., Zhang, H., Gao, M., Jin, S., Chen, Y., Xu, T., Zheng, Y.-R., Chen, J., Xiao, Q., Yang, J., Wang, X., Geng, H., Ge, J., Wang, W.-W., Chen, X., Zhang, L., Zuo, X.-N., & Chuan-Peng, H. (2023). The Chinese Open Science Network (COSN): Building an open science community from scratch. Advances in Methods and Practices in Psychological Science, 6(1): 25152459221144986. doi:10.1177/25152459221144986.

    Abstract

    Open Science is becoming a mainstream scientific ideology in psychology and related fields. However, researchers, especially early-career researchers (ECRs) in developing countries, face significant hurdles in engaging in Open Science and moving it forward. In China, various societal and cultural factors discourage ECRs from participating in Open Science, such as the lack of dedicated communication channels and the norm of modesty. To make the voice of Open Science heard by Chinese-speaking ECRs and scholars at large, the Chinese Open Science Network (COSN) was initiated in 2016. With grassroots orientation, diversity, and inclusivity as its core values, COSN has grown from a small Open Science interest group into a network recognized both in the Chinese-speaking research community and in the international Open Science community. So far, COSN has organized three in-person workshops, 12 tutorials, 48 talks, and 55 journal club sessions, and has translated 15 Open Science-related articles and blog posts from English into Chinese. Currently, COSN's main social media account (i.e., the WeChat Official Account) has more than 23,000 subscribers, and more than 1,000 researchers/students actively participate in its discussions on Open Science. In this article, we share our experience in building such a network to encourage ECRs in developing countries to start their own Open Science initiatives and engage in the global Open Science movement. We foresee COSN collaborating with other local and international networks to further accelerate the Open Science movement.
  • Jodzio, A., Piai, V., Verhagen, L., Cameron, I., & Indefrey, P. (2023). Validity of chronometric TMS for probing the time-course of word production: A modified replication. Cerebral Cortex, 33(12), 7816-7829. doi:10.1093/cercor/bhad081.

    Abstract

    In the present study, we used chronometric TMS to probe the time-course of 3 brain regions during a picture naming task. The left inferior frontal gyrus, left posterior middle temporal gyrus, and left posterior superior temporal gyrus were each stimulated in 1 of 5 time-windows (225, 300, 375, 450, and 525 ms) from picture onset. We found posterior temporal areas to be causally involved in picture naming in earlier time-windows, whereas all 3 regions appear to be involved in the later time-windows. However, chronometric TMS produces nonspecific effects that may impact behavior, and, furthermore, the time-course of any given process is a product of both the involved processing stages and individual variation in the duration of each stage. We therefore extend previous work in the field by both accounting for individual variation in naming latencies and directly testing for nonspecific effects of TMS. Our findings reveal that both factors influence behavioral outcomes at the group level, underlining the importance of accounting for individual variation in naming latencies, especially for late processing stages closer to articulation, and of recognizing the presence of nonspecific effects of TMS. The paper advances key considerations and avenues for future work using chronometric TMS to study overt production.
  • Johnson, E. K. (2005). English-learning infants' representations of word-forms with iambic stress. Infancy, 7(1), 95-105. doi:10.1207/s15327078in0701_8.

    Abstract

    Retaining detailed representations of unstressed syllables is a logical prerequisite for infants' use of probabilistic phonotactics to segment iambic words from fluent speech. The head-turn preference procedure was used to investigate the nature of English-learners' representations of iambic word onsets. Fifty-four 10.5-month-olds were familiarized with passages containing the nonsense iambic word forms ginome and tupong. Following familiarization, infants were tested on either familiar (ginome and tupong) or near-familiar (pinome and bupong) versus unfamiliar (kidar and mafoos) words. Infants in the familiar test group (familiar vs. unfamiliar) oriented significantly longer to familiar than to unfamiliar test items, whereas infants in the near-familiar test group (near-familiar vs. unfamiliar) oriented equally long to near-familiar and unfamiliar test items. Our results provide evidence that infants retain fairly detailed representations of unstressed syllables and therefore support the hypothesis that infants use phonotactic cues to find words in fluent speech.
  • Johnson, E. K., & Tyler, M. (2010). Testing the limits of statistical learning for word segmentation. Developmental Science, 13, 339-345. doi:10.1111/j.1467-7687.2009.00886.x.

    Abstract

    Past research has demonstrated that infants can rapidly extract syllable distribution information from an artificial language and use this knowledge to infer likely word boundaries in speech. However, artificial languages are extremely simplified with respect to natural language. In this study, we ask whether infants’ ability to track transitional probabilities between syllables in an artificial language can scale up to the challenge of natural language. We do so by testing both 5.5- and 8-month-olds’ ability to segment an artificial language containing four words of uniform length (all CVCV) or four words of varying length (two CVCV, two CVCVCV). The transitional probability cues to word boundaries were held equal across the two languages. Both age groups segmented the language containing words of uniform length, demonstrating that even 5.5-month-olds are extremely sensitive to the conditional probabilities in their environment. However, neither age group succeeded in segmenting the language containing words of varying length, despite the fact that the transitional probability cues defining word boundaries were equally strong in the two languages. We conclude that infants’ statistical learning abilities may not be as robust as earlier studies have suggested.
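
    For readers unfamiliar with the measure, the following is a minimal sketch of how forward transitional probabilities between syllables can be computed. The syllable stream and the two "words" are invented for illustration and are not the study's stimulus materials.

```python
from collections import Counter

def transitional_probabilities(syllables):
    """Forward transitional probability: TP(y | x) = count(x -> y) / count(x)."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    # Only syllables that have a successor count toward the denominator.
    first_counts = Counter(syllables[:-1])
    return {(x, y): n / first_counts[x] for (x, y), n in pair_counts.items()}

# Hypothetical stream built from two made-up "words": tu-pi and go-la-bu.
stream = "tu pi go la bu go la bu tu pi tu pi go la bu tu pi".split()
tps = transitional_probabilities(stream)

print(round(tps[("tu", "pi")], 2))  # 1.0  -- within-word transition
print(round(tps[("pi", "go")], 2))  # 0.67 -- transition across a word boundary
```

    Within-word transitions come out high (here 1.0), while transitions spanning a word boundary come out lower; this dip in transitional probability is the distributional cue to word boundaries that the study manipulates.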
  • Jolink, A. (2005). Finite linking in normally developing Dutch children and children with specific language impairment. Zeitschrift für Literaturwissenschaft und Linguistik, 140, 61-81.
  • Jordan, F., & Dunn, M. (2010). Kin term diversity is the result of multilevel, historical processes [Comment on Doug Jones]. Behavioral and Brain Sciences, 33, 388. doi:10.1017/S0140525X10001962.

    Abstract

    Explanations in the domain of kinship can be sought on several different levels: Jones addresses online processing, as well as issues of origins and innateness. We argue that his framework can more usefully be applied at the levels of developmental and historical change, the latter especially. A phylogenetic approach to the diversity of kinship terminologies is most urgently required.
  • Jordanoska, I., Kocher, A., & Bendezú-Araujo, R. (2023). Introduction special issue: Marking the truth: A cross-linguistic approach to verum. Zeitschrift für Sprachwissenschaft, 42(3), 429-442. doi:10.1515/zfs-2023-2012.

    Abstract

    This special issue focuses on the theoretical and empirical underpinnings of truth-marking. The names that have been used to refer to this phenomenon include, among others, counter-assertive focus, polar(ity) focus, verum focus, emphatic polarity or simply verum. This terminological variety is suggestive of the wide range of ideas and conceptions that characterizes this research field. This collection aims to get closer to the core of what truly constitutes verum. We want to expand the empirical base and determine the common and diverging properties of truth-marking in the languages of the world. The objective is to set a theoretical and empirical baseline for future research on verum and related phenomena.
  • Jordanoska, I., Kocher, A., & Bendezú-Araujo, R. (Eds.). (2023). Marking the truth: A cross-linguistic approach to verum [Special Issue]. Zeitschrift für Sprachwissenschaft, 42(3). Retrieved from https://www.degruyter.com/journal/key/zfsw/42/3/html.
  • Jordens, P. (2004). Systematiek en dynamiek bij de verwerving van Finietheid [Systematicity and dynamics in the acquisition of finiteness]. Toegepaste Taalwetenschap in Artikelen, 71, 9-22.

    Abstract

    In early Dutch learner varieties, there is no evidence of finiteness being a functional category. There is no verb-second (V2): no correlation between inflectional morphology and movement. Initially, learners express the illocutive function of finiteness through the use of illocutive markers, with the non-use of an illocutive marker expressing the default illocutive function of assertion. Illocutive markers function as adjuncts with scope over the predicate. They later become re-analysed as functional elements. The driving force is the acquisition of the auxiliary verbs that occur with past participles. This leads to a reanalysis of illocutive markers as two separate elements: an auxiliary verb and a scope adverb. The (modal) auxiliary carries the illocutive function. Lexical verb-argument structure (including the external argument) occurs within the domain of the auxiliary verb. The predicate, as the focus constituent, occurs within the domain of a scope adverb. This reanalysis establishes a position for the external argument within the domain of AUX. The acquisition of AUX leads to the acquisition of a (hierarchical) structure with a complement as a constituent representing an underlying verb-argument structure, a predicate as the domain of elements that are in focus, and an external (specifier) position as a landing site for elements with topic function.
  • Kakimoto, N., Wongratwanich, P., Shimamoto, H., Kitisubkanchana, J., Tsujimoto, T., Shimabukuro, K., Verdonschot, R. G., Hasegawa, Y., & Murakami, S. (2024). Comparison of T2 values of the displaced unilateral disc and retrodiscal tissue of temporomandibular joints and their implications. Scientific Reports, 14: 1705. doi:10.1038/s41598-024-52092-6.

    Abstract

    Unilateral anterior disc displacement (uADD) has been shown to affect the contralateral joints qualitatively. This study aims to assess the quantitative T2 values of the articular disc and retrodiscal tissue of patients with uADD at 1.5 Tesla (T). The study included 65 uADD patients and 17 volunteers. The regions of interest on T2 maps were evaluated. The affected joints demonstrated significantly higher articular disc T2 values (31.5 ± 3.8 ms) than those of the unaffected joints (28.9 ± 4.5 ms) (P < 0.001). For retrodiscal tissue, T2 values of the unaffected (37.8 ± 5.8 ms) and affected joints (41.6 ± 7.1 ms) were significantly longer than those of normal volunteers (34.4 ± 3.2 ms) (P < 0.001). Furthermore, uADD without reduction (WOR) joints (43.3 ± 6.8 ms) showed statistically higher T2 values than the unaffected joints of both uADD with reduction (WR) (33.9 ± 3.8 ms) and uADDWOR (38.9 ± 5.8 ms), and the affected joints of uADDWR (35.8 ± 4.4 ms). The mean T2 value of the unaffected joints of uADDWOR was significantly longer than that of healthy volunteers (P < 0.001). These results provided quantitative evidence for the influence of the affected joints on the contralateral joints.
  • Kałamała, P., Chuderski, A., Szewczyk, J., Senderecka, M., & Wodniecka, Z. (2023). Bilingualism caught in a net: A new approach to understanding the complexity of bilingual experience. Journal of Experimental Psychology: General, 152(1), 157-174. doi:10.1037/xge0001263.

    Abstract

    The growing importance of research on bilingualism in psychology and neuroscience motivates the need for a psychometric model that can be used to understand and quantify this phenomenon. This research is the first to meet this need. We reanalyzed two data sets (N = 171 and N = 112) from relatively young adult language-unbalanced bilinguals and asked whether bilingualism is best described by a factor structure or by a network structure. The factor and network models were established on one data set and then validated on the other in a fully confirmatory manner. The network model provided the better fit to the data. This implies that bilingualism should be conceptualized as an emergent phenomenon arising from direct and idiosyncratic dependencies among the history of language acquisition, diverse language skills, and language-use practices. These dependencies can be reduced neither to a single universal quotient nor to more general factors. Additional in-depth network analyses showed that the subjective perception of proficiency, along with language entropy and language mixing, were the most central indices of bilingualism, indicating that these measures can be especially sensitive to variation in the overall bilingual experience. Overall, this work highlights the great potential of psychometric network modeling for gaining a more accurate description and understanding of complex (psycho)linguistic and cognitive phenomena.
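
    As a rough illustration of the contrast between the two model families, the sketch below fits both to synthetic data with scikit-learn; the indicator set-up and estimator choices are assumptions for illustration, not the authors' analysis. A factor model summarizes the indicators with a few latent variables, whereas a network model estimates direct pairwise (partial-correlation) dependencies among the indicators themselves.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.covariance import GraphicalLassoCV

rng = np.random.default_rng(0)

# Synthetic stand-ins for bilingual-experience indicators (invented for illustration),
# e.g. age of acquisition, self-rated proficiency, language entropy, language mixing.
n_obs, n_ind = 300, 6
X = rng.normal(size=(n_obs, n_ind))
X[:, 1] += 0.8 * X[:, 0]                  # build in a few direct dependencies
X[:, 3] += 0.6 * X[:, 2]
X[:, 4] += 0.5 * X[:, 1] + 0.5 * X[:, 3]

# Factor view: reduce the indicators to two latent dimensions.
fa = FactorAnalysis(n_components=2).fit(X)
print("factor loadings:\n", np.round(fa.components_, 2))

# Network view: sparse inverse covariance; nonzero off-diagonal entries correspond
# to direct (conditional) dependencies between pairs of indicators.
gl = GraphicalLassoCV().fit(X)
prec = gl.precision_
partial_corr = -prec / np.sqrt(np.outer(np.diag(prec), np.diag(prec)))
np.fill_diagonal(partial_corr, 1.0)
print("partial-correlation network:\n", np.round(partial_corr, 2))
```

    In the network view, an indicator's centrality can be read off the strength of its direct connections, which is the sense in which proficiency, language entropy, and language mixing are described as the most central indices.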
  • Karaca, F., Brouwer, S., Unsworth, S., & Huettig, F. (2024). Morphosyntactic predictive processing in adult heritage speakers: Effects of cue availability and spoken and written language experience. Language, Cognition and Neuroscience, 39(1), 118-135. doi:10.1080/23273798.2023.2254424.

    Abstract

    We investigated the prediction skills of adult heritage speakers and the role of written and spoken language experience in predictive processing. Using visual world eye-tracking, we focused on the predictive use of case-marking cues in verb-medial and verb-final sentences in Turkish with adult Turkish heritage speakers (N = 25) and Turkish monolingual speakers (N = 24). Heritage speakers predicted in verb-medial sentences (when verb-semantic and case-marking cues were available), but not in verb-final sentences (when only case-marking cues were available), while monolinguals predicted in both. Prediction skills of heritage speakers were modulated by their spoken language experience in Turkish and their written language experience in both languages. Overall, these results strongly suggest that verb-semantic information is needed to scaffold the use of morphosyntactic cues for prediction in heritage speakers. The findings also support the notion that both spoken and written language experience play an important role in predictive spoken language processing.
