Publications

  • Abbondanza, F., Dale, P. S., Wang, C. A., Hayiou‐Thomas, M. E., Toseeb, U., Koomar, T. S., Wigg, K. G., Feng, Y., Price, K. M., Kerr, E. N., Guger, S. L., Lovett, M. W., Strug, L. J., Van Bergen, E., Dolan, C. V., Tomblin, J. B., Moll, K., Schulte‐Körne, G., Neuhoff, N., Warnke, A., Fisher, S. E., Barr, C. L., Michaelson, J. J., Boomsma, D. I., Snowling, M. J., Hulme, C., Whitehouse, A. J. O., Pennell, C. E., Newbury, D. F., Stein, J., Talcott, J. B., Bishop, D. V. M., & Paracchini, S. (2023). Language and reading impairments are associated with increased prevalence of non‐right‐handedness. Child Development, 94(4), 970-984. doi:10.1111/cdev.13914.

    Abstract

    Handedness has been studied for association with language-related disorders because of its link with language hemispheric dominance. No clear pattern has emerged, possibly because of small samples, publication bias, and heterogeneous criteria across studies. Non-right-handedness (NRH) frequency was assessed in N = 2503 cases with reading and/or language impairment and N = 4316 sex-matched controls identified from 10 distinct cohorts (age range 6–19 years old; European ethnicity) using criteria set a priori. A meta-analysis (Ncases = 1994) showed an elevated NRH rate in individuals with language/reading impairment compared with controls (OR = 1.21, CI = 1.06–1.39, p = .01). The association between reading/language impairments and NRH could result from shared pathways underlying brain lateralization, handedness, and cognitive functions.

    Additional information

    supplementary information
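The meta-analytic statistic above (OR = 1.21, CI = 1.06–1.39) is an odds ratio with a confidence interval computed on the log-odds scale. As a minimal illustration of that standard calculation (using made-up counts, not the study's data):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a/b = non-right-handed / right-handed cases,
    c/d = non-right-handed / right-handed controls."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # standard error of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: 30/100 cases vs. 20/100 controls are non-right-handed.
or_, lo, hi = odds_ratio_ci(30, 70, 20, 80)
print(f"OR = {or_:.2f}, 95% CI = {lo:.2f}-{hi:.2f}")  # OR = 1.71, 95% CI = 0.89-3.29
```

A confidence interval that excludes 1, as in the paper, indicates an association at the chosen significance level.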
  • Abdel Rahman, R., Van Turennout, M., & Levelt, W. J. M. (2003). Phonological encoding is not contingent on semantic feature retrieval: An electrophysiological study on object naming. Journal of Experimental Psychology: Learning, Memory, and Cognition, 29(5), 850-860. doi:10.1037/0278-7393.29.5.850.

    Abstract

    In the present study, the authors examined with event-related brain potentials whether phonological encoding in picture naming is mediated by basic semantic feature retrieval or proceeds independently. In a manual 2-choice go/no-go task the choice response depended on a semantic classification (animal vs. object) and the execution decision was contingent on a classification of name phonology (vowel vs. consonant). The introduction of a semantic task mixing procedure allowed for selectively manipulating the speed of semantic feature retrieval. Serial and parallel models were tested on the basis of their differential predictions for the effect of this manipulation on the lateralized readiness potential and N200 component. The findings indicate that phonological code retrieval is not strictly contingent on prior basic semantic feature processing.
  • Abdel Rahman, R., & Sommer, W. (2003). Does phonological encoding in speech production always follow the retrieval of semantic knowledge?: Electrophysiological evidence for parallel processing. Cognitive Brain Research, 16(3), 372-382. doi:10.1016/S0926-6410(02)00305-1.

    Abstract

    In this article a new approach to the distinction between serial/contingent and parallel/independent processing in the human cognitive system is applied to semantic knowledge retrieval and phonological encoding of the word form in picture naming. In two-choice go/nogo tasks pictures of objects were manually classified on the basis of semantic and phonological information. An additional manipulation of the duration of the faster and presumably mediating process (semantic retrieval) allowed us to derive differential predictions from the two alternative models. These predictions were tested with two event-related brain potentials (ERPs), the lateralized readiness potential (LRP) and the N200. The findings indicate that phonological encoding can proceed in parallel with the retrieval of semantic features. A suggestion is made as to how these findings can be accommodated within models of speech production.
  • Acerbi, A., Van Leeuwen, E. J. C., Haun, D. B. M., & Tennie, C. (2016). Conformity cannot be identified based on population-level signatures. Scientific Reports, 6: 36068. doi:10.1038/srep36068.

    Abstract

    Conformist transmission, defined as a disproportionate likelihood to copy the majority, is considered a potent mechanism underlying the emergence and stabilization of cultural diversity. However, ambiguity within and across disciplines remains as to how to identify conformist transmission empirically. In most studies, a population-level outcome has been taken as the benchmark to evidence conformist transmission: a sigmoidal relation between individuals’ probability to copy the majority and the proportional majority size. Using an individual-based model, we show that, under ecologically plausible conditions, this sigmoidal relation can also be detected without equipping individuals with a conformist bias. Situations in which individuals copy randomly from a fixed subset of demonstrators in the population, or in which they have a preference for one of the possible variants, yield similar sigmoidal patterns as a conformist bias would. Our findings warrant a revisiting of studies that base their conformist transmission conclusions solely on the sigmoidal curve. More generally, our results indicate that population-level outcomes interpreted as conformist transmission could potentially be explained by other individual-level strategies, and that more empirical support is needed to prove the existence of an individual-level conformist bias in humans and other animals.
  • Adams, H. H. H., Hibar, D. P., Chouraki, V., Stein, J. L., Nyquist, P., Renteria, M. E., Trompet, S., Arias-Vasquez, A., Seshadri, S., Desrivières, S., Beecham, A. H., Jahanshad, N., Wittfeld, K., Van der Lee, S. J., Abramovic, L., Alhusaini, S., Amin, N., Andersson, M., Arfanakis, K. A., Aribisala, B. S., Armstrong, N. J., Athanasiu, L., Axelsson, T., Beiser, A., Bernard, M., Bis, J. C., Blanken, L. M. E., Blanton, S. H., Bohlken, M. M., Boks, M. P., Bralten, J., Brickman, A. M., Carmichael, O., Chakravarty, M. M., Chauhan, G., Chen, Q., Ching, C. R. K., Cuellar-Partida, G., Den Braber, A., Doan, N. T., Ehrlich, S., Filippi, I., Ge, T., Giddaluru, S., Goldman, A. L., Gottesman, R. F., Greven, C. U., Grimm, O., Griswold, M. E., Guadalupe, T., Hass, J., Haukvik, U. K., Hilal, S., Hofer, E., Höhn, D., Holmes, A. J., Hoogman, M., Janowitz, D., Jia, T., Karbalai, N., Kasperaviciute, D., Kim, S., Klein, M., Krämer, B., Lee, P. H., Liao, J., Liewald, D. C. M., Lopez, L. M., Luciano, M., Macare, C., Marquand, A., Matarin, M., Mather, K. A., Mattheisen, M., Mazoyer, B., McKay, D. R., McWhirter, R., Milaneschi, Y., Muetzel, R. L., Muñoz Maniega, S., Nho, K., Nugent, A. C., Olde Loohuis, L. M., Oosterlaan, J., Papmeyer, M., Pappa, I., Pirpamer, L., Pudas, S., Pütz, B., Rajan, K. B., Ramasamy, A., Richards, J. S., Risacher, S. L., Roiz-Santiañez, R., Rommelse, N., Rose, E. J., Royle, N. A., Rundek, T., Sämann, P. G., Satizabal, C. L., Schmaal, L., Schork, A. J., Shen, L., Shin, J., Shumskaya, E., Smith, A. V., Sprooten, E., Strike, L. T., Teumer, A., Thomson, R., Tordesillas-Gutierrez, D., Toro, R., Trabzuni, D., Vaidya, D., Van der Grond, J., Van der Meer, D., Van Donkelaar, M. M. J., Van Eijk, K. R., Van Erp, T. G. M., Van Rooij, D., Walton, E., Westlye, L. T., Whelan, C. D., Windham, B. G., Winkler, A. M., Woldehawariat, G., Wolf, C., Wolfers, T., Xu, B., Yanek, L. R., Yang, J., Zijdenbos, A., Zwiers, M. P., Agartz, I., Aggarwal, N. T., Almasy, L., Ames, D., Amouyel, P., Andreassen, O. A., Arepalli, S., Assareh, A. A., Barral, S., Bastin, M. E., Becker, J. T., Becker, D. M., Bennett, D. A., Blangero, J., Van Bokhoven, H., Boomsma, D. I., Brodaty, H., Brouwer, R. M., Brunner, H. G., Buckner, R. L., Buitelaar, J. K., Bulayeva, K. B., Cahn, W., Calhoun, V. D., Cannon, D. M., Cavalleri, G. L., Chen, C., Cheng, C.-Y., Cichon, S., Cookson, M. R., Corvin, A., Crespo-Facorro, B., Curran, J. E., Czisch, M., Dale, A. M., Davies, G. E., De Geus, E. J. C., De Jager, P. L., De Zubicaray, G. I., Delanty, N., Depondt, C., DeStefano, A., Dillman, A., Djurovic, S., Donohoe, G., Drevets, W. C., Duggirala, R., Dyer, T. D., Erk, S., Espeseth, T., Evans, D. A., Fedko, I. O., Fernández, G., Ferrucci, L., Fisher, S. E., Fleischman, D. A., Ford, I., Foroud, T. M., Fox, P. T., Francks, C., Fukunaga, M., Gibbs, J. R., Glahn, D. C., Gollub, R. L., Göring, H. H. H., Grabe, H. J., Green, R. C., Gruber, O., Guelfi, S., Hansell, N. K., Hardy, J., Hartman, C. A., Hashimoto, R., Hegenscheid, K., Heinz, A., Le Hellard, S., Hernandez, D. G., Heslenfeld, D. J., Ho, B.-C., Hoekstra, P. J., Hoffmann, W., Hofman, A., Holsboer, F., Homuth, G., Hosten, N., Hottenga, J.-J., Hulshoff Pol, H. E., Ikeda, M., Ikram, M. K., Jack Jr, C. R., Jenkinson, M., Johnson, R., Jönsson, E. G., Jukema, J. W., Kahn, R. S., Kanai, R., Kloszewska, I., Knopman, D. S., Kochunov, P., Kwok, J. B., Launer, L. J., Lawrie, S. M., Lemaître, H., Liu, X., Longo, D. L., Longstreth Jr, W. T., Lopez, O. L., Lovestone, S., Martinez, O., Martinot, J.-L., Mattay, V. S., McDonald, C., McIntosh, A. M., McMahon, F. J., McMahon, K. L., Mecocci, P., Melle, I., Meyer-Lindenberg, A., Mohnke, S., Montgomery, G. W., Morris, D. W., Mosley, T. H., Mühleisen, T. W., Müller-Myhsok, B., Nalls, M. A., Nauck, M., Nichols, T. E., Niessen, W. J., Nöthen, M. M., Nyberg, L., Ohi, K., Olvera, R. L., Ophoff, R. A., Pandolfo, M., Paus, T., Pausova, Z., Penninx, B. W. J. H., Pike, G. B., Potkin, S. G., Psaty, B. M., Reppermund, S., Rietschel, M., Roffman, J. L., Romanczuk-Seiferth, N., Rotter, J. I., Ryten, M., Sacco, R. L., Sachdev, P. S., Saykin, A. J., Schmidt, R., Schofield, P. R., Sigursson, S., Simmons, A., Singleton, A., Sisodiya, S. M., Smith, C., Smoller, J. W., Soininen, H., Srikanth, V., Steen, V. M., Stott, D. J., Sussmann, J. E., Thalamuthu, A., Tiemeier, H., Toga, A. W., Traynor, B., Troncoso, J., Turner, J. A., Tzourio, C., Uitterlinden, A. G., Valdés Hernández, M. C., Van der Brug, M., Van der Lugt, A., Van der Wee, N. J. A., Van Duijn, C. M., Van Haren, N. E. M., Van 't Ent, D., Van Tol, M.-J., Vardarajan, B. N., Veltman, D. J., Vernooij, M. W., Völzke, H., Walter, H., Wardlaw, J. M., Wassink, T. H., Weale, M. E., Weinberger, D. R., Weiner, M. W., Wen, W., Westman, E., White, T., Wong, T. Y., Wright, C. B., Zielke, R. H., Zonderman, A. B., the Alzheimer's Disease Neuroimaging Initiative, EPIGEN, IMAGEN, SYS, Deary, I. J., DeCarli, C., Schmidt, H., Martin, N. G., De Craen, A. J. M., Wright, M. J., Gudnason, V., Schumann, G., Fornage, M., Franke, B., Debette, S., Medland, S. E., Ikram, M. A., & Thompson, P. M. (2016). Novel genetic loci underlying human intracranial volume identified through genome-wide association. Nature Neuroscience, 19, 1569-1582. doi:10.1038/nn.4398.

    Abstract

    Intracranial volume reflects the maximally attained brain size during development, and remains stable with loss of tissue in late life. It is highly heritable, but the underlying genes remain largely undetermined. In a genome-wide association study of 32,438 adults, we discovered five previously unknown loci for intracranial volume and confirmed two known signals. Four of the loci were also associated with adult human stature, but these remained associated with intracranial volume after adjusting for height. We found a high genetic correlation with child head circumference (ρgenetic = 0.748), which indicates a similar genetic background and allowed us to identify four additional loci through meta-analysis (Ncombined = 37,345). Variants for intracranial volume were also related to childhood and adult cognitive function, and Parkinson’s disease, and were enriched near genes involved in growth pathways, including PI3K-AKT signaling. These findings identify the biological underpinnings of intracranial volume and provide genetic support for theories on brain reserve and brain overgrowth.
  • Aebi, M., Van Donkelaar, M. M. J., Poelmans, G., Buitelaar, J. K., Sonuga-Barke, E. J., Stringaris, A., Consortium, I., Faraone, S. V., Franke, B., Steinhausen, H. C., & van Hulzen, K. J. (2016). Gene-set and multivariate genome-wide association analysis of oppositional defiant behavior subtypes in attention-deficit/hyperactivity disorder. American Journal of Medical Genetics Part B: Neuropsychiatric Genetics, 171(5), 573-88. doi:10.1002/ajmg.b.32346.

    Abstract

    Oppositional defiant disorder (ODD) is a frequent psychiatric disorder seen in children and adolescents with attention-deficit-hyperactivity disorder (ADHD). ODD is also a common antecedent to both affective disorders and aggressive behaviors. Although the heritability of ODD has been estimated to be around 0.60, there has been little research into the molecular genetics of ODD. The present study examined the association of irritable and defiant/vindictive dimensions and categorical subtypes of ODD (based on latent class analyses) with previously described specific polymorphisms (DRD4 exon3 VNTR, 5-HTTLPR, and seven OXTR SNPs) as well as with dopamine, serotonin, and oxytocin genes and pathways in a clinical sample of children and adolescents with ADHD. In addition, we performed a multivariate genome-wide association study (GWAS) of the aforementioned ODD dimensions and subtypes. Apart from adjusting the analyses for age and sex, we controlled for "parental ability to cope with disruptive behavior." None of the hypothesis-driven analyses revealed a significant association with ODD dimensions and subtypes. Inadequate parenting behavior was significantly associated with all ODD dimensions and subtypes, most strongly with defiant/vindictive behaviors. In addition, the GWAS did not result in genome-wide significant findings but bioinformatics and literature analyses revealed that the proteins encoded by 28 of the 53 top-ranked genes functionally interact in a molecular landscape centered around Beta-catenin signaling and involved in the regulation of neurite outgrowth. Our findings provide new insights into the molecular basis of ODD and inform future genetic studies of oppositional behavior.
  • Akker, E., & Cutler, A. (2003). Prosodic cues to semantic structure in native and nonnative listening. Bilingualism: Language and Cognition, 6(2), 81-96. doi:10.1017/S1366728903001056.

    Abstract

    Listeners efficiently exploit sentence prosody to direct attention to words bearing sentence accent. This effect has been explained as a search for focus, furthering rapid apprehension of semantic structure. A first experiment supported this explanation: English listeners detected phoneme targets in sentences more rapidly when the target-bearing words were in accented position or in focussed position, but the two effects interacted, consistent with the claim that the effects serve a common cause. In a second experiment a similar asymmetry was observed with Dutch listeners and Dutch sentences. In a third and a fourth experiment, proficient Dutch users of English heard English sentences; here, however, the two effects did not interact. The results suggest that less efficient mapping of prosody to semantics may be one way in which nonnative listening fails to equal native listening.
  • Alario, F.-X., Schiller, N. O., Domoto-Reilly, K., & Caramazza, A. (2003). The role of phonological and orthographic information in lexical selection. Brain and Language, 84(3), 372-398. doi:10.1016/S0093-934X(02)00556-4.

    Abstract

    We report the performance of two patients with lexico-semantic deficits following left MCA CVA. Both patients produce similar numbers of semantic paraphasias in naming tasks, but presented one crucial difference: grapheme-to-phoneme and phoneme-to-grapheme conversion procedures were available only to one of them. We investigated the impact of this availability on the process of lexical selection during word production. The patient for whom conversion procedures were not operational produced semantic errors in transcoding tasks such as reading and writing to dictation; furthermore, when asked to name a given picture in multiple output modalities—e.g., to say the name of a picture and immediately after to write it down—he produced lexically inconsistent responses. By contrast, the patient for whom conversion procedures were available did not produce semantic errors in transcoding tasks and did not produce lexically inconsistent responses in multiple picture-naming tasks. These observations are interpreted in the context of the summation hypothesis (Hillis & Caramazza, 1991), according to which the activation of lexical entries for production would be made on the basis of semantic information and, when available, on the basis of form-specific information. The implementation of this hypothesis in models of lexical access is discussed in detail.
  • Alhama, R. G., Rowland, C. F., & Kidd, E. (2023). How does linguistic context influence word learning? Journal of Child Language, 50(6), 1374-1393. doi:10.1017/S0305000923000302.

    Abstract

    While there are well-known demonstrations that children can use distributional information to acquire multiple components of language, the underpinnings of these achievements are unclear. In the current paper, we investigate the potential pre-requisites for a distributional learning model that can explain how children learn their first words. We review existing literature and then present the results of a series of computational simulations with Vector Space Models, a type of distributional semantic model used in Computational Linguistics, which we evaluate against vocabulary acquisition data from children. We focus on nouns and verbs, and we find that: (i) a model with flexibility to adjust for the frequency of events provides a better fit to the human data, (ii) the influence of context words is very local, especially for nouns, and (iii) words that share more contexts with other words are harder to learn.
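The Vector Space Models evaluated above represent each word by the words it co-occurs with and compare words via cosine similarity. A minimal count-based sketch of that idea (the toy corpus and window size are illustrative assumptions, not the paper's setup):

```python
import math
from collections import defaultdict

def cooccurrence_vectors(corpus, window=2):
    """Count, for each word, the words appearing within +/-window positions."""
    vecs = defaultdict(lambda: defaultdict(int))
    for sentence in corpus:
        for i, word in enumerate(sentence):
            for j in range(max(0, i - window), min(len(sentence), i + window + 1)):
                if j != i:
                    vecs[word][sentence[j]] += 1
    return vecs

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[k] * v[k] for k in set(u) & set(v))
    norm = lambda w: math.sqrt(sum(x * x for x in w.values()))
    return dot / (norm(u) * norm(v))

corpus = [
    ["the", "dog", "chased", "the", "ball"],
    ["the", "cat", "chased", "the", "ball"],
    ["the", "dog", "ate", "the", "food"],
]
vecs = cooccurrence_vectors(corpus)
# "dog" and "cat" occur in near-identical contexts, so their vectors align.
print(cosine(vecs["dog"], vecs["cat"]))
```

Words that share many contexts end up with highly similar vectors; this context overlap is the property at stake in the paper's third finding (words sharing more contexts with other words were harder to learn).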
  • Ambridge, B., Bidgood, A., Pine, J. M., & Rowland, C. F. (2016). Is Passive Syntax Semantically Constrained? Evidence From Adult Grammaticality Judgment and Comprehension Studies. Cognitive Science, 40, 1435-1459. doi:10.1111/cogs.12277.

    Abstract

    To explain the phenomenon that certain English verbs resist passivization (e.g., *£5 was cost by the book), Pinker (1989) proposed a semantic constraint on the passive in the adult grammar: The greater the extent to which a verb denotes an action where a patient is affected or acted upon, the greater the extent to which it is compatible with the passive. However, a number of comprehension and production priming studies have cast doubt upon this claim, finding no difference between highly affecting agent-patient/theme-experiencer passives (e.g., Wendy was kicked/frightened by Bob) and non-actional experiencer theme passives (e.g., Wendy was heard by Bob). The present study provides evidence that a semantic constraint is psychologically real, and is readily observed when more fine-grained independent and dependent measures are used (i.e., participant ratings of verb semantics, graded grammaticality judgments, and reaction time in a forced-choice picture-matching comprehension task). We conclude that a semantic constraint on the passive must be incorporated into accounts of the adult grammar.

    Additional information

    cogs12277-sup-0001-DataS1-S2.docx
  • Ameka, F. K. (1999). [Review of M. E. Kropp Dakubu: Korle meets the sea: a sociolinguistic history of Accra]. Bulletin of the School of Oriental and African Studies, 62, 198-199. doi:10.1017/S0041977X0001836X.
  • Ameka, F. K. (1992). Interjections: The universal yet neglected part of speech. Journal of Pragmatics, 18(2/3), 101-118. doi:10.1016/0378-2166(92)90048-G.
  • Ameka, F. K. (1999). Partir c'est mourir un peu: Universal and culture specific features of leave taking. RASK International Journal of Language and Communication, 9/10, 257-283.
  • Ameka, F. K. (1999). Spatial information packaging in Ewe and Likpe: A comparative perspective. Frankfurter Afrikanistische Blätter, 11, 7-34.
  • Ameka, F. K. (1992). The meaning of phatic and conative interjections. Journal of Pragmatics, 18(2/3), 245-271. doi:10.1016/0378-2166(92)90054-F.

    Abstract

    The purpose of this paper is to investigate the meanings of the members of two subclasses of interjections in Ewe: the conative/volitive which are directed at an auditor, and the phatic which are used in the maintenance of social and communicative contact. It is demonstrated that interjections like other linguistic signs have meanings which can be rigorously stated. In addition, the paper explores the differences and similarities between the semantic structures of interjections on one hand and formulaic words on the other. This is done through a comparison of the semantics and pragmatics of an interjection and a formulaic word which are used for welcoming people in Ewe. It is contended that formulaic words are speech acts qua speech acts while interjections are not fully fledged speech acts because they lack illocutionary dictum in their semantic structure.
  • Ameka, F. K. (1999). The typology and semantics of complex nominal duplication in Ewe. Anthropological Linguistics, 41, 75-106.
  • Anichini, M., de Reus, K., Hersh, T. A., Valente, D., Salazar-Casals, A., Berry, C., Keller, P. E., & Ravignani, A. (2023). Measuring rhythms of vocal interactions: A proof of principle in harbour seal pups. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 378(1875): 20210477. doi:10.1098/rstb.2021.0477.

    Abstract

    Rhythmic patterns in interactive contexts characterize human behaviours such as conversational turn-taking. These timed patterns are also present in other animals, and often described as rhythm. Understanding fine-grained temporal adjustments in interaction requires complementary quantitative methodologies. Here, we showcase how vocal interactive rhythmicity in a non-human animal can be quantified using a multi-method approach. We record vocal interactions in harbour seal pups (Phoca vitulina) under controlled conditions. We analyse these data by combining analytical approaches, namely categorical rhythm analysis, circular statistics and time series analyses. We test whether pups' vocal rhythmicity varies across behavioural contexts depending on the absence or presence of a calling partner. Four research questions illustrate which analytical approaches are complementary versus orthogonal. For our data, circular statistics and categorical rhythms suggest that a calling partner affects a pup's call timing. Granger causality suggests that pups predictively adjust their call timing when interacting with a real partner. Lastly, the ADaptation and Anticipation Model estimates statistical parameters for a potential mechanism of temporal adaptation and anticipation. Our analytical complementary approach constitutes a proof of concept; it shows feasibility in applying typically unrelated techniques to seals to quantify vocal rhythmic interactivity across behavioural contexts.

    Additional information

    supplemental information
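One of the analytical approaches combined above, circular statistics, treats each call time as an angle on a cycle and summarizes temporal alignment with the mean resultant vector length R (R ≈ 1 when calls cluster at one phase, R ≈ 0 when they are spread uniformly). A minimal sketch of that statistic, not the authors' pipeline:

```python
import cmath
import math

def resultant_length(phases):
    """Mean resultant vector length R of phase angles (radians):
    0 = uniformly spread timing, 1 = perfectly aligned timing."""
    mean_vec = sum(cmath.exp(1j * p) for p in phases) / len(phases)
    return abs(mean_vec)

# Hypothetical call phases relative to a partner's calling cycle:
aligned = [-0.1, 0.0, 0.1]                        # calls bunched near phase 0
spread = [0.0, 2 * math.pi / 3, 4 * math.pi / 3]  # calls evenly spread

print(resultant_length(aligned))  # close to 1
print(resultant_length(spread))   # close to 0
```

Comparing R across behavioural contexts (calling partner present vs. absent) is one way to test whether a partner affects a pup's call timing.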
  • Arana, S., Pesnot Lerousseau, J., & Hagoort, P. (2023). Deep learning models to study sentence comprehension in the human brain. Language, Cognition and Neuroscience. Advance online publication. doi:10.1080/23273798.2023.2198245.

    Abstract

    Recent artificial neural networks that process natural language achieve unprecedented performance in tasks requiring sentence-level understanding. As such, they could be interesting models of the integration of linguistic information in the human brain. We review works that compare these artificial language models with human brain activity and we assess the extent to which this approach has improved our understanding of the neural processes involved in natural language comprehension. Two main results emerge. First, the neural representation of word meaning aligns with the context-dependent, dense word vectors used by the artificial neural networks. Second, the processing hierarchy that emerges within artificial neural networks broadly matches the brain, but is surprisingly inconsistent across studies. We discuss current challenges in establishing artificial neural networks as process models of natural language comprehension. We suggest exploiting the highly structured representational geometry of artificial neural networks when mapping representations to brain data.

    Additional information

    link to preprint
  • Araújo, S., Faísca, L., Reis, A., Marques, J. F., & Petersson, K. M. (2016). Visual naming deficits in dyslexia: An ERP investigation of different processing domains. Neuropsychologia, 91, 61-76. doi:10.1016/j.neuropsychologia.2016.07.007.

    Abstract

    Naming speed deficits are well documented in developmental dyslexia, expressed by slower naming times and more errors in response to familiar items. Here we used event-related potentials (ERPs) to examine at what processing level the deficits in dyslexia emerge during a discrete-naming task. Dyslexic and skilled adult control readers performed a primed object-naming task, in which the relationship between the prime and the target was manipulated along perceptual, semantic and phonological dimensions. A 3×2 design that crossed Relationship Type (Visual, Phonemic Onset, and Semantic) with Relatedness (Related and Unrelated) was used. Attenuated N/P190 (indexing early visual processing) and N300 (indexing late visual processing) components were observed for pictures preceded by perceptually related (vs. unrelated) primes in the control but not in the dyslexic group. These findings suggest suboptimal processing in early stages of object processing in dyslexia, when integration and mapping of perceptual information to a more form-specific percept in memory take place. On the other hand, both groups showed an N400 effect associated with semantically related pictures (vs. unrelated), taken to reflect intact integration of semantic similarities in both dyslexic and control readers. We also found an electrophysiological effect of phonological priming in the N400 range (an attenuated N400 to objects preceded by phonemically related primes vs. unrelated ones), which showed a more widespread distribution and was more pronounced over the right hemisphere in the dyslexics. Topographic differences between groups might have originated from a word-form encoding process with different characteristics in dyslexics compared to control readers.
  • Araujo, S., Narang, V., Misra, D., Lohagun, N., Khan, O., Singh, A., Mishra, R. K., Hervais-Adelman, A., & Huettig, F. (2023). A literacy-related color-specific deficit in rapid automatized naming: Evidence from neurotypical completely illiterate and literate adults. Journal of Experimental Psychology: General, 152(8), 2403-2409. doi:10.1037/xge0001376.

    Abstract

    There is a robust positive relationship between reading skills and the time to name aloud an array of letters, digits, objects, or colors as quickly as possible. A convincing and complete explanation for the direction and locus of this association remains, however, elusive. In this study we investigated rapid automatized naming (RAN) of everyday objects and basic color patches in neurotypical illiterate and literate adults. Literacy acquisition and education enhanced RAN performance for both conceptual categories, but this advantage was much larger for (abstract) colors than everyday objects. This result suggests that (i) literacy/education may be causal for serial rapid naming ability of non-alphanumeric items, and (ii) differences in the lexical quality of conceptual representations can underlie the reading-related differential RAN performance.

    Additional information

    supplementary text
  • Aravena-Bravo, P., Cristia, A., Garcia, R., Kotera, H., Nicolas, R. K., Laranjo, R., Arokoyo, B. E., Benavides-Varela, S., Benders, T., Boll-Avetisyan, N., Cychosz, M., Ben, R. D., Diop, Y., Durán-Urzúa, C., Havron, N., Manalili, M., Narasimhan, B., Omane, P. O., Rowland, C. F., Kolberg, L. S., Ssemata, A. S., Styles, S. J., Troncoso-Acosta, B., & Woon, F. T. (2023). Towards diversifying early language development research: The first truly global international summer/winter school on language acquisition (/L+/) 2021. Journal of Cognition and Development. Advance online publication. doi:10.1080/15248372.2023.2231083.

    Abstract

    With a long-term aim of empowering researchers everywhere to contribute to work on language development, we organized the First Truly Global /L+/ International Summer/Winter School on Language Acquisition, a free 5-day virtual school for early career researchers. In this paper, we describe the school, our experience organizing it, and lessons learned. The school had a diverse organizer team, composed of 26 researchers (17 from underrepresented areas: Subsaharan Africa, South and Southeast Asia, and Central and South America); and a diverse volunteer team, with a total of 95 volunteers from 35 different countries, nearly half from underrepresented areas. This helped worldwide promotion of the school, leading to 958 registrations from 88 different countries, with 300 registrants (based in 63 countries, 80% from underrepresented areas) selected to participate in the synchronous aspects of the event. The school employed asynchronous elements (pre-recorded lectures, which were close-captioned) and synchronous elements (e.g., discussions to place the recorded lectures into participants' context; networking events) across three time zones. A post-school questionnaire revealed that 99% of participants enjoyed taking part in the school. Notwithstanding these positive quantitative outcomes, qualitative comments suggested we fell short in several areas, including the geographic diversity among lecturers and greater customization of contents to the participants’ contexts. Although much remains to be done to promote inclusivity in linguistic research, we hope our school will contribute to empowering researchers to investigate and publish on language acquisition in their home languages, to eventually result in more representative theories and empirical generalizations.

    Additional information

    https://osf.io/fbnda
  • Asaridou, S. S., Takashima, A., Dediu, D., Hagoort, P., & McQueen, J. M. (2016). Repetition suppression in the left inferior frontal gyrus predicts tone learning performance. Cerebral Cortex, 26(6), 2728-2742. doi:10.1093/cercor/bhv126.

    Abstract

    Do individuals differ in how efficiently they process non-native sounds? To what extent do these differences relate to individual variability in sound-learning aptitude? We addressed these questions by assessing the sound-learning abilities of Dutch native speakers as they were trained on non-native tone contrasts. We used fMRI repetition suppression to the non-native tones to measure participants' neuronal processing efficiency before and after training. Although all participants improved in tone identification with training, there was large individual variability in learning performance. A repetition suppression effect to tone was found in the bilateral inferior frontal gyri (IFGs) before training. No whole-brain effect was found after training; a region-of-interest analysis, however, showed that, after training, repetition suppression to tone in the left IFG correlated positively with learning. That is, individuals who were better in learning the non-native tones showed larger repetition suppression in this area. Crucially, this was true even before training. These findings add to existing evidence that the left IFG plays an important role in sound learning and indicate that individual differences in learning aptitude stem from differences in the neuronal efficiency with which non-native sounds are processed.
  • Aschrafi, A., Verheijen, J., Gordebeke, P. M., Olde Loohuis, N. F., Menting, K., Jager, A., Palkovits, M., Geenen, B., Kos, A., Martens, G. J. M., Glennon, J. C., Kaplan, B. B., Gaszner, B., & Kozicz, T. (2016). MicroRNA-326 acts as a molecular switch in the regulation of midbrain urocortin 1 expression. Journal of Psychiatry & Neuroscience, 41(5), 342-354. doi:10.1503/jpn.150154.

    Abstract

    Background: Altered levels of urocortin 1 (Ucn1) in the centrally projecting Edinger-Westphal nucleus (EWcp) of depressed suicide attempters or completers mediate the brain’s response to stress, while the mechanism regulating Ucn1 expression is unknown. We tested the hypothesis that microRNAs (miRNAs), which are vital fine-tuners of gene expression during the brain’s response to stress, have the capacity to modulate Ucn1 expression. Methods: Computational analysis revealed that the Ucn1 3’ untranslated region contained a conserved binding site for miR-326. We examined miR-326 and Ucn1 levels in the EWcp of depressed suicide completers. In addition, we evaluated miR-326 and Ucn1 levels in the serum and the EWcp of a chronic variable mild stress (CVMS) rat model of behavioural despair and after recovery from CVMS, respectively. Gain and loss of miR-326 function experiments examined the regulation of Ucn1 by this miRNA in cultured midbrain neurons. Results: We found reduced miR-326 levels concomitant with elevated Ucn1 levels in the EWcp of depressed suicide completers as well as in the EWcp of CVMS rats. In CVMS rats fully recovered from stress, both serum and EWcp miR-326 levels rebounded to nonstressed levels. While downregulation of miR-326 levels in primary midbrain neurons enhanced Ucn1 expression levels, miR-326 overexpression selectively reduced the levels of this neuropeptide. Limitations: This study lacked experiments showing that in vivo alteration of miR-326 levels alleviate depression-like behaviours. We show only correlative data for miR-325 and cocaine- and amphetamine-regulated transcript levels in the EWcp. Conclusion: We identified miR-326 dysregulation in depressed suicide completers and characterized this miRNA as an upstream regulator of the Ucn1 neuropeptide expression in midbrain neurons. © 2016 Joule Inc. or its licensors.
  • Assmann, M., Büring, D., Jordanoska, I., & Prüller, M. (2023). Towards a theory of morphosyntactic focus marking. Natural Language & Linguistic Theory. doi:10.1007/s11049-023-09567-4.

    Abstract

    Based on six detailed case studies of languages in which focus is marked morphosyntactically, we propose a novel formal theory of focus marking, which can capture these as well as the familiar English-type prosodic focus marking. Special attention is paid to the patterns of focus syncretism, that is, when different size and/or location of focus are indistinguishably realized by the same form.

    The key ingredients to our approach are that complex constituents (not just words) may be directly focally marked, and that the choice of focal marking is governed by blocking.
  • Backus, A., Schoffelen, J.-M., Szebényi, S., Hanslmayr, S., & Doeller, C. (2016). Hippocampal-prefrontal theta oscillations support memory integration. Current Biology, 26, 450-457. doi:10.1016/j.cub.2015.12.048.

    Abstract

    Integration of separate memories forms the basis of inferential reasoning - an essential cognitive process that enables complex behavior. Considerable evidence suggests that both hippocampus and medial prefrontal cortex (mPFC) play a crucial role in memory integration. Although previous studies indicate that theta oscillations facilitate memory processes, the electrophysiological mechanisms underlying memory integration remain elusive. To bridge this gap, we recorded magnetoencephalography data while participants performed an inference task and employed novel source reconstruction techniques to estimate oscillatory signals from the hippocampus. We found that hippocampal theta power during encoding predicts subsequent memory integration. Moreover, we observed increased theta coherence between hippocampus and mPFC. Our results suggest that integrated memory representations arise through hippocampal theta oscillations, possibly reflecting dynamic switching between encoding and retrieval states, and facilitating communication with mPFC. These findings have important implications for our understanding of memory-based decision making and knowledge acquisition.
  • Bak, T., Long, M., Vega-Mendoza, M., & Sorace, A. (2016). Novelty, Challenge, and Practice: The Impact of Intensive Language Learning on Attentional Functions. PLoS One, 11(4): e0153485. doi:10.1371/journal.pone.0153485.

    Abstract

    We examined 33 participants of a one-week Scottish Gaelic course and compared them to 34 controls: 16 active controls who participated in courses of comparable duration and intensity but not involving foreign language learning and 18 passive controls who followed their usual routines. Participants completed auditory tests of attentional inhibition and switching. There was no difference between the groups in any measures at the beginning of the course. At the end of the course, a significant improvement in attention switching was observed in the language group (p < .001) but not the control group (p = .127), independent of the age of participants (18–78 years). Half of the language participants (n = 17) were retested nine months after their course. All those who practiced Gaelic 5 hours or more per week improved from their baseline performance. In contrast, those who practiced 4 hours or fewer showed an inconsistent pattern: some improved while others stayed the same or deteriorated. Our results suggest that even a short period of intensive language learning can modulate attentional functions and that all age groups can benefit from this effect. Moreover, these short-term effects can be maintained through continuous practice.
  • Barak, L., Harmon, Z., Feldman, N. H., Edwards, J., & Shafto, P. (2023). When children's production deviates from observed input: Modeling the variable production of the English past tense. Cognitive Science, 47(8): e13328. doi:10.1111/cogs.13328.

    Abstract

    As children gradually master grammatical rules, they often go through a period of producing form-meaning associations that were not observed in the input. For example, 2- to 3-year-old English-learning children use the bare form of verbs in settings that require obligatory past tense meaning while already starting to produce the grammatical –ed inflection. While many studies have focused on overgeneralization errors, fewer studies have attempted to explain the root of this earlier stage of rule acquisition. In this work, we use computational modeling to replicate children's production behavior prior to the generalization of past tense production in English. We illustrate how seemingly erroneous productions emerge in a model, without being licensed in the grammar and despite the model aiming at conforming to grammatical forms. Our results show that bare form productions stem from a tension between two factors: (1) trying to produce a less frequent meaning (the past tense) and (2) being unable to restrict the production of frequent forms (the bare form) as learning progresses. Like children, our model goes through a stage of bare form production and then converges on adult-like production of the regular past tense, showing that these different stages can be accounted for through a single learning mechanism.
  • Baranova, J., & Dingemanse, M. (2016). Reasons for requests. Discourse Studies, 18(6), 641-675. doi:10.1177/1461445616667154.

    Abstract

    Reasons play an important role in social interaction. We study reasons-giving in the context of request sequences in Russian. By contrasting request sequences with and without reasons, we are able to shed light on the interactional work people do when they provide reasons or ask for them. In a systematic collection of request sequences in everyday conversation (N = 158), we find reasons in a variety of sequential positions, showing the various points at which participants may orient to the need for a reason. Reasons may be left implicit (as in many minimal requests that are readily complied with), or they can be made explicit. Participants may make reasons explicit either as part of the initial formulation of a request or in an interactionally contingent way. Across sequential positions, we show that reasons for requests recurrently deal with three possible issues: (1) providing information when a request is underspecified, (2) managing relationships between the requester and requestee and (3) explicating ancillary actions implemented by a request. By spelling out information normally left to presuppositions and implicatures, reasons make requests more understandable and help participants to navigate the social landscape of asking assistance from others.
  • Barendse, M. T., Ligtvoet, R., Timmerman, M. E., & Oort, F. J. (2016). Model fit after pairwise maximum likelihood. Frontiers in Psychology, 7: 528. doi:10.3389/fpsyg.2016.00528.

    Abstract

    Maximum likelihood factor analysis of discrete data within the structural equation modeling framework rests on the assumption that the observed discrete responses are manifestations of underlying continuous scores that are normally distributed. As maximizing the likelihood of multivariate response patterns is computationally very intensive, the sum of the log–likelihoods of the bivariate response patterns is maximized instead. Little is yet known about how to assess model fit when the analysis is based on such a pairwise maximum likelihood (PML) of two–way contingency tables. We propose new fit criteria for the PML method and conduct a simulation study to evaluate their performance in model selection. With large sample sizes (500 or more), PML performs as well as the robust weighted least squares analysis of polychoric correlations.
  • Barendse, M. T., & Rosseel, Y. (2023). Multilevel SEM with random slopes in discrete data using the pairwise maximum likelihood. British Journal of Mathematical and Statistical Psychology, 76(2), 327-352. doi:10.1111/bmsp.12294.

    Abstract

    Pairwise maximum likelihood (PML) estimation is a promising method for multilevel models with discrete responses. Multilevel models take into account that units within a cluster tend to be more alike than units from different clusters. The pairwise likelihood is then obtained as the product of bivariate likelihoods for all within-cluster pairs of units and items. In this study, we investigate the PML estimation method with computationally intensive multilevel random intercept and random slope structural equation models (SEM) in discrete data. In pursuing this, we first reconsider the general ‘wide format’ (WF) approach for SEM models and then extend the WF approach with random slopes. In a small simulation study, we determine the accuracy and efficiency of the PML estimation method by varying the sample size (250, 500, 1000, 2000), response scales (two-point, four-point), and data-generating model (mediation model with three random slopes, factor model with one and two random slopes). Overall, results show that the PML estimation method is capable of estimating computationally intensive random intercept and random slopes multilevel models in the SEM framework with discrete data and many (six or more) latent variables with satisfactory accuracy and efficiency. However, the condition with 250 clusters combined with a two-point response scale shows more bias.

    Additional information

    figures
  • Barış Demiral, Ş., Gambi, C., Nieuwland, M. S., & Pickering, M. J. (2016). Neural correlates of verbal joint action: ERPs reveal common perception and action systems in a shared-Stroop task. Brain Research, 1649, 79-89. doi:10.1016/j.brainres.2016.08.025.

    Abstract

    Recent social-cognitive research suggests that the anticipation of co-actors' actions influences people's mental representations. However, the precise nature of such representations is still unclear. In this study we investigated verbal joint representations in a delayed Stroop paradigm, where each participant responded to one color after a short delay. Participants either performed the task as a single actor (single-action, Experiment 1), or they performed it together (joint-action, Experiment 2). We investigated effects of co-actors' actions on the ERP components associated with perceptual conflict (Go N2) and response selection (P3b). Compared to single-action, joint-action reduced the N2 amplitude congruency effect when participants had to respond (Go trials), indicating that representing a co-actor's utterance helped to dissociate action codes and attenuated perceptual conflict for the responding participant. Yet, on NoGo trials the centro-parietal P3 (P3b) component amplitude increased for joint-action, suggesting that participants mapped the stimuli onto the co-actor's upcoming response as if it were their own response. We conclude that people represent others' utterances similarly to the way they represent their own utterances, and that shared perception-action codes for self and others can sometimes reduce, rather than enhance, perceptual conflict.
  • Barrios, A., & Garcia, R. (2023). Filipino children’s acquisition of nominal and verbal markers in L1 and L2 Tagalog. Languages, 8(3): 188. doi:10.3390/languages8030188.

    Abstract

    Western Austronesian languages, like Tagalog, have unique, complex voice systems that require the correct combinations of verbal and nominal markers, raising many questions about their learnability. In this article, we review the experimental and observational studies on both the L1 and L2 acquisition of Tagalog. The reviewed studies reveal error patterns that reflect the complex nature of the Tagalog voice system. The main goal of the article is to present a full picture of commission errors in young Filipino children’s expression of causation and agency in Tagalog by describing patterns of nominal marking and voice marking in L1 Tagalog and L2 Tagalog. It also aims to provide an overview of existing research, as well as characterize research on nominal and verbal acquisition, specifically in terms of research problems, data sources, and methodology. Additionally, we discuss the research gaps in at least fifty years’ worth of studies in the area from the 1960’s to the present, as well as ideas for future research to advance the state of the art.
  • Barthel, M., Sauppe, S., Levinson, S. C., & Meyer, A. S. (2016). The timing of utterance planning in task-oriented dialogue: Evidence from a novel list-completion paradigm. Frontiers in Psychology, 7: 1858. doi:10.3389/fpsyg.2016.01858.

    Abstract

    In conversation, interlocutors rarely leave long gaps between turns, suggesting that next speakers begin to plan their turns while listening to the previous speaker. The present experiment used analyses of speech onset latencies and eye-movements in a task-oriented dialogue paradigm to investigate when speakers start planning their response. Adult German participants heard a confederate describe sets of objects in utterances that either ended in a noun (e.g. Ich habe eine Tür und ein Fahrrad (‘I have a door and a bicycle’)) or a verb form (Ich habe eine Tür und ein Fahrrad besorgt (‘I have gotten a door and a bicycle’)), while the presence or absence of the final verb either was or was not predictable from the preceding sentence structure. In response, participants had to name any unnamed objects they could see in their own display in utterances such as Ich habe ein Ei (‘I have an egg’). The main question was when participants started to plan their response. The results are consistent with the view that speakers begin to plan their turn as soon as sufficient information is available to do so, irrespective of further incoming words.
  • Bastiaanse, R., & Ohlerth, A.-K. (2023). Presurgical language mapping: What are we testing? Journal of Personalized Medicine, 13: 376. doi:10.3390/jpm13030376.

    Abstract

    Gliomas are brain tumors infiltrating healthy cortical and subcortical areas that may host cognitive functions, such as language. If these areas are damaged during surgery, the patient might develop word retrieval or articulation problems. For this reason, many glioma patients are operated on awake, while their language functions are tested. For this practice, quite simple tests are used, for example, picture naming. This paper describes the process and timeline of picture naming (noun retrieval) and shows the timeline and localization of the distinguished stages. This is relevant information for presurgical language testing with navigated Transcranial Magnetic Stimulation (nTMS). This novel technique allows us to identify cortical areas involved in the language production process and, thus, guides the neurosurgeon in how to approach and remove the tumor. We argue that not only nouns, but also verbs should be tested, since sentences are built around verbs, and sentences are what we use in daily life. This approach’s relevance is illustrated by two case studies of glioma patients.
  • Bastiaansen, M. C. M., & Hagoort, P. (2003). Event-induced theta responses as a window on the dynamics of memory. Cortex, 39(4-5), 967-972. doi:10.1016/S0010-9452(08)70873-6.

    Abstract

    An important, but often ignored distinction in the analysis of EEG signals is that between evoked activity and induced activity. Whereas evoked activity reflects the summation of transient post-synaptic potentials triggered by an event, induced activity, which is mainly oscillatory in nature, is thought to reflect changes in parameters controlling dynamic interactions within and between brain structures. We hypothesize that induced activity may yield information about the dynamics of cell assembly formation, activation and subsequent uncoupling, which may play a prominent role in different types of memory operations. We then describe a number of analysis tools that can be used to study the reactivity of induced rhythmic activity, both in terms of amplitude changes and of phase variability.

    We briefly discuss how alpha, gamma and theta rhythms are thought to be generated, paying special attention to the hypothesis that the theta rhythm reflects dynamic interactions between the hippocampal system and the neocortex. This hypothesis would imply that studying the reactivity of scalp-recorded theta may provide a window on the contribution of the hippocampus to memory functions.

    We review studies investigating the reactivity of scalp-recorded theta in paradigms engaging episodic memory, spatial memory and working memory. In addition, we review studies that relate theta reactivity to processes at the interface of memory and language. Despite many unknowns, the experimental evidence largely supports the hypothesis that theta activity plays a functional role in cell assembly formation, a process which may constitute the neural basis of memory formation and retrieval. The available data provide only highly indirect support for the hypothesis that scalp-recorded theta yields information about hippocampal functioning. It is concluded that studying induced rhythmic activity holds promise as an additional important way to study brain function.
  • Bastiaansen, M. C. M., Böcker, K. B. E., Cluitmans, P. J. M., & Brunia, C. H. M. (1999). Event-related desynchronization related to the anticipation of a stimulus providing knowledge of results. Clinical Neurophysiology, 110, 250-260.

    Abstract

    In the present paper, event-related desynchronization (ERD) in the alpha and beta frequency bands is quantified in order to investigate the processes related to the anticipation of a knowledge of results (KR) stimulus. In a time estimation task, 10 subjects were instructed to press a button 4 s after the presentation of an auditory stimulus. Two seconds after the response they received auditory or visual feedback on the timing of their response. Preceding the button press, a centrally maximal ERD is found. Preceding the visual KR stimulus, an ERD is present that has an occipital maximum. Contrary to expectation, preceding the auditory KR stimulus there are no signs of a modality-specific ERD. Results are related to a thalamo-cortical gating model which predicts a correspondence between negative slow potentials and ERD during motor preparation and stimulus anticipation.
  • Bastos, A. M., & Schoffelen, J.-M. (2016). A tutorial review of functional connectivity analysis methods and their interpretational pitfalls. Frontiers in Systems Neuroscience, 9: 175. doi:10.3389/fnsys.2015.00175.

    Abstract

    Oscillatory neuronal activity may provide a mechanism for dynamic network coordination. Rhythmic neuronal interactions can be quantified using multiple metrics, each with their own advantages and disadvantages. This tutorial will review and summarize current analysis methods used in the field of invasive and non-invasive electrophysiology to study the dynamic connections between neuronal populations. First, we review metrics for functional connectivity, including coherence, phase synchronization, phase-slope index, and Granger causality, with the specific aim to provide an intuition for how these metrics work, as well as their quantitative definition. Next, we highlight a number of interpretational caveats and common pitfalls that can arise when performing functional connectivity analysis, including the common reference problem, the signal to noise ratio problem, the volume conduction problem, the common input problem, and the sample size bias problem. These pitfalls will be illustrated by presenting a set of MATLAB-scripts, which can be executed by the reader to simulate each of these potential problems. We discuss how these issues can be addressed using current methods.
  • Bauer, B. L. M. (2016). [Review of the book Social variation and the Latin language by James N. Adams]. Folia Linguistica Historica, 37, 315-326. doi:10.1515/flih-2016-0010.
  • Bauer, B. L. M. (2023). Multiplication, addition, and subtraction in numerals: Formal variation in Latin’s decads+ from an Indo-European perspective. Journal of Latin Linguistics, 22(1), 1-56. doi:10.1515/joll-2023-2001.

    Abstract

    While formal variation in Latin’s numerals is generally acknowledged, little is known about (relative) incidence, distribution, context, or linguistic productivity. Addressing this lacuna, this article examines “decads+” in Latin, which convey the numbers between the full decads: the teens (‘eleven’ through ‘nineteen’) as well as the numerals between the higher decads starting at ‘twenty-one’ through ‘ninety-nine’. Latin’s decads+ are compounds and prone to variation. The data, which are drawn from a variety of sources, reveal (a) substantial formal variation in Latin, both internally and typologically; (b) co-existence of several types of formation; (c) productivity of potential borrowings; (d) resilience of early formations; (e) patterns in structure and incidence that anticipate the Romance numerals; and (f) historical trends. From a typological and general linguistic perspective as well, Latin’s decads+ are most relevant because their formal variation involves sequence, connector, and arithmetical operations and because their historical depth shows a gradual shift away from widespread formal variation, eventually resulting in the relatively rigid system found in Romance. Moreover, the combined system attested in decads+ in Latin – based on a combination of inherited, innovative and borrowed patterns and reflecting different stages of development – presents a number of typological inconsistencies that require further assessment.

    Files private

    Request files
  • Bavin, E. L., Prendergast, L. A., Kidd, E., Baker, E., & Dissanayake, C. (2016). Online processing of sentences containing noun modification in young children with high-functioning autism. International Journal of Language & Communication Disorders, 51(2), 137-147. doi:10.1111/1460-6984.12191.

    Abstract

    Background: There is variability in the language of children with autism, even those who are high functioning. However, little is known about how they process language structures in real time, including how they handle potential ambiguity, and whether they follow referential constraints. Previous research with older autism spectrum disorder (ASD) participants has shown that these individuals can use context to access rapidly the meaning of ambiguous words. The severity of autism has also been shown to influence the speed in which children with ASD access lexical information. Aims: To understand more about how children with ASD process language in real time (i.e., as it unfolds). The focus was the integration of information and use of referential constraints to identify a referent named in a sentence. Methods & Procedures: We used an eye-tracking task to compare performance between young, high-functioning children with autism (HFA) and children with typical development (TD). A large sample of 5–9-year-old children (mean age = 6;8 years), 48 with HFA and 56 with TD participated; all were attending mainstream schools. For each item participants were shown a display of four images that differed in two dimensions. Each sentence contained an adjective and noun that restricted the choice from four to two (the target and competitor), followed by a prepositional phrase (e.g., the blue square with dots); this added modifying information to provide a unique description of the target. We calculated looking time at the target, the competitor and the two distractors for each 200 ms time interval as children processed the sentence and looked at the display. Generalized estimating equations were used to carry out repeated-measures analyses on the proportion of looking time to target and competitor and time to fixate to target. 
Outcomes & Results: Children in both groups (HFA and TD) looked at the target and competitor more than at the distractors following the adjective and noun and, following the modifying information in the prepositional phrase, more at the target. However, the HFA group was significantly slower in both phases and looked proportionally less at the target. Across the sample, IQ and language did not affect the results; however, age and attention had an impact. The older children showed an advantage in processing the information, as did the children with higher attention scores. Conclusions & Implications: The HFA group took longer than the TD group to integrate the disambiguating information provided in the course of processing a sentence and to combine it with the visual information, indicating that for the HFA group incremental processing was not as advanced as for children with TD, and that they were less sensitive to referential conventions. Training for young children with ASD on the use of referential conventions and available contextual clues may be of benefit to them in understanding the language they hear.
  • Bavin, E. L., Kidd, E., Prendergast, L. A., & Baker, E. K. (2016). Young Children with ASD Use Lexical and Referential Information During On-line Sentence Processing. Frontiers in Psychology, 7: 171. doi:10.3389/fpsyg.2016.00171.

    Abstract

    Research with adults and older children indicates that verb biases are strong influences on listeners’ interpretations when processing sentences, but they can be overruled. In this paper, we ask two questions: (i) are children with Autism Spectrum Disorder (ASD) who are high functioning sensitive to verb biases like their same age typically developing peers?, and (ii) do young children with ASD and young children with typical development (TD) override strong verb biases to consider alternative interpretations of ambiguous sentences? Participants were aged 5–9 years (mean age 6.65 years): children with ASD who were high functioning and children with TD. In task 1, biasing and neutral verbs were included (e.g., eat cake versus move cake). In task 2, the focus was on whether the prepositional phrase occurring with an instrument biasing verb (e.g., ‘Chop the tree with the axe’) was interpreted as an instrument even if the named item was an implausible instrument (e.g., candle in ‘Cut the cake with the candle’). Overall, the results showed similarities between groups but the ASD group was generally slower. In task 1, both groups looked at the named object faster in the biasing than the non-biasing condition, and in the biasing condition the ASD group looked away from the target more quickly than the TD group. In task 2, both groups identified the target in the prepositional phrase. They were more likely to override the verb instrument bias and consider the alternative (modification) interpretation in the implausible condition (e.g., looking at the picture of a cake with a candle on it). Our findings indicate that children of age 5 years and above can use context to override verb biases. Additionally, an important component of the sentence processing mechanism is largely intact for young children with ASD who are high functioning. Like children with TD, they draw on verb semantics and plausibility in integrating information.
However, they are likely to be slower in processing the language they hear. Based on previous findings of associations between processing speed and cognitive functioning, the implication is that their understanding will be negatively affected, as will their academic outcomes.
  • Becker, M., Guadalupe, T., Franke, B., Hibar, D. P., Renteria, M. E., Stein, J. L., Thompson, P. M., Francks, C., Vernes, S. C., & Fisher, S. E. (2016). Early developmental gene enhancers affect subcortical volumes in the adult human brain. Human Brain Mapping, 37(5), 1788-1800. doi:10.1002/hbm.23136.

    Abstract

    Genome-wide association screens aim to identify common genetic variants contributing to the phenotypic variability of complex traits, such as human height or brain morphology. The identified genetic variants are mostly within noncoding genomic regions and the biology of the genotype–phenotype association typically remains unclear. In this article, we propose a complementary targeted strategy to reveal the genetic underpinnings of variability in subcortical brain volumes, by specifically selecting genomic loci that are experimentally validated forebrain enhancers, active in early embryonic development. We hypothesized that genetic variation within these enhancers may affect the development and ultimately the structure of subcortical brain regions in adults. We tested whether variants in forebrain enhancer regions showed an overall enrichment of association with volumetric variation in subcortical structures of >13,000 healthy adults. We observed significant enrichment of genomic loci that affect the volume of the hippocampus within forebrain enhancers (empirical P = 0.0015), a finding which robustly passed the adjusted threshold for testing of multiple brain phenotypes (cutoff of P < 0.0083 at an alpha of 0.05). In analyses of individual single nucleotide polymorphisms (SNPs), we identified an association upstream of the ID2 gene with rs7588305 and variation in hippocampal volume. This SNP-based association survived multiple-testing correction for the number of SNPs analyzed but not for the number of subcortical structures. Targeting known regulatory regions offers a way to understand the underlying biology that connects genotypes to phenotypes, particularly in the context of neuroimaging genetics. This biology-driven approach generates testable hypotheses regarding the functional biology of identified associations.
  • Benetti, S., Ferrari, A., & Pavani, F. (2023). Multimodal processing in face-to-face interactions: A bridging link between psycholinguistics and sensory neuroscience. Frontiers in Human Neuroscience, 17: 1108354. doi:10.3389/fnhum.2023.1108354.

    Abstract

    In face-to-face communication, humans are faced with multiple layers of discontinuous multimodal signals, such as head, face, hand gestures, speech and non-speech sounds, which need to be interpreted as coherent and unified communicative actions. This implies a fundamental computational challenge: optimally binding only signals belonging to the same communicative action while segregating signals that are not connected by the communicative content. How do we achieve such an extraordinary feat, reliably, and efficiently? To address this question, we need to further move the study of human communication beyond speech-centred perspectives and promote a multimodal approach combined with interdisciplinary cooperation. Accordingly, we seek to reconcile two explanatory frameworks recently proposed in psycholinguistics and sensory neuroscience into a neurocognitive model of multimodal face-to-face communication. First, we introduce a psycholinguistic framework that characterises face-to-face communication at three parallel processing levels: multiplex signals, multimodal gestalts and multilevel predictions. Second, we consider the recent proposal of a lateral neural visual pathway specifically dedicated to the dynamic aspects of social perception and reconceive it from a multimodal perspective (“lateral processing pathway”). Third, we reconcile the two frameworks into a neurocognitive model that proposes how multiplex signals, multimodal gestalts, and multilevel predictions may be implemented along the lateral processing pathway. Finally, we advocate a multimodal and multidisciplinary research approach, combining state-of-the-art imaging techniques, computational modelling and artificial intelligence for future empirical testing of our model.
  • Bergelson, E., Soderstrom, M., Schwarz, I.-C., Rowland, C. F., Ramírez-Esparza, N., Rague Hamrick, L., Marklund, E., Kalashnikova, M., Guez, A., Casillas, M., Benetti, L., Van Alphen, P. M., & Cristia, A. (2023). Everyday language input and production in 1,001 children from six continents. Proceedings of the National Academy of Sciences of the United States of America, 120(52): e2300671120. doi:10.1073/pnas.2300671120.

    Abstract

    Language is a universal human ability, acquired readily by young children, who otherwise struggle with many basics of survival. And yet, language ability is variable across individuals. Naturalistic and experimental observations suggest that children's linguistic skills vary with factors like socioeconomic status and children's gender. But which factors really influence children's day-to-day language use? Here, we leverage speech technology in a big-data approach to report on a unique cross-cultural and diverse data set: >2,500 d-long, child-centered audio-recordings of 1,001 2- to 48-mo-olds from 12 countries spanning six continents across urban, farmer-forager, and subsistence-farming contexts. As expected, age and language-relevant clinical risks and diagnoses predicted how much speech (and speech-like vocalization) children produced. Critically, so too did adult talk in children's environments: Children who heard more talk from adults produced more speech. In contrast to previous conclusions based on more limited sampling methods and a different set of language proxies, socioeconomic status (operationalized as maternal education) was not significantly associated with children's productions over the first 4 y of life, and neither were gender or multilingualism. These findings from large-scale naturalistic data advance our understanding of which factors are robust predictors of variability in the speech behaviors of young learners in a wide range of everyday contexts.
  • Bergmann, C., & Cristia, A. (2016). Development of infants' segmentation of words from native speech: a meta-analytic approach. Developmental Science, 19(6), 901-917. doi:10.1111/desc.12341.

    Abstract

    Infants start learning words, the building blocks of language, at least by 6 months. To do so, they must be able to extract the phonological form of words from running speech. A rich literature has investigated this process, termed word segmentation. We addressed the fundamental question of how infants of different ages segment words from their native language using a meta-analytic approach. Based on previous popular theoretical and experimental work, we expected infants to display familiarity preferences early on, with a switch to novelty preferences as infants become more proficient at processing and segmenting native speech. We also considered the possibility that this switch may occur at different points in time as a function of infants' native language and took into account the impact of various task- and stimulus-related factors that might affect difficulty. The combined results from 168 experiments reporting on data gathered from 3774 infants revealed a persistent familiarity preference across all ages. There was no significant effect of additional factors, including native language and experiment design. Further analyses revealed no sign of selective data collection or reporting. We conclude that models of infant information processing that are frequently cited in this domain may not, in fact, apply in the case of segmenting words from native speech.

    Additional information

    desc12341-sup-0001-sup_material.doc
  • Besharati, S., Forkel, S. J., Kopelman, M., Solms, M., Jenkinson, P., & Fotopoulou, A. (2016). Mentalizing the body: Spatial and social cognition in anosognosia for hemiplegia. Brain, 139(3), 971-985. doi:10.1093/brain/awv390.

    Abstract

    Following right-hemisphere damage, a specific disorder of motor awareness can occur called anosognosia for hemiplegia, i.e. the denial of motor deficits contralateral to a brain lesion. The study of anosognosia can offer unique insights into the neurocognitive basis of awareness. Typically, however, awareness is assessed as a first person judgement and the ability of patients to think about their bodies in more ‘objective’ (third person) terms is not directly assessed. This may be important as right-hemisphere spatial abilities may underlie our ability to take third person perspectives. This possibility was assessed for the first time in the present study. We investigated third person perspective taking using both visuospatial and verbal tasks in right-hemisphere stroke patients with anosognosia (n = 15) and without anosognosia (n = 15), as well as neurologically healthy control subjects (n = 15). The anosognosic group performed worse than both control groups when having to perform the tasks from a third versus a first person perspective. Individual analysis further revealed a classical dissociation between most anosognosic patients and control subjects in mental (but not visuospatial) third person perspective taking abilities. Finally, the severity of unawareness in anosognosia patients was correlated to greater impairments in such third person, mental perspective taking abilities (but not visuospatial perspective taking). In voxel-based lesion mapping we also identified the lesion sites linked with such deficits, including some brain areas previously associated with inhibition, perspective taking and mentalizing, such as the inferior and middle frontal gyri, as well as the supramarginal and superior temporal gyri. These results suggest that neurocognitive deficits in mental perspective taking may contribute to anosognosia and provide novel insights regarding the relation between self-awareness and social cognition.
  • Birchall, J., Dunn, M., & Greenhill, S. J. (2016). A combined comparative and phylogenetic analysis of the Chapacuran language family. International Journal of American Linguistics, 82(3), 255-284. doi:10.1086/687383.

    Abstract

    The Chapacuran language family, with three extant members and nine historically attested lects, has yet to be classified following modern standards in historical linguistics. This paper presents an internal classification of these languages by combining both the traditional comparative method (CM) and Bayesian phylogenetic inference (BPI). We identify multiple systematic sound correspondences and 285 cognate sets of basic vocabulary using the available documentation. These allow us to reconstruct a large portion of the Proto-Chapacuran phonemic inventory and identify tentative major subgroupings. The cognate sets form the input for the BPI analysis, which uses a stochastic Continuous-Time Markov Chain to model the change of these cognate sets over time. We test various models of lexical substitution and evolutionary clocks, and use ethnohistorical information and data collection dates to calibrate the resulting trees. The CM and BPI analyses produce largely congruent results, suggesting a division of the family into three different clades.

    Additional information

    Appendix
  • Bobb, S., Huettig, F., & Mani, N. (2016). Predicting visual information during sentence processing: Toddlers activate an object's shape before it is mentioned. Journal of Experimental Child Psychology, 151, 51-64. doi:10.1016/j.jecp.2015.11.002.

    Abstract

    We examined the contents of language-mediated prediction in toddlers by investigating the extent to which toddlers are sensitive to visual-shape representations of upcoming words. Previous studies with adults suggest limits to the degree to which information about the visual form of a referent is predicted during language comprehension in low constraint sentences. 30-month-old toddlers heard either contextually constraining sentences or contextually neutral sentences as they viewed images that were either identical or shape related to the heard target label. We observed that toddlers activate shape information of upcoming linguistic input in contextually constraining semantic contexts: Hearing a sentence context that was predictive of the target word activated perceptual information that subsequently influenced visual attention toward shape-related targets. Our findings suggest that visual shape is central to predictive language processing in toddlers.
  • Bock, K., Irwin, D. E., Davidson, D. J., & Levelt, W. J. M. (2003). Minding the clock. Journal of Memory and Language, 48, 653-685. doi:10.1016/S0749-596X(03)00007-X.

    Abstract

    Telling time is an exercise in coordinating language production with visual perception. By coupling different ways of saying times with different ways of seeing them, the performance of time-telling can be used to track cognitive transformations from visual to verbal information in connected speech. To accomplish this, we used eyetracking measures along with measures of speech timing during the production of time expressions. Our findings suggest that an effective interface between what has been seen and what is to be said can be constructed within 300 ms. This interface underpins a preverbal plan or message that appears to guide a comparatively slow, strongly incremental formulation of phrases. The results begin to trace the divide between seeing and saying (or thinking and speaking) that must be bridged during the creation of even the most prosaic utterances of a language.
  • Böcker, K. B. E., Bastiaansen, M. C. M., Vroomen, J., Brunia, C. H. M., & de Gelder, B. (1999). An ERP correlate of metrical stress in spoken word recognition. Psychophysiology, 36, 706-720. doi:10.1111/1469-8986.3660706.

    Abstract

    Rhythmic properties of spoken language such as metrical stress, that is, the alternation of strong and weak syllables, are important in speech recognition of stress-timed languages such as Dutch and English. Nineteen subjects listened passively to or discriminated actively between sequences of bisyllabic Dutch words, which started with either a weak or a strong syllable. Weak-initial words, which constitute 12% of the Dutch lexicon, evoked more negativity than strong-initial words in the interval between P2 and N400 components of the auditory event-related potential. This negativity was denoted as N325. The N325 was larger during stress discrimination than during passive listening. N325 was also larger when a weak-initial word followed a sequence of strong-initial words than when it followed words with the same stress pattern. The latter difference was larger for listeners who performed well on stress discrimination. It was concluded that the N325 is probably a manifestation of the extraction of metrical stress from the acoustic signal and its transformation into task requirements.
  • Bögels, S., & Levinson, S. C. (2023). Ultrasound measurements of interactive turn-taking in question-answer sequences: Articulatory preparation is delayed but not tied to the response. PLoS One, 18: e0276470. doi:10.1371/journal.pone.0276470.

    Abstract

    We know that speech planning in conversational turn-taking can happen in overlap with the previous turn and research suggests that it starts as early as possible, that is, as soon as the gist of the previous turn becomes clear. The present study aimed to investigate whether planning proceeds all the way up to the last stage of articulatory preparation (i.e., putting the articulators in place for the first phoneme of the response) and what the timing of this process is. Participants answered pre-recorded quiz questions (being under the illusion that they were asked live), while their tongue movements were measured using ultrasound. Planning could start early for some quiz questions (i.e., midway during the question), but late for others (i.e., only at the end of the question). The results showed no evidence for a difference between tongue movements in these two types of questions for at least two seconds after planning could start in early-planning questions, suggesting that speech planning in overlap with the current turn proceeds more slowly than in the clear. On the other hand, when time-locking to speech onset, tongue movements differed between the two conditions from up to two seconds before this point. This suggests that articulatory preparation can occur in advance and is not fully tied to the overt response itself.

    Additional information

    supporting information
  • Bohnemeyer, J. (2003). Invisible time lines in the fabric of events: Temporal coherence in Yukatek narratives. Journal of Linguistic Anthropology, 13(2), 139-162. doi:10.1525/jlin.2003.13.2.139.

    Abstract

    This article examines how narratives are structured in a language in which event order is largely not coded. Yucatec Maya lacks both tense inflections and temporal connectives corresponding to English after and before. It is shown that the coding of events in Yucatec narratives is subject to a strict iconicity constraint within paragraph boundaries. Aspectual viewpoint shifting is used to reconcile iconicity preservation with the requirements of a more flexible narrative structure.
  • Bornkessel-Schlesewsky, I., Alday, P. M., & Schlesewsky, M. (2016). A modality-independent, neurobiological grounding for the combinatory capacity of the language-ready brain: Comment on “Towards a Computational Comparative Neuroprimatology: Framing the language-ready brain” by Michael A. Arbib. Physics of Life Reviews, 16, 55-57. doi:10.1016/j.plrev.2016.01.003.
  • Wu, M., Bosker, H. R., & Riecke, L. (2023). Sentential contextual facilitation of auditory word processing builds up during sentence tracking. Journal of Cognitive Neuroscience, 35(8), 1262-1278. doi:10.1162/jocn_a_02007.

    Abstract

    While listening to meaningful speech, auditory input is processed more rapidly near the end (vs. beginning) of sentences. Although several studies have shown such word-to-word changes in auditory input processing, it is still unclear from which processing level these word-to-word dynamics originate. We investigated whether predictions derived from sentential context can result in auditory word-processing dynamics during sentence tracking. We presented healthy human participants with auditory stimuli consisting of word sequences, arranged into either predictable (coherent sentences) or less predictable (unstructured, random word sequences) 42-Hz amplitude-modulated speech, and a continuous 25-Hz amplitude-modulated distractor tone. We recorded RTs and frequency-tagged neuroelectric responses (auditory steady-state responses) to individual words at multiple temporal positions within the sentences, and quantified sentential context effects at each position while controlling for individual word characteristics (i.e., phonetics, frequency, and familiarity). We found that sentential context increasingly facilitates auditory word processing as evidenced by accelerated RTs and increased auditory steady-state responses to later-occurring words within sentences. These purely top–down contextually driven auditory word-processing dynamics occurred only when listeners focused their attention on the speech and did not transfer to the auditory processing of the concurrent distractor tone. These findings indicate that auditory word-processing dynamics during sentence tracking can originate from sentential predictions. The predictions depend on the listeners' attention to the speech, and affect only the processing of the parsed speech, not that of concurrently presented auditory streams.
  • Bowerman, M. (1975). Commentary on L. Bloom, P. Lightbown, & L. Hood, “Structure and variation in child language”. Monographs of the Society for Research in Child Development, 40(2), 80-90. Retrieved from http://www.jstor.org/stable/1165986.
  • Bramão, I., Reis, A., Petersson, K. M., & Faísca, L. (2016). Knowing that strawberries are red and seeing red strawberries: The interaction between surface colour and colour knowledge information. Journal of Cognitive Psychology, 28(6), 641-657. doi:10.1080/20445911.2016.1182171.

    Abstract

    This study investigates the interaction between surface and colour knowledge information during object recognition. In two different experiments, participants were instructed to decide whether two presented stimuli belonged to the same object identity. On the non-matching trials, we manipulated the shape and colour knowledge information activated by the two stimuli by creating four different stimulus pairs: (1) similar in shape and colour (e.g. TOMATO–APPLE); (2) similar in shape and dissimilar in colour (e.g. TOMATO–COCONUT); (3) dissimilar in shape and similar in colour (e.g. TOMATO–CHILI PEPPER) and (4) dissimilar in both shape and colour (e.g. TOMATO–PEANUT). The object pictures were presented in typical and atypical colours and also in black-and-white. The interaction between surface and colour knowledge was shown to be contingent upon shape information: while colour knowledge is more important for recognising structurally similar shaped objects, surface colour is more prominent for recognising structurally dissimilar shaped objects.
  • Brehm, L., & Goldrick, M. (2016). Empirical and conceptual challenges for neurocognitive theories of language production. Language, Cognition and Neuroscience, 31(4), 504-507. doi:10.1080/23273798.2015.1110604.
  • Broersma, M., Carter, D., & Acheson, D. J. (2016). Cognate costs in bilingual speech production: Evidence from language switching. Frontiers in Psychology, 7: 1461. doi:10.3389/fpsyg.2016.01461.

    Abstract

    This study investigates cross-language lexical competition in the bilingual mental lexicon. It provides evidence for the occurrence of inhibition as well as the commonly reported facilitation during the production of cognates (words with similar phonological form and meaning in two languages) in a mixed picture naming task by highly proficient Welsh-English bilinguals. Previous studies have typically found cognate facilitation. It has previously been proposed (with respect to non-cognates) that cross-language inhibition is limited to low-proficient bilinguals; therefore, we tested highly proficient, early bilinguals. In a mixed naming experiment (i.e., picture naming with language switching), 48 highly proficient, early Welsh-English bilinguals named pictures in Welsh and English, including cognate and non-cognate targets. Participants were English-dominant, Welsh-dominant, or had equal language dominance. The results showed evidence for cognate inhibition in two ways. First, both facilitation and inhibition were found on the cognate trials themselves, compared to non-cognate controls, modulated by the participants' language dominance. The English-dominant group showed cognate inhibition when naming in Welsh (and no difference between cognates and controls when naming in English), and the Welsh-dominant and equal dominance groups generally showed cognate facilitation. Second, cognate inhibition was found as a behavioral adaptation effect, with slower naming for non-cognate filler words in trials after cognates than after non-cognate controls. This effect was consistent across all language dominance groups and both target languages, suggesting that cognate production involved cognitive control even if this was not measurable in the cognate trials themselves. Finally, the results replicated patterns of symmetrical switch costs, as commonly reported for balanced bilinguals. 
    We propose that cognate processing might be affected by two different processes, namely competition at the lexical-semantic level and facilitation at the word form level, and that facilitation at the word form level might (sometimes) outweigh any effects of inhibition at the lemma level. In sum, this study provides evidence that cognate naming can cause costs in addition to benefits. The finding of cognate inhibition, particularly for the highly proficient bilinguals tested, provides strong evidence for the occurrence of lexical competition across languages in the bilingual mental lexicon.
  • Brown, C. M., Hagoort, P., & Ter Keurs, M. (1999). Electrophysiological signatures of visual lexical processing: Open- and closed-class words. Journal of Cognitive Neuroscience, 11(3), 261-281.

    Abstract

    This paper presents evidence on the disputed existence of an electrophysiological marker for the lexical-categorical distinction between open- and closed-class words. Event-related brain potentials were recorded from the scalp while subjects read a story. Separate waveforms were computed for open- and closed-class words. Two aspects of the waveforms could be reliably related to vocabulary class. The first was an early negativity in the 230- to 350-msec epoch, with a bilateral anterior predominance. This negativity was elicited by open- and closed-class words alike, was not affected by word frequency or word length, and had an earlier peak latency for closed-class words. The second was a frontal slow negative shift in the 350- to 500-msec epoch, largest over the left side of the scalp. This late negativity was only elicited by closed-class words. Although the early negativity cannot serve as a qualitative marker of the open- and closed-class distinction, it does reflect the earliest electrophysiological manifestation of the availability of categorical information from the mental lexicon. These results suggest that the brain honors the distinction between open- and closed-class words, in relation to the different roles that they play in on-line sentence processing.
  • Brown, P. (1999). Anthropologie cognitive. Anthropologie et Sociétés, 23(3), 91-119.

    Abstract

    In reaction to the dominance of universalism in the 1970s and '80s, there have recently been a number of reappraisals of the relation between language and cognition, and the field of cognitive anthropology is flourishing in several new directions in both America and Europe. This is partly due to a renewal and re-evaluation of approaches to the question of linguistic relativity associated with Whorf, and partly to the inspiration of modern developments in cognitive science. This review briefly sketches the history of cognitive anthropology and surveys current research on both sides of the Atlantic. The focus is on assessing current directions, considering in particular, by way of illustration, recent work in cultural models and on spatial language and cognition. The review concludes with an assessment of how cognitive anthropology could contribute directly both to the broader project of cognitive science and to the anthropological study of how cultural ideas and practices relate to structures and processes of human cognition.
  • Brown, P., & Levinson, S. C. (1992). 'Left' and 'right' in Tenejapa: Investigating a linguistic and conceptual gap. Zeitschrift für Phonetik, Sprachwissenschaft und Kommunikationsforschung, 45(6), 590-611.

    Abstract

    From the perspective of a Kantian belief in the fundamental human tendency to cleave space along the three planes of the human body, Tenejapan Tzeltal exhibits a linguistic gap: there are no linguistic expressions that designate regions (as in English to my left) or describe the visual field (as in to the left of the tree) on the basis of a plane bisecting the body into a left and right side. Tenejapans have expressions for left and right hands (xin k'ab and wa'el k'ab), but these are basically body-part terms; they are not generalized to form a division of space. This paper describes the results of various elicited production tasks in which concepts of left and right would provide a simple solution, showing that Tenejapan consultants use other notions even when the relevant linguistic distinctions could be made in Tzeltal (e.g. describing the position of one's limbs, or describing rotation of one's body). Instead of using the left-hand/right-hand distinction to construct a division of space, Tenejapans utilize a number of other systems: (i) an absolute, 'cardinal direction' system, supplemented by reference to other geographic or landmark directions, (ii) a generative segmentation of objects and places into analogic body-parts or other kinds of parts, and (iii) a rich system of positional adjectives to describe the exact disposition of things. These systems work conjointly to specify locations with precision and elegance. The overall system is not primarily egocentric, and it makes no essential reference to planes through the human body.
  • Brown, P. (1999). Repetition [Encyclopedia entry for 'Lexicon for the New Millennium', ed. Alessandro Duranti]. Journal of Linguistic Anthropology, 9(2), 223-226. doi:10.1525/jlin.1999.9.1-2.223.

    Abstract

    This is an encyclopedia entry describing conversational and interactional uses of linguistic repetition.
  • Bruggeman, L., & Cutler, A. (2023). Listening like a native: Unprofitable procedures need to be discarded. Bilingualism: Language and Cognition, 26(5), 1093-1102. doi:10.1017/S1366728923000305.

    Abstract

    Two languages, historically related, both have lexical stress, with word stress distinctions signalled in each by the same suprasegmental cues. In each language, words can overlap segmentally but differ in placement of primary versus secondary stress (OCtopus, ocTOber). However, secondary stress occurs more often in the words of one language, Dutch, than in the other, English, and largely because of this, Dutch listeners find it helpful to use suprasegmental stress cues when recognising spoken words. English listeners, in contrast, do not; indeed, Dutch listeners can outdo English listeners in correctly identifying the source words of English word fragments (oc-). Here we show that Dutch-native listeners who reside in an English-speaking environment and have become dominant in English, though still maintaining their use of these stress cues in their L1, ignore the same cues in their L2 English, performing as poorly in the fragment identification task as the L1 English do.
  • Bulut, T. (2023). Domain‐general and domain‐specific functional networks of Broca's area underlying language processing. Brain and Behavior, 13(7): e3046. doi:10.1002/brb3.3046.

    Abstract

    Introduction
    Despite abundant research on the role of Broca's area in language processing, there is still no consensus on language specificity of this region and its connectivity network.

    Methods
    The present study employed the meta-analytic connectivity modeling procedure to identify and compare domain-specific (language-specific) and domain-general (shared between language and other domains) functional connectivity patterns of three subdivisions within the broadly defined Broca's area: pars opercularis (IFGop), pars triangularis (IFGtri), and pars orbitalis (IFGorb) of the left inferior frontal gyrus.

    Results
    The findings revealed a left-lateralized frontotemporal network for all regions of interest underlying domain-specific linguistic functions. The domain-general network, however, spanned frontoparietal regions that overlap with the multiple-demand network and subcortical regions spanning the thalamus and the basal ganglia.

    Conclusions
    The findings suggest that language specificity of Broca's area emerges within a left-lateralized frontotemporal network, and that domain-general resources are garnered from frontoparietal and subcortical networks when required by task demands.

    Additional information

    Supporting Information
    Data availability
  • Burenhult, N. (2003). Attention, accessibility, and the addressee: The case of the Jahai demonstrative ton. Pragmatics, 13(3), 363-379.
  • Carota, F., Bozic, M., & Marslen-Wilson, W. (2016). Decompositional Representation of Morphological Complexity: Multivariate fMRI Evidence from Italian. Journal of Cognitive Neuroscience, 28(12), 1878-1896. doi:10.1162/jocn_a_01009.

    Abstract

    Derivational morphology is a cross-linguistically dominant mechanism for word formation, combining existing words with derivational affixes to create new word forms. However, the neurocognitive mechanisms underlying the representation and processing of such forms remain unclear. Recent cross-linguistic neuroimaging research suggests that derived words are stored and accessed as whole forms, without engaging the left-hemisphere perisylvian network associated with combinatorial processing of syntactically and inflectionally complex forms. Using fMRI with a “simple listening” no-task procedure, we reexamine these suggestions in the context of the root-based combinatorially rich Italian lexicon to clarify the role of semantic transparency (between the derived form and its stem) and affix productivity in determining whether derived forms are decompositionally represented and which neural systems are involved. Combined univariate and multivariate analyses reveal a key role for semantic transparency, modulated by affix productivity. Opaque forms show strong cohort competition effects, especially for words with nonproductive suffixes (ventura, “destiny”). The bilateral frontotemporal activity associated with these effects indicates that opaque derived words are processed as whole forms in the bihemispheric language system. Semantically transparent words with productive affixes (libreria, “bookshop”) showed no effects of lexical competition, suggesting morphologically structured co-representation of these derived forms and their stems, whereas transparent forms with nonproductive affixes (pineta, “pine forest”) show intermediate effects. Further multivariate analyses of the transparent derived forms revealed affix productivity effects selectively involving left inferior frontal regions, suggesting that the combinatorial and decompositional processes triggered by such forms can vary significantly across languages.
  • Carota, F., Nili, H., Kriegeskorte, N., & Pulvermüller, F. (2023). Experientially-grounded and distributional semantic vectors uncover dissociable representations of semantic categories. Language, Cognition and Neuroscience. Advance online publication. doi:10.1080/23273798.2023.2232481.

    Abstract

    Neuronal populations code similar concepts by similar activity patterns across the human brain's semantic networks. However, it is unclear to what extent such meaning-to-symbol mapping reflects distributional statistics, or experiential information grounded in sensorimotor and emotional knowledge. We asked whether integrating distributional and experiential data better distinguished conceptual categories than each method taken separately. We examined the similarity structure of fMRI patterns elicited by visually presented action- and object-related words using representational similarity analysis (RSA). We found that the distributional and experiential/integrative models mapped the high-dimensional semantic space in left inferior frontal and anterior temporal cortex, and in left precentral and posterior inferior/middle temporal cortex, respectively. Furthermore, results from model comparisons uncovered category-specific similarity patterns, as both distributional and experiential models matched the similarity patterns for action concepts in left fronto-temporal cortex, whilst the experiential/integrative (but not distributional) models matched the similarity patterns for object concepts in left fusiform and angular gyrus.
  • Carota, F., Schoffelen, J.-M., Oostenveld, R., & Indefrey, P. (2023). Parallel or sequential? Decoding conceptual and phonological/phonetic information from MEG signals during language production. Cognitive Neuropsychology, 40(5-6), 298-317. doi:10.1080/02643294.2023.2283239.

    Abstract

    Speaking requires the temporally coordinated planning of core linguistic information, from conceptual meaning to articulation. Recent neurophysiological results suggested that these operations involve a cascade of neural events with subsequent onset times, whilst competing evidence suggests early parallel neural activation. To test these hypotheses, we examined the sources of neuromagnetic activity recorded from 34 participants overtly naming 134 images from 4 object categories (animals, tools, foods and clothes). Within each category, word length and phonological neighbourhood density were co-varied to target phonological/phonetic processes. Multivariate pattern analyses (MVPA) searchlights in source space decoded object categories in occipitotemporal and middle temporal cortex, and phonological/phonetic variables in left inferior frontal (BA 44) and motor cortex early on. The findings suggest early activation of multiple variables due to intercorrelated properties and interactivity of processing, thus raising important questions about the representational properties of target words during the preparatory time enabling overt speaking.
  • Carrion Castillo, A., van Bergen, E., Vino, A., van Zuijen, T., de Jong, P. F., Francks, C., & Fisher, S. E. (2016). Evaluation of results from genome-wide studies of language and reading in a novel independent dataset. Genes, Brain and Behavior, 15(6), 531-541. doi:10.1111/gbb.12299.

    Abstract

    Recent genome-wide association scans (GWAS) for reading and language abilities have pinpointed promising new candidate loci. However, the potential contributions of these loci remain to be validated. In the present study, we tested 17 of the most significantly associated single nucleotide polymorphisms (SNPs) from these GWAS (p < 10−6 in the original studies) in a new independent population dataset from the Netherlands, known as FIOLA (Familial Influences On Literacy Abilities). This dataset comprised 483 children from 307 nuclear families, plus 505 adults (including parents of participating children), and provided adequate statistical power to detect the effects that were previously reported. The following measures of reading and language performance were collected: word reading fluency, nonword reading fluency, phonological awareness, and rapid automatized naming. Two SNPs (rs12636438, rs7187223) were associated with performance in multivariate and univariate testing, but these did not remain significant after correction for multiple testing. Another SNP (rs482700) was only nominally associated in the multivariate test. For the rest of the SNPs we did not find supportive evidence of association. The findings may reflect differences between our study and the previous investigations in respects such as the language of testing, the exact tests used, and the recruitment criteria. Alternatively, most of the prior reported associations may have been false positives. A larger scale GWAS meta-analysis than those previously performed will likely be required to obtain robust insights into the genomic architecture underlying reading and language.
  • Casillas, M., Bobb, S. C., & Clark, E. V. (2016). Turn taking, timing, and planning in early language acquisition. Journal of Child Language, 43, 1310-1337. doi:10.1017/S0305000915000689.

    Abstract

    Young children answer questions with longer delays than adults do, and they don't reach typical adult response times until several years later. We hypothesized that this prolonged pattern of delay in children's timing results from competing demands: to give an answer, children must understand a question while simultaneously planning and initiating their response. Even as children get older and more efficient in this process, the demands on them increase because their verbal responses become more complex. We analyzed conversational question-answer sequences between caregivers and their children from ages 1;8 to 3;5, finding that children (1) initiate simple answers more quickly than complex ones, (2) initiate simple answers quickly from an early age, and (3) initiate complex answers more quickly as they grow older. Our results suggest that children aim to respond quickly from the start, improving on earlier-acquired answer types while they begin to practice later-acquired, slower ones.

    Additional information

    S0305000915000689sup001.docx
  • Çetinçelik, M., Rowland, C. F., & Snijders, T. M. (2023). Ten-month-old infants’ neural tracking of naturalistic speech is not facilitated by the speaker’s eye gaze. Developmental Cognitive Neuroscience, 64: 101297. doi:10.1016/j.dcn.2023.101297.

    Abstract

    Eye gaze is a powerful ostensive cue in infant-caregiver interactions, with demonstrable effects on language acquisition. While the link between gaze following and later vocabulary is well-established, the effects of eye gaze on other aspects of language, such as speech processing, are less clear. In this EEG study, we examined the effects of the speaker’s eye gaze on ten-month-old infants’ neural tracking of naturalistic audiovisual speech, a marker for successful speech processing. Infants watched videos of a speaker telling stories, addressing the infant with direct or averted eye gaze. We assessed infants’ speech-brain coherence at stress (1–1.75 Hz) and syllable (2.5–3.5 Hz) rates, tested for differences in attention by comparing looking times and EEG theta power in the two conditions, and investigated whether neural tracking predicts later vocabulary. Our results showed that infants’ brains tracked the speech rhythm both at the stress and syllable rates, and that infants’ neural tracking at the syllable rate predicted later vocabulary. However, speech-brain coherence did not significantly differ between direct and averted gaze conditions and infants did not show greater attention to direct gaze. Overall, our results suggest significant neural tracking at ten months, related to vocabulary development, but not modulated by speaker’s gaze.

    Additional information

    supplementary material
  • Chabout, J., Sarkar, A., Patel, S., Radden, T., Dunson, D., Fisher, S. E., & Jarvis, E. (2016). A Foxp2 mutation implicated in human speech deficits alters sequencing of ultrasonic vocalizations in adult male mice. Frontiers in Behavioral Neuroscience, 10: 197. doi:10.3389/fnbeh.2016.00197.

    Abstract

    Development of proficient spoken language skills is disrupted by mutations of the FOXP2 transcription factor. A heterozygous missense mutation in the KE family causes speech apraxia, involving difficulty producing words with complex learned sequences of syllables. Manipulations in songbirds have helped to elucidate the role of this gene in vocal learning, but findings in non-human mammals have been limited or inconclusive. Here we performed a systematic study of ultrasonic vocalizations (USVs) of adult male mice carrying the KE family mutation. Using novel statistical tools, we found that Foxp2 heterozygous mice did not have detectable changes in USV syllable acoustic structure, but produced shorter sequences and did not shift to more complex syntax in social contexts where wildtype animals did. Heterozygous mice also displayed a shift in the position of their rudimentary laryngeal motor cortex layer-5 neurons. Our findings indicate that although mouse USVs are mostly innate, the underlying contributions of FoxP2 to sequencing of vocalizations are conserved with humans.
  • Chang, F., Tatsumi, T., Hiranuma, Y., & Bannard, C. (2023). Visual heuristics for verb production: Testing a deep‐learning model with experiments in Japanese. Cognitive Science, 47(8): e13324. doi:10.1111/cogs.13324.

    Abstract

    Tense/aspect morphology on verbs is often thought to depend on event features like telicity, but it is not known how speakers identify these features in visual scenes. To examine this question, we asked Japanese speakers to describe computer-generated animations of simple actions with variation in visual features related to telicity. Experiments with adults and children found that they could use goal information in the animations to select appropriate past and progressive verb forms. They also produced a large number of different verb forms. To explain these findings, a deep-learning model of verb production from visual input was created that could produce a human-like distribution of verb forms. It was able to use visual cues to select appropriate tense/aspect morphology. The model predicted that video duration would be related to verb complexity, and past tense production would increase when it received the endpoint as input. These predictions were confirmed in a third study with Japanese adults. This work suggests that verb production could be tightly linked to visual heuristics that support the understanding of events.
  • Chen, A., Çetinçelik, M., Roncaglia-Denissen, M. P., & Sadakata, M. (2023). Native language, L2 experience, and pitch processing in music. Linguistic Approaches to Bilingualism, 13(2), 218-237. doi:10.1075/lab.20030.che.

    Abstract

    The current study investigated how the role of pitch in one’s native language and L2 experience influenced musical melodic processing by testing Turkish and Mandarin Chinese advanced and beginning learners of English as an L2. Pitch has a lower functional load and shows a simpler pattern in Turkish than in Chinese, as the former contrasts only the presence and absence of pitch elevation, while the latter makes use of four different pitch contours lexically. Using the Musical Ear Test as the tool, we found that the Chinese listeners outperformed the Turkish listeners, and the advanced L2 learners outperformed the beginning learners. The Turkish listeners were further tested on their discrimination of bisyllabic Chinese lexical tones, and again an L2 advantage was observed. No significant difference was found for working memory between the beginning and advanced L2 learners. These results suggest that richness of the tonal inventory of the native language is essential for triggering a music processing advantage, and on top of the tone language advantage, the L2 experience yields a further enhancement. Yet, unlike the tone language advantage that seems to relate to pitch expertise, learning an L2 seems to improve sound discrimination in general, and such improvement is evident in non-native lexical tone discrimination.
  • Choi, S., McDonough, L., Bowerman, M., & Mandler, J. M. (1999). Early sensitivity to language-specific spatial categories in English and Korean. Cognitive Development, 14, 241-268. doi:10.1016/S0885-2014(99)00004-0.

    Abstract

    This study investigates young children’s comprehension of spatial terms in two languages that categorize space strikingly differently. English makes a distinction between actions resulting in containment (put in) versus support or surface attachment (put on), while Korean makes a cross-cutting distinction between tight-fit relations (kkita) versus loose-fit or other contact relations (various verbs). In particular, the Korean verb kkita refers to actions resulting in a tight-fit relation regardless of containment or support. In a preferential looking study we assessed the comprehension of in by 20 English learners and kkita by 10 Korean learners, all between 18 and 23 months. The children viewed pairs of scenes while listening to sentences with and without the target word. The target word led children to gaze at different and language-appropriate aspects of the scenes. We conclude that children are sensitive to language-specific spatial categories by 18–23 months.
  • Chu, M., & Kita, S. (2016). Co-thought and Co-speech Gestures Are Generated by the Same Action Generation Process. Journal of Experimental Psychology: Learning, Memory, and Cognition, 42(2), 257-270. doi:10.1037/xlm0000168.

    Abstract

    People spontaneously gesture when they speak (co-speech gestures) and when they solve problems silently (co-thought gestures). In this study, we first explored the relationship between these 2 types of gestures and found that individuals who produced co-thought gestures more frequently also produced co-speech gestures more frequently (Experiments 1 and 2). This suggests that the 2 types of gestures are generated from the same process. We then investigated whether both types of gestures can be generated from the representational use of the action generation process that also generates purposeful actions that have a direct physical impact on the world, such as manipulating an object or locomotion (the action generation hypothesis). To this end, we examined the effect of object affordances on the production of both types of gestures (Experiments 3 and 4). We found that individuals produced co-thought and co-speech gestures more often when the stimulus objects afforded action (objects with a smooth surface) than when they did not (objects with a spiky surface). These results support the action generation hypothesis for representational gestures. However, our findings are incompatible with the hypothesis that co-speech representational gestures are solely generated from the speech production process (the speech production hypothesis).
  • Clifton, Jr., C., Cutler, A., McQueen, J. M., & Van Ooijen, B. (1999). The processing of inflected forms [Commentary on H. Clahsen: Lexical entries and rules of language]. Behavioral and Brain Sciences, 22, 1018-1019.

    Abstract

    Clahsen proposes two distinct processing routes for regularly and irregularly inflected forms, respectively, and thus is apparently making a psychological claim. We argue that his position, which embodies a strictly linguistic perspective, does not constitute a psychological processing model.
  • Clough, S., Morrow, E., Mutlu, B., Turkstra, L., & Duff, M. C. C. (2023). Emotion recognition of faces and emoji in individuals with moderate-severe traumatic brain injury. Brain Injury, 37(7), 596-610. doi:10.1080/02699052.2023.2181401.

    Abstract

    Background. Facial emotion recognition deficits are common after moderate-severe traumatic brain injury (TBI) and linked to poor social outcomes. We examine whether emotion recognition deficits extend to facial expressions depicted by emoji.
    Methods. Fifty-one individuals with moderate-severe TBI (25 female) and fifty-one neurotypical peers (26 female) viewed photos of human faces and emoji. Participants selected the best-fitting label from a set of basic emotions (anger, disgust, fear, sadness, neutral, surprise, happy) or social emotions (embarrassed, remorseful, anxious, neutral, flirting, confident, proud).
    Results. We analyzed the likelihood of correctly labeling an emotion by group (neurotypical, TBI), stimulus condition (basic faces, basic emoji, social emoji), sex (female, male), and their interactions. Participants with TBI did not significantly differ from neurotypical peers in overall emotion labeling accuracy. Both groups had poorer labeling accuracy for emoji compared to faces. Participants with TBI (but not neurotypical peers) had poorer accuracy for labeling social emotions depicted by emoji compared to basic emotions depicted by emoji. There were no effects of participant sex.
    Discussion. Because emotion representation is more ambiguous in emoji than human faces, studying emoji use and perception in TBI is an important consideration for understanding functional communication and social participation after brain injury.
  • Clough, S., Padilla, V.-G., Brown-Schmidt, S., & Duff, M. C. (2023). Intact speech-gesture integration in narrative recall by adults with moderate-severe traumatic brain injury. Neuropsychologia, 189: 108665. doi:10.1016/j.neuropsychologia.2023.108665.

    Abstract

    Purpose

    Real-world communication is situated in rich multimodal contexts, containing speech and gesture. Speakers often convey unique information in gesture that is not present in the speech signal (e.g., saying “He searched for a new recipe” while making a typing gesture). We examine the narrative retellings of participants with and without moderate-severe traumatic brain injury across three timepoints over two online Zoom sessions to investigate whether people with TBI can integrate information from co-occurring speech and gesture and if information from gesture persists across delays.

    Methods

    60 participants with TBI and 60 non-injured peers watched videos of a narrator telling four short stories. On key details, the narrator produced complementary gestures that conveyed unique information. Participants retold the stories at three timepoints: immediately after, 20-min later, and one-week later. We examined the words participants used when retelling these key details, coding them as a Speech Match (e.g., “He searched for a new recipe”), a Gesture Match (e.g., “He searched for a new recipe online”), or Other (e.g., “He looked for a new recipe”). We also examined whether participants produced representative gestures themselves when retelling these details.

    Results

    Despite recalling fewer story details, participants with TBI were as likely as non-injured peers to report information from gesture in their narrative retellings. All participants were more likely to report information from gesture and produce representative gestures themselves one-week later compared to immediately after hearing the story.

    Conclusion

    We demonstrated that speech-gesture integration is intact after TBI in narrative retellings. This finding has exciting implications for the utility of gesture to support comprehension and memory after TBI and expands our understanding of naturalistic multimodal language processing in this population.
  • Clough, S., Tanguay, A. F. N., Mutlu, B., Turkstra, L., & Duff, M. C. (2023). How do individuals with and without traumatic brain injury interpret emoji? Similarities and differences in perceived valence, arousal, and emotion representation. Journal of Nonverbal Behavior, 47, 489-511. doi:10.1007/s10919-023-00433-w.

    Abstract

    Impaired facial affect recognition is common after traumatic brain injury (TBI) and linked to poor social outcomes. We explored whether perception of emotions depicted by emoji is also impaired after TBI. Fifty participants with TBI and 50 non-injured peers generated free-text labels to describe emotions depicted by emoji and rated their levels of valence and arousal on nine-point rating scales. We compared how the two groups’ valence and arousal ratings were clustered and examined agreement in the words participants used to describe emoji. Hierarchical clustering of affect ratings produced four emoji clusters in the non-injured group and three emoji clusters in the TBI group. Whereas the non-injured group had a strongly positive and a moderately positive cluster, the TBI group had a single positive valence cluster, undifferentiated by arousal. Despite differences in cluster numbers, hierarchical structures of the two groups’ emoji ratings were significantly correlated. Most emoji had high agreement in the words participants with and without TBI used to describe them. Participants with TBI perceived emoji similarly to non-injured peers, used similar words to describe emoji, and rated emoji similarly on the valence dimension. Individuals with TBI showed small differences in perceived arousal for a minority of emoji. Overall, results suggest that basic recognition processes do not explain challenges in computer-mediated communication reported by adults with TBI. Examining perception of emoji in context by people with TBI is an essential next step for advancing our understanding of functional communication in computer-mediated contexts after brain injury.

    Additional information

    supplementary information
  • Collins, J. (2016). The role of language contact in creating correlations between humidity and tone. Journal of Language Evolution, 46-52. doi:10.1093/jole/lzv012.
  • Coopmans, C. W., Struiksma, M. E., Coopmans, P. H. A., & Chen, A. (2023). Processing of grammatical agreement in the face of variation in lexical stress: A mismatch negativity study. Language and Speech, 66(1), 202-213. doi:10.1177/00238309221098116.

    Abstract

    Previous electroencephalography studies have yielded evidence for automatic processing of syntax and lexical stress. However, these studies looked at both effects in isolation, limiting their generalizability to everyday language comprehension. In the current study, we investigated automatic processing of grammatical agreement in the face of variation in lexical stress. Using an oddball paradigm, we measured the Mismatch Negativity (MMN) in Dutch-speaking participants while they listened to Dutch subject–verb sequences (linguistic context) or acoustically similar sequences in which the subject was replaced by filtered noise (nonlinguistic context). The verb forms differed in the inflectional suffix, rendering the subject–verb sequences grammatically correct or incorrect, and leading to a difference in the stress pattern of the verb forms. We found that the MMNs were modulated in both the linguistic and nonlinguistic condition, suggesting that the processing load induced by variation in lexical stress can hinder early automatic processing of grammatical agreement. However, as the morphological differences between the verb forms correlated with differences in number of syllables, an interpretation in terms of the prosodic structure of the sequences cannot be ruled out. Future research is needed to determine which of these factors (i.e., lexical stress, syllabic structure) most strongly modulate early syntactic processing.

    Additional information

    supplementary material
  • Coopmans, C. W., Mai, A., Slaats, S., Weissbart, H., & Martin, A. E. (2023). What oscillations can do for syntax depends on your theory of structure building. Nature Reviews Neuroscience, 24, 723. doi:10.1038/s41583-023-00734-5.
  • Coopmans, C. W., Kaushik, K., & Martin, A. E. (2023). Hierarchical structure in language and action: A formal comparison. Psychological Review, 130(4), 935-952. doi:10.1037/rev0000429.

    Abstract

    Since the cognitive revolution, language and action have been compared as cognitive systems, with cross-domain convergent views recently gaining renewed interest in biology, neuroscience, and cognitive science. Language and action are both combinatorial systems whose mode of combination has been argued to be hierarchical, combining elements into constituents of increasingly larger size. This structural similarity has led to the suggestion that they rely on shared cognitive and neural resources. In this article, we compare the conceptual and formal properties of hierarchy in language and action using set theory. We show that the strong compositionality of language requires a particular formalism, a magma, to describe the algebraic structure corresponding to the set of hierarchical structures underlying sentences. When this formalism is applied to actions, it appears to be both too strong and too weak. To overcome these limitations, which are related to the weak compositionality and sequential nature of action structures, we formalize the algebraic structure corresponding to the set of actions as a trace monoid. We aim to capture the different system properties of language and action in terms of the distinction between hierarchical sets and hierarchical sequences and discuss the implications for the way both systems could be represented in the brain.
  • Corps, R. E., Liao, M., & Pickering, M. J. (2023). Evidence for two stages of prediction in non-native speakers: A visual-world eye-tracking study. Bilingualism: Language and Cognition, 26(1), 231-243. doi:10.1017/S1366728922000499.

    Abstract

    Comprehenders predict what a speaker is likely to say when listening to non-native (L2) and native (L1) utterances. But what are the characteristics of L2 prediction, and how does it relate to L1 prediction? We addressed this question in a visual-world eye-tracking experiment, which tested when L2 English comprehenders integrated perspective into their predictions. Male and female participants listened to male and female speakers producing sentences (e.g., I would like to wear the nice…) about stereotypically masculine (target: tie; distractor: drill) and feminine (target: dress; distractor: hairdryer) objects. Participants predicted associatively, fixating objects semantically associated with critical verbs (here, the tie and the dress). They also predicted stereotypically consistent objects (e.g., the tie rather than the dress, given the male speaker). Consistent predictions were made later than associative predictions, and were delayed for L2 speakers relative to L1 speakers. These findings suggest prediction involves both automatic and non-automatic stages.
  • Corps, R. E., & Pickering, M. (2023). Response planning during question-answering: Does deciding what to say involve deciding how to say it? Psychonomic Bulletin & Review. Advance online publication. doi:10.3758/s13423-023-02382-3.

    Abstract

    To answer a question, speakers must determine their response and formulate it in words. But do they decide on a response before formulation, or do they formulate different potential answers before selecting one? We addressed this issue in a verbal question-answering experiment. Participants answered questions more quickly when they had one potential answer (e.g., Which tourist attraction in Paris is very tall?) than when they had multiple potential answers (e.g., What is the name of a Shakespeare play?). Participants also answered more quickly when the potential answers were on average short rather than long, regardless of whether there was only one or multiple potential answers. Thus, participants were not affected by the linguistic complexity of unselected but plausible answers. These findings suggest that participants select a single answer before formulation.
  • Corps, R. E., & Meyer, A. S. (2023). Word frequency has similar effects in picture naming and gender decision: A failure to replicate Jescheniak and Levelt (1994). Acta Psychologica, 241: 104073. doi:10.1016/j.actpsy.2023.104073.

    Abstract

    Word frequency plays a key role in theories of lexical access, which assume that the word frequency effect (WFE, faster access to high-frequency than low-frequency words) occurs as a result of differences in the representation and processing of the words. In a seminal paper, Jescheniak and Levelt (1994) proposed that the WFE arises during the retrieval of word forms, rather than the retrieval of their syntactic representations (their lemmas) or articulatory commands. An important part of Jescheniak and Levelt's argument was that they found a stable WFE in a picture naming task, which requires complete lexical access, but not in a gender decision task, which only requires access to the words' lemmas and not their word forms. We report two attempts to replicate this pattern, one with new materials, and one with Jescheniak and Levelt's original pictures. In both studies we found a strong WFE when the pictures were shown for the first time, but much weaker effects on their second and third presentation. Importantly, these patterns were seen in both the picture naming and the gender decision tasks, suggesting that either word frequency does not exclusively affect word form retrieval, or that the gender decision task does not exclusively tap lemma access.

    Additional information

    raw data and analysis scripts
  • Corps, R. E., Yang, F., & Pickering, M. (2023). Evidence against egocentric prediction during language comprehension. Royal Society Open Science, 10(12): 231252. doi:10.1098/rsos.231252.

    Abstract

    Although previous research has demonstrated that language comprehension can be egocentric, there is little evidence for egocentricity during prediction. In particular, comprehenders do not appear to predict egocentrically when the context makes it clear what the speaker is likely to refer to. But do comprehenders predict egocentrically when the context does not make it clear? We tested this hypothesis using a visual-world eye-tracking paradigm, in which participants heard sentences containing the gender-neutral pronoun They (e.g. They would like to wear…) while viewing four objects (e.g. tie, dress, drill, hairdryer). Two of these objects were plausible targets of the verb (tie and dress), and one was stereotypically compatible with the participant's gender (tie if the participant was male; dress if the participant was female). Participants rapidly fixated targets more than distractors, but there was no evidence that participants ever predicted egocentrically, fixating objects stereotypically compatible with their own gender. These findings suggest that participants do not fall back on their own egocentric perspective when predicting, even when they know that context does not make it clear what the speaker is likely to refer to.
  • Corradi, Z., Khan, M., Hitti-Malin, R., Mishra, K., Whelan, L., Cornelis, S. S., ABCA4-Study Group, Hoyng, C. B., Kämpjärvi, K., Klaver, C. C. W., Liskova, P., Stohr, H., Weber, B. H. F., Banfi, S., Farrar, G. J., Sharon, D., Zernant, J., Allikmets, R., Dhaenens, C.-M., & Cremers, F. P. M. (2023). Targeted sequencing and in vitro splice assays shed light on ABCA4-associated retinopathies missing heritability. Human Genetics and Genomics Advances, 4(4): 100237. doi:10.1016/j.xhgg.2023.100237.

    Abstract

    The ABCA4 gene is the most frequently mutated Mendelian retinopathy-associated gene. Biallelic variants lead to a variety of phenotypes; however, for thousands of cases the underlying variants remain unknown. Here, we aim to shed further light on the missing heritability of ABCA4-associated retinopathy by analyzing a large cohort of macular dystrophy probands. A total of 858 probands were collected from 26 centers, of whom 722 carried no or one pathogenic ABCA4 variant, while 136 cases carried two ABCA4 alleles, one of which was a frequent mild variant, suggesting that deep-intronic variants (DIVs) or other cis-modifiers might have been missed. After single molecule molecular inversion probes (smMIPs)-based sequencing of the complete 128-kb ABCA4 locus, the effect of putative splice variants was assessed in vitro by midigene splice assays in HEK293T cells. The breakpoints of copy number variants (CNVs) were determined by junction PCR and Sanger sequencing. ABCA4 sequence analysis solved 207/520 (39.8%) naïve or unsolved cases and 70/202 (34.7%) monoallelic cases, while additional causal variants were identified in 54/136 (39.7%) of probands carrying two variants. Seven novel DIVs and six novel non-canonical splice site variants were detected in a total of 35 alleles and characterized, including the c.6283-321C>G variant leading to a complex splicing defect. Additionally, four novel CNVs were identified and characterized in five alleles. These results confirm that smMIPs-based sequencing of the complete ABCA4 gene provides a cost-effective method to genetically solve retinopathy cases and that several rare structural and splice altering defects remain undiscovered in STGD1 cases.
  • Coventry, K. R., Gudde, H. B., Diessel, H., Collier, J., Guijarro-Fuentes, P., Vulchanova, M., Vulchanov, V., Todisco, E., Reile, M., Breunesse, M., Plado, H., Bohnemeyer, J., Bsili, R., Caldano, M., Dekova, R., Donelson, K., Forker, D., Park, Y., Pathak, L. S., Peeters, D., Pizzuto, G., Serhan, B., Apse, L., Hesse, F., Hoang, L., Hoang, P., Igari, Y., Kapiley, K., Haupt-Khutsishvili, T., Kolding, S., Priiki, K., Mačiukaitytė, I., Mohite, V., Nahkola, T., Tsoi, S. Y., Williams, S., Yasuda, S., Cangelosi, A., Duñabeitia, J. A., Mishra, R. K., Rocca, R., Šķilters, J., Wallentin, M., Žilinskaitė-Šinkūnienė, E., & Incel, O. D. (2023). Spatial communication systems across languages reflect universal action constraints. Nature Human Behaviour, 7, 2099-2110. doi:10.1038/s41562-023-01697-4.

    Abstract

    The extent to which languages share properties reflecting the non-linguistic constraints of the speakers who speak them is key to the debate regarding the relationship between language and cognition. A critical case is spatial communication, where it has been argued that semantic universals should exist, if anywhere. Here, using an experimental paradigm able to separate variation within a language from variation between languages, we tested the use of spatial demonstratives—the most fundamental and frequent spatial terms across languages. In n = 874 speakers across 29 languages, we show that speakers of all tested languages use spatial demonstratives as a function of being able to reach or act on an object being referred to. In some languages, the position of the addressee is also relevant in selecting between demonstrative forms. Commonalities and differences across languages in spatial communication can be understood in terms of universal constraints on action shaping spatial language and cognition.
  • Cox, C., Bergmann, C., Fowler, E., Keren-Portnoy, T., Roepstorff, A., Bryant, G., & Fusaroli, R. (2023). A systematic review and Bayesian meta-analysis of the acoustic features of infant-directed speech. Nature Human Behaviour, 7, 114-133. doi:10.1038/s41562-022-01452-1.

    Abstract

    When speaking to infants, adults often produce speech that differs systematically from that directed to other adults. In order to quantify the acoustic properties of this speech style across a wide variety of languages and cultures, we extracted results from empirical studies on the acoustic features of infant-directed speech (IDS). We analyzed data from 88 unique studies (734 effect sizes) on the following five acoustic parameters that have been systematically examined in the literature: i) fundamental frequency (fo), ii) fo variability, iii) vowel space area, iv) articulation rate, and v) vowel duration. Moderator analyses were conducted in hierarchical Bayesian robust regression models in order to examine how these features change with infant age and differ across languages, experimental tasks and recording environments. The moderator analyses indicated that fo, articulation rate, and vowel duration became more similar to adult-directed speech (ADS) over time, whereas fo variability and vowel space area exhibited stability throughout development. These results point the way for future research to disentangle different accounts of the functions and learnability of IDS by conducting theory-driven comparisons among different languages and using computational models to formulate testable predictions.

    Additional information

    supplementary information
  • Cozijn, R., Vonk, W., & Noordman, L. G. M. (2003). Afleidingen uit oogbewegingen: De invloed van het connectief 'omdat' op het maken van causale inferenties. Gramma/TTT, 9, 141-156.
  • Croijmans, I. (2016). Gelukkig kunnen we erover praten: Over de kunst om geuren en smaken in woorden te omschrijven. koffieTcacao, 17, 80-81.
  • Croijmans, I., & Majid, A. (2016). Not all flavor expertise is equal: The language of wine and coffee experts. PLoS One, 11(6): e0155845. doi:10.1371/journal.pone.0155845.

    Abstract

    People in Western cultures are poor at naming smells and flavors. However, for wine and coffee experts, describing smells and flavors is part of their daily routine. So are experts better than lay people at conveying smells and flavors in language? If smells and flavors are more easily linguistically expressed by experts, or more codable, then experts should be better than novices at describing smells and flavors. If experts are indeed better, we can also ask how general this advantage is: do experts show higher codability only for smells and flavors they are expert in (i.e., wine experts for wine and coffee experts for coffee) or is their linguistic dexterity more general? To address these questions, wine experts, coffee experts, and novices were asked to describe the smell and flavor of wines, coffees, everyday odors, and basic tastes. The resulting descriptions were compared on a number of measures. We found that expertise endows a modest advantage in smell and flavor naming. Wine experts showed more consistency in how they described wine smells and flavors than coffee experts and novices, but coffee experts were not more consistent for coffee descriptions. Neither expert group was any more accurate at identifying everyday smells or tastes. Interestingly, both wine and coffee experts tended to use more source-based terms (e.g., vanilla) in descriptions of their own area of expertise, whereas novices tended to use more evaluative terms (e.g., nice). However, the overall linguistic strategies of the two groups were on par. To conclude, experts have only a limited, domain-specific advantage when communicating about smells and flavors. The ability to communicate about smells and flavors is a matter not only of perceptual training, but of specific linguistic training too.

    Additional information

    Data availability
  • Cronin, K. A., West, V., & Ross, S. R. (2016). Investigating the Relationship between Welfare and Rearing Young in Captive Chimpanzees (Pan troglodytes). Applied Animal Behaviour Science, 181, 166-172. doi:10.1016/j.applanim.2016.05.014.

    Abstract

    Whether the opportunity to breed and rear young improves the welfare of captive animals is currently debated. However, there is very little empirical data available to evaluate this relationship and this study is a first attempt to contribute objective data to this debate. We utilized the existing variation in the reproductive experiences of sanctuary chimpanzees at Chimfunshi Wildlife Orphanage Trust in Zambia to investigate whether breeding and rearing young was associated with improved welfare for adult females (N = 43). We considered several behavioural welfare indicators, including rates of luxury behaviours and abnormal or stress-related behaviours under normal conditions and conditions inducing social stress. Furthermore, we investigated whether spending time with young was associated with good or poor welfare for adult females, regardless of their kin relationship. We used generalized linear mixed models and found no difference between adult females with and without dependent young on any welfare indices, nor did we find that time spent in proximity to unrelated young predicted welfare (all full-null model comparisons likelihood ratio tests P > 0.05). However, we did find that coprophagy was more prevalent among mother-reared than non-mother-reared individuals, in line with recent work suggesting this behaviour may have a different etiology than other behaviours often considered to be abnormal. In sum, the findings from this initial study lend support to the hypothesis that the opportunity to breed and rear young does not provide a welfare benefit for chimpanzees in captivity. We hope this investigation provides a valuable starting point for empirical study into the welfare implications of managed breeding.

    Additional information

    mmc1.pdf
  • Cutler, A., & Norris, D. (2016). Bottoms up! How top-down pitfalls ensnare speech perception researchers too. Commentary on C. Firestone & B. Scholl: Cognition does not affect perception: Evaluating the evidence for 'top-down' effects. Behavioral and Brain Sciences, e236. doi:10.1017/S0140525X15002745.

    Abstract

    Not only can the pitfalls that Firestone & Scholl (F&S) identify be generalised across multiple studies within the field of visual perception, but also they have general application outside the field wherever perceptual and cognitive processing are compared. We call attention to the widespread susceptibility of research on the perception of speech to versions of the same pitfalls.
  • Cutler, A. (1992). Cross-linguistic differences in speech segmentation. MRC News, 56, 8-9.
  • Cutler, A., & Norris, D. (1992). Detection of vowels and consonants with minimal acoustic variation. Speech Communication, 11, 101-108. doi:10.1016/0167-6393(92)90004-Q.

    Abstract

    Previous research has shown that, in a phoneme detection task, vowels produce longer reaction times than consonants, suggesting that they are harder to perceive. One possible explanation for this difference is based upon their respective acoustic/articulatory characteristics. Another way of accounting for the findings would be to relate them to the differential functioning of vowels and consonants in the syllabic structure of words. In this experiment, we examined the second possibility. Targets were two pairs of phonemes, each containing a vowel and a consonant with similar phonetic characteristics. Subjects heard lists of English words and had to press a response key upon detecting the occurrence of a pre-specified target. This time, the phonemes which functioned as vowels in syllabic structure yielded shorter reaction times than those which functioned as consonants. This rules out an explanation for the response time difference between vowels and consonants in terms of function in syllable structure. Instead, we propose that consonantal and vocalic segments differ with respect to variability of tokens, both in the acoustic realisation of targets and in the representation of targets by listeners.
  • Cutler, A. (1992). Proceedings with confidence. New Scientist, (1825), 54.
  • Cutler, A., & Norris, D. (1999). Sharpening Ockham’s razor (Commentary on W.J.M. Levelt, A. Roelofs & A.S. Meyer: A theory of lexical access in speech production). Behavioral and Brain Sciences, 22, 40-41.

    Abstract

    Language production and comprehension are intimately interrelated; and models of production and comprehension should, we argue, be constrained by common architectural guidelines. Levelt et al.'s target article adopts as guiding principle Ockham's razor: the best model of production is the simplest one. We recommend adoption of the same principle in comprehension, with consequent simplification of some well-known types of models.