Publications

  • Levelt, W. J. M. (2004). Language. In G. Adelman, & B. H. Smith (Eds.), Elsevier's encyclopedia of neuroscience [CD-ROM] (3rd ed.). Amsterdam: Elsevier.
  • Levelt, W. J. M. (1995). Hoezo 'neuro'? Hoezo 'linguïstisch'? [What do you mean, 'neuro'? What do you mean, 'linguistic'?]. Intermediair, 31(46), 32-37.
  • Levelt, W. J. M., & Schiller, N. O. (1998). Is the syllable frame stored? [Commentary on the BBS target article 'The frame/content theory of evolution of speech production' by Peter F. MacNeilage]. Behavioral and Brain Sciences, 21, 520.

    Abstract

    This commentary discusses whether abstract metrical frames are stored. For stress-assigning languages (e.g., Dutch and English), which have a dominant stress pattern, metrical frames are stored only for words that deviate from the default stress pattern. The majority of the words in these languages are produced without retrieving any independent syllabic or metrical frame.
  • Levelt, W. J. M. (1995). Psycholinguistics. In C. C. French, & A. M. Colman (Eds.), Cognitive psychology (reprint, pp. 39-57). London: Longman.
  • Levelt, W. J. M. (1995). The ability to speak: From intentions to spoken words. European Review, 3(1), 13-23. doi:10.1017/S1062798700001290.

    Abstract

    In recent decades, psychologists have become increasingly interested in our ability to speak. This paper sketches the present theoretical perspective on this most complex skill of Homo sapiens. The generation of fluent speech is based on the interaction of various processing components. These mechanisms are highly specialized, dedicated to performing specific subroutines, such as retrieving appropriate words, generating morpho-syntactic structure, computing the phonological target shape of syllables, words, phrases and whole utterances, and creating and executing articulatory programmes. As in any complex skill, there is a self-monitoring mechanism that checks the output. These component processes are targets of increasingly sophisticated experimental research, of which this paper presents a few salient examples.
  • Levelt, W. J. M., Schreuder, R., & Hoenkamp, E. (1976). Struktur und Gebrauch von Bewegungsverben [Structure and use of motion verbs]. Zeitschrift für Literaturwissenschaft und Linguistik, 6(23/24), 131-152.
  • Levelt, W. J. M., & Kempen, G. (1976). Taal [Language]. In J. Michon, E. Eijkman, & L. De Klerk (Eds.), Handboek der Psychonomie (pp. 492-523). Deventer: Van Loghum Slaterus.
  • Levelt, W. J. M. (1998). The genetic perspective in psycholinguistics, or: Where do spoken words come from? Journal of Psycholinguistic Research, 27(2), 167-180. doi:10.1023/A:1023245931630.

    Abstract

    The core issue in the 19th-century sources of psycholinguistics was the question, "Where does language come from?" This genetic perspective unified the study of the ontogenesis, the phylogenesis, the microgenesis, and to some extent the neurogenesis of language. This paper makes the point that this original perspective is still a valid and attractive one. It is exemplified by a discussion of the genesis of spoken words.
  • Levinson, S. C. (1995). 'Logical' Connectives in Natural Language: A First Questionnaire. In D. Wilkins (Ed.), Extensions of space and beyond: manual for field elicitation for the 1995 field season (pp. 61-69). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.3513476.

    Abstract

    It has been hypothesised that human reasoning has a non-linguistic foundation, but is nevertheless influenced by the formal means available in a language. For example, Western logic is transparently related to European sentential connectives (e.g., and, if … then, or, not), some of which cannot be unambiguously expressed in other languages. The questionnaire explores reasoning tools and practices through investigating translation equivalents of English sentential connectives and collecting examples of “reasoned arguments”.
  • Levinson, S. C. (1998). Deixis. In J. L. Mey (Ed.), Concise encyclopedia of pragmatics (pp. 200-204). Amsterdam: Elsevier.
  • Levinson, S. C. (2004). Deixis. In L. Horn (Ed.), The handbook of pragmatics (pp. 97-121). Oxford: Blackwell.
  • Levinson, S. C. (1998). Minimization and conversational inference. In A. Kasher (Ed.), Pragmatics: Vol. 4 Presupposition, implicature and indirect speech acts (pp. 545-612). London: Routledge.
  • Levinson, S. C. (1995). Interactional biases in human thinking. In E. N. Goody (Ed.), Social intelligence and interaction (pp. 221-260). Cambridge: Cambridge University Press.
  • Levinson, S. C. (1998). Studying spatial conceptualization across cultures: Anthropology and cognitive science. Ethos, 26(1), 7-24. doi:10.1525/eth.1998.26.1.7.

    Abstract

    Philosophers, psychologists, and linguists have argued that spatial conception is pivotal to cognition in general, providing a general, egocentric, and universal framework for cognition as well as metaphors for conceptualizing many other domains. But in an aboriginal community in Northern Queensland, a system of cardinal directions informs not only language, but also memory for arbitrary spatial arrays and directions. This work suggests that fundamental cognitive parameters, like the system of coding spatial locations, can vary cross-culturally, in line with the language spoken by a community. This opens up the prospect of a fruitful dialogue between anthropology and the cognitive sciences on the complex interaction between cultural and universal factors in the constitution of mind.
  • Levinson, S. C. (1995). Three levels of meaning. In F. Palmer (Ed.), Grammar and meaning: Essays in honour of Sir John Lyons (pp. 90-115). Cambridge: Cambridge University Press.
  • Levinson, S. C. (2023). On cognitive artifacts. In R. Feldhay (Ed.), The evolution of knowledge: A scientific meeting in honor of Jürgen Renn (pp. 59-78). Berlin: Max Planck Institute for the History of Science.

    Abstract

    Wearing the hat of a cognitive anthropologist rather than an historian, I will try to amplify Renn’s ideas cited above. I argue that a particular subclass of material objects, namely “cognitive artifacts,” involves a close coupling of mind and artifact that acts like a brain prosthesis. Simple cognitive artifacts are external objects that act as aids to internal computation, and not all cultures have extended inventories of these. Cognitive artifacts in this sense (e.g., calculating or measuring devices) have clearly played a central role in the history of science. But the notion can be widened to take in less material externalizations of cognition, like writing and language itself. A critical question here is how and why this close coupling of internal computation and external device actually works, a rather neglected question to which I’ll suggest some answers.

    Additional information

    link to book
  • Levinson, S. C. (2023). Gesture, spatial cognition and the evolution of language. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 378(1875): 20210481. doi:10.1098/rstb.2021.0481.

    Abstract

    Human communication displays a striking contrast between the diversity of languages and the universality of the principles underlying their use in conversation. Despite the importance of this interactional base, it is not obvious that it heavily imprints the structure of languages. However, a deep-time perspective suggests that early hominin communication was gestural, in line with all the other Hominidae. This gestural phase of early language development seems to have left its traces in the way in which spatial concepts, implemented in the hippocampus, provide organizing principles at the heart of grammar.
  • Levshina, N., Namboodiripad, S., Allassonnière-Tang, M., Kramer, M., Talamo, L., Verkerk, A., Wilmoth, S., Garrido Rodriguez, G., Gupton, T. M., Kidd, E., Liu, Z., Naccarato, C., Nordlinger, R., Panova, A., & Stoynova, N. (2023). Why we need a gradient approach to word order. Linguistics, 61(4), 825-883. doi:10.1515/ling-2021-0098.

    Abstract

    This article argues for a gradient approach to word order, which treats word order preferences, both within and across languages, as a continuous variable. Word order variability should be regarded as a basic assumption, rather than as something exceptional. Although this approach follows naturally from the emergentist usage-based view of language, we argue that it can be beneficial for all frameworks and linguistic domains, including language acquisition, processing, typology, language contact, language evolution and change, and formal approaches. Gradient approaches have been very fruitful in some domains, such as language processing, but their potential is not fully realized yet. This may be due to practical reasons. We discuss the most pressing methodological challenges in corpus-based and experimental research of word order and propose some practical solutions.
  • Lewis, A. G., Schoffelen, J.-M., Bastiaansen, M., & Schriefers, H. (2023). Is beta in agreement with the relatives? Using relative clause sentences to investigate MEG beta power dynamics during sentence comprehension. Psychophysiology, 60(10): e14332. doi:10.1111/psyp.14332.

    Abstract

    There remains some debate about whether beta power effects observed during sentence comprehension reflect ongoing syntactic unification operations (beta-syntax hypothesis), or instead reflect maintenance or updating of the sentence-level representation (beta-maintenance hypothesis). In this study, we used magnetoencephalography to investigate beta power neural dynamics while participants read relative clause sentences that were initially ambiguous between a subject- or an object-relative reading. An additional condition included a grammatical violation at the disambiguation point in the relative clause sentences. The beta-maintenance hypothesis predicts a decrease in beta power at the disambiguation point for unexpected (and less preferred) object-relative clause sentences and grammatical violations, as both signal a need to update the sentence-level representation. While the beta-syntax hypothesis also predicts a beta power decrease for grammatical violations due to a disruption of syntactic unification operations, it instead predicts an increase in beta power for the object-relative clause condition because syntactic unification at the point of disambiguation becomes more demanding. We observed decreased beta power for both the agreement violation and object-relative clause conditions in typical left hemisphere language regions, which provides compelling support for the beta-maintenance hypothesis. Mid-frontal theta power effects were also present for grammatical violations and object-relative clause sentences, suggesting that violations and unexpected sentence interpretations are registered as conflicts by the brain's domain-general error detection system.

    Additional information

    data
  • Lindström, E. (2004). Melanesian kinship and culture. In A. Majid (Ed.), Field Manual Volume 9 (pp. 70-73). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.1552190.
  • Lingwood, J., Lampropoulou, S., De Bezena, C., Billington, J., & Rowland, C. F. (2023). Children’s engagement and caregivers’ use of language-boosting strategies during shared book reading: A mixed methods approach. Journal of Child Language, 50(6), 1436-1458. doi:10.1017/S0305000922000290.

    Abstract

    For shared book reading to be effective for language development, the adult and child need to be highly engaged. The current paper adopted a mixed-methods approach to investigate caregivers’ language-boosting behaviours and children’s engagement during shared book reading. The results revealed there were more instances of joint attention and caregivers’ use of prompts during moments of higher engagement. However, instances of most language-boosting behaviours were similar across episodes of higher and lower engagement. Qualitative analysis assessing the link between children’s engagement and caregivers’ use of speech acts revealed that speech acts do seem to contribute to high engagement, in combination with other aspects of the interaction.
  • Liszkowski, U., Carpenter, M., Henning, A., Striano, T., & Tomasello, M. (2004). Twelve-month-olds point to share attention and interest. Developmental Science, 7(3), 297-307. doi:10.1111/j.1467-7687.2004.00349.x.

    Abstract

    Infants point for various motives. Classically, one such motive is declarative, to share attention and interest with adults to events. Recently, some researchers have questioned whether infants have this motivation. In the current study, an adult reacted to 12-month-olds' pointing in different ways, and infants' responses were observed. Results showed that when the adult shared attention and interest (i.e. alternated gaze and emoted), infants pointed more frequently across trials and tended to prolong each point – presumably to prolong the satisfying interaction. However, when the adult emoted to the infant alone or looked only to the event, infants pointed less across trials and repeated points more within trials – presumably in an attempt to establish joint attention. Results suggest that 12-month-olds point declaratively and understand that others have psychological states that can be directed and shared.
  • Loo, S. K., Fisher, S. E., Francks, C., Ogdie, M. N., MacPhie, I. L., Yang, M., McCracken, J. T., McGough, J. J., Nelson, S. F., Monaco, A. P., & Smalley, S. L. (2004). Genome-wide scan of reading ability in affected sibling pairs with attention-deficit/hyperactivity disorder: Unique and shared genetic effects. Molecular Psychiatry, 9, 485-493. doi:10.1038/sj.mp.4001450.

    Abstract

    Attention-deficit/hyperactivity disorder (ADHD) and reading disability (RD) are common highly heritable disorders of childhood, which frequently co-occur. Data from twin and family studies suggest that this overlap is, in part, due to shared genetic underpinnings. Here, we report the first genome-wide linkage analysis of measures of reading ability in children with ADHD, using a sample of 233 affected sibling pairs who previously participated in a genome-wide scan for susceptibility loci in ADHD. Quantitative trait locus (QTL) analysis of a composite reading factor defined from three highly correlated reading measures identified suggestive linkage (multipoint maximum lod score, MLS>2.2) in four chromosomal regions. Two regions (16p, 17q) overlap those implicated by our previous genome-wide scan for ADHD in the same sample: one region (2p) provides replication for an RD susceptibility locus, and one region (10q) falls approximately 35 cM from a modestly highlighted region in an independent genome-wide scan of siblings with ADHD. Investigation of an individual reading measure of Reading Recognition supported linkage to putative RD susceptibility regions on chromosome 8p (MLS=2.4) and 15q (MLS=1.38). Thus, the data support the existence of genetic factors that have pleiotropic effects on ADHD and reading ability--as suggested by shared linkages on 16p, 17q and possibly 10q--but also those that appear to be unique to reading--as indicated by linkages on 2p, 8p and 15q that coincide with those previously found in studies of RD. Our study also suggests that reading measures may represent useful phenotypes in ADHD research. The eventual identification of genes underlying these unique and shared linkages may increase our understanding of ADHD, RD and the relationship between the two.
  • Lumaca, M., Bonetti, L., Brattico, E., Baggio, G., Ravignani, A., & Vuust, P. (2023). High-fidelity transmission of auditory symbolic material is associated with reduced right–left neuroanatomical asymmetry between primary auditory regions. Cerebral Cortex, 33(11), 6902-6919. doi:10.1093/cercor/bhad009.

    Abstract

    The intergenerational stability of auditory symbolic systems, such as music, is thought to rely on brain processes that allow the faithful transmission of complex sounds. Little is known about the functional and structural aspects of the human brain which support this ability, with a few studies pointing to the bilateral organization of auditory networks as a putative neural substrate. Here, we further tested this hypothesis by examining the role of left–right neuroanatomical asymmetries between auditory cortices. We collected neuroanatomical images from a large sample of participants (nonmusicians) and analyzed them with Freesurfer’s surface-based morphometry method. Weeks after scanning, the same individuals participated in a laboratory experiment that simulated music transmission: the signaling games. We found that high accuracy in the intergenerational transmission of an artificial tone system was associated with reduced rightward asymmetry of cortical thickness in Heschl’s sulcus. Our study suggests that the high-fidelity copying of melodic material may rely on the extent to which computational neuronal resources are distributed across hemispheres. Our data further support the role of interhemispheric brain organization in the cultural transmission and evolution of auditory symbolic systems.
  • Magyari, L. (2004). Nyelv és/vagy evolúció? [Language and/or evolution?] [Book review]. Magyar Pszichológiai Szemle, 59(4), 591-607. doi:10.1556/MPSzle.59.2004.4.7.

    Abstract

    Language and/or evolution: Is an evolutionary explanation of language possible? [Derek Bickerton: Nyelv és evolúció] (Lilla Magyari); A historical reader on the brain [Charles G. Gross: Agy, látás, emlékezet. Mesék az idegtudomány történetéből] (Edit Anna Garab); Art or science [Margitay Tihamér: Az érvelés mestersége. Érvelések elemzése, értékelése és kritikája] (Gábor Zemplén); Are we really rational? [Herbert Simon: Az ésszerűség szerepe az emberi életben] (Péter Kardos); Sex differences in cognition [Doreen Kimura: Női agy, férfi agy] (Noémi Hahn).
  • Majid, A. (2004). Out of context. The Psychologist, 17(6), 330-330.
  • Majid, A. (2004). Data elicitation methods. Language Archive Newsletter, 1(2), 6-6.
  • Majid, A. (2004). Developing clinical understanding. The Psychologist, 17, 386-387.
  • Majid, A. (2004). Coned to perfection. The Psychologist, 17(7), 386-386.
  • Majid, A., Bowerman, M., Kita, S., Haun, D. B. M., & Levinson, S. C. (2004). Can language restructure cognition? The case for space. Trends in Cognitive Sciences, 8(3), 108-114. doi:10.1016/j.tics.2004.01.003.

    Abstract

    Frames of reference are coordinate systems used to compute and specify the location of objects with respect to other objects. These have long been thought of as innate concepts, built into our neurocognition. However, recent work shows that the use of such frames in language, cognition and gesture varies crossculturally, and that children can acquire different systems with comparable ease. We argue that language can play a significant role in structuring, or restructuring, a domain as fundamental as spatial cognition. This suggests we need to rethink the relation between the neurocognitive underpinnings of spatial cognition and the concepts we use in everyday thinking, and, more generally, to work out how to account for cross-cultural cognitive diversity in core cognitive domains.
  • Majid, A. (2004). An integrated view of cognition [Review of the book Rethinking implicit memory ed. by J. S. Bowers and C. J. Marsolek]. The Psychologist, 17(3), 148-149.
  • Majid, A. (2004). [Review of the book The new handbook of language and social psychology ed. by W. Peter Robinson and Howard Giles]. Language in Society, 33(3), 429-433.
  • Mak, M., Faber, M., & Willems, R. M. (2023). Different kinds of simulation during literary reading: Insights from a combined fMRI and eye-tracking study. Cortex, 162, 115-135. doi:10.1016/j.cortex.2023.01.014.

    Abstract

    Mental simulation is an important aspect of narrative reading. In a previous study, we found that gaze durations are differentially impacted by different kinds of mental simulation. Motor simulation, perceptual simulation, and mentalizing as elicited by literary short stories influenced eye movements in distinguishable ways (Mak & Willems, 2019). In the current study, we investigated the existence of a common neural locus for these different kinds of simulation. We additionally investigated whether individual differences during reading, as indexed by the eye movements, are reflected in domain-specific activations in the brain. We found a variety of brain areas activated by simulation-eliciting content, both modality-specific brain areas and a general simulation area. Individual variation in percent signal change in activated areas was related to measures of story appreciation as well as personal characteristics (i.e., transportability, perspective taking). Taken together, these findings suggest that mental simulation is supported by both domain-specific processes grounded in previous experiences, and by the neural mechanisms that underlie higher-order language processing (e.g., situation model building, event indexing, integration).

    Additional information

    figures, localizer tasks, appendix C1
  • Mamus, E., Speed, L. J., Rissman, L., Majid, A., & Özyürek, A. (2023). Lack of visual experience affects multimodal language production: Evidence from congenitally blind and sighted people. Cognitive Science, 47(1): e13228. doi:10.1111/cogs.13228.

    Abstract

    The human experience is shaped by information from different perceptual channels, but it is still debated whether and how differential experience influences language use. To address this, we compared congenitally blind, blindfolded, and sighted people's descriptions of the same motion events experienced auditorily by all participants (i.e., via sound alone) and conveyed in speech and gesture. Comparison of blind and sighted participants to blindfolded participants helped us disentangle the effects of a lifetime experience of being blind versus the task-specific effects of experiencing a motion event by sound alone. Compared to sighted people, blind people's speech focused more on path and less on manner of motion, and encoded paths in a more segmented fashion using more landmarks and path verbs. Gestures followed the speech, such that blind people pointed to landmarks more and depicted manner less than sighted people. This suggests that visual experience affects how people express spatial events in the multimodal language and that blindness may enhance sensitivity to paths of motion due to changes in event construal. These findings have implications for the claims that language processes are deeply rooted in our sensory experiences.
  • Mamus, E., Speed, L., Özyürek, A., & Majid, A. (2023). The effect of input sensory modality on the multimodal encoding of motion events. Language, Cognition and Neuroscience, 38(5), 711-723. doi:10.1080/23273798.2022.2141282.

    Abstract

    Each sensory modality has different affordances: vision has higher spatial acuity than audition, whereas audition has better temporal acuity. This may have consequences for the encoding of events and its subsequent multimodal language production—an issue that has received relatively little attention to date. In this study, we compared motion events presented as audio-only, visual-only, or multimodal (visual + audio) input and measured speech and co-speech gesture depicting path and manner of motion in Turkish. Input modality affected speech production. Speakers with audio-only input produced more path descriptions and fewer manner descriptions in speech compared to speakers who received visual input. In contrast, the type and frequency of gestures did not change across conditions. Path-only gestures dominated throughout. Our results suggest that while speech is more susceptible to auditory vs. visual input in encoding aspects of motion events, gesture is less sensitive to such differences.

    Additional information

    Supplemental material
  • Mangione-Smith, R., Elliott, M. N., Stivers, T., McDonald, L., Heritage, J., & McGlynn, E. A. (2004). Racial/ethnic variation in parent expectations for antibiotics: Implications for public health campaigns. Pediatrics, 113(5), 385-394.
  • Manhardt, F., Brouwer, S., Van Wijk, E., & Özyürek, A. (2023). Word order preference in sign influences speech in hearing bimodal bilinguals but not vice versa: Evidence from behavior and eye-gaze. Bilingualism: Language and Cognition, 26(1), 48-61. doi:10.1017/S1366728922000311.

    Abstract

    We investigated cross-modal influences between speech and sign in hearing bimodal bilinguals, proficient in a spoken and a sign language, and its consequences on visual attention during message preparation using eye-tracking. We focused on spatial expressions in which sign languages, unlike spoken languages, have a modality-driven preference to mention grounds (big objects) prior to figures (smaller objects). We compared hearing bimodal bilinguals’ spatial expressions and visual attention in Dutch and Dutch Sign Language (N = 18) to those of their hearing non-signing (N = 20) and deaf signing peers (N = 18). In speech, hearing bimodal bilinguals expressed more ground-first descriptions and fixated grounds more than hearing non-signers, showing influence from sign. In sign, they used as many ground-first descriptions as deaf signers and fixated grounds equally often, demonstrating no influence from speech. Cross-linguistic influence of word order preference and visual attention in hearing bimodal bilinguals appears to be one-directional modulated by modality-driven differences.
  • Maskalenka, K., Alagöz, G., Krueger, F., Wright, J., Rostovskaya, M., Nakhuda, A., Bendall, A., Krueger, C., Walker, S., Scally, A., & Rugg-Gunn, P. J. (2023). NANOGP1, a tandem duplicate of NANOG, exhibits partial functional conservation in human naïve pluripotent stem cells. Development, 150(2): dev201155. doi:10.1242/dev.201155.

    Abstract

    Gene duplication events can drive evolution by providing genetic material for new gene functions, and they create opportunities for diverse developmental strategies to emerge between species. To study the contribution of duplicated genes to human early development, we examined the evolution and function of NANOGP1, a tandem duplicate of the transcription factor NANOG. We found that NANOGP1 and NANOG have overlapping but distinct expression profiles, with high NANOGP1 expression restricted to early epiblast cells and naïve-state pluripotent stem cells. Sequence analysis and epitope-tagging revealed that NANOGP1 is protein coding with an intact homeobox domain. The duplication that created NANOGP1 occurred earlier in primate evolution than previously thought and has been retained only in great apes, whereas Old World monkeys have disabled the gene in different ways, including homeodomain point mutations. NANOGP1 is a strong inducer of naïve pluripotency; however, unlike NANOG, it is not required to maintain the undifferentiated status of human naïve pluripotent cells. By retaining expression, sequence and partial functional conservation with its ancestral copy, NANOGP1 exemplifies how gene duplication and subfunctionalisation can contribute to transcription factor activity in human pluripotency and development.
  • Mazzini, S., Holler, J., & Drijvers, L. (2023). Studying naturalistic human communication using dual-EEG and audio-visual recordings. STAR Protocols, 4(3): 102370. doi:10.1016/j.xpro.2023.102370.

    Abstract

    We present a protocol to study naturalistic human communication using dual-EEG and audio-visual recordings. We describe preparatory steps for data collection including setup preparation, experiment design, and piloting. We then describe the data collection process in detail which consists of participant recruitment, experiment room preparation, and data collection. We also outline the kinds of research questions that can be addressed with the current protocol, including several analysis possibilities, from conversational to advanced time-frequency analyses.
    For complete details on the use and execution of this protocol, please refer to Drijvers and Holler (2022).
  • McConnell, K. (2023). Individual differences in holistic and compositional language processing. Journal of Cognition, 6. doi:10.5334/joc.283.

    Abstract

    Individual differences in cognitive abilities are ubiquitous across the spectrum of proficient language users. Although speakers differ with regard to their memory capacity, ability for inhibiting distraction, and ability to shift between different processing levels, comprehension is generally successful. However, this does not mean it is identical across individuals; listeners and readers may rely on different processing strategies to exploit distributional information in the service of efficient understanding. In the following psycholinguistic reading experiment, we investigate potential sources of individual differences in the processing of co-occurring words. Participants read modifier-noun bigrams like absolute silence in a self-paced reading task. Backward transition probability (BTP) between the two lexemes was used to quantify the prominence of the bigram as a whole in comparison to the frequency of its parts. Of five individual difference measures (processing speed, verbal working memory, cognitive inhibition, global-local scope shifting, and personality), two proved to be significantly associated with the effect of BTP on reading times. Participants who could inhibit a distracting global environment in order to more efficiently retrieve a single part and those that preferred the local level in the shifting task showed greater effects of the co-occurrence probability of the parts. We conclude that some participants are more likely to retrieve bigrams via their parts and their co-occurrence statistics whereas others more readily retrieve the two words together as a single chunked unit.
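
    A minimal sketch of how such a backward transition probability can be estimated from corpus counts (hypothetical counts, not the study's materials or analysis code): BTP is the probability of the first word given the second, i.e. count(w1 w2) / count(w2).

    from collections import Counter

    # Hypothetical corpus counts, for illustration only.
    bigram_counts = Counter({("absolute", "silence"): 120})      # count(w1 w2)
    unigram_counts = Counter({"absolute": 900, "silence": 400})  # count(w)

    def backward_transition_probability(w1: str, w2: str) -> float:
        """P(w1 | w2): how predictable the first word is given the second."""
        return bigram_counts[(w1, w2)] / unigram_counts[w2]

    print(backward_transition_probability("absolute", "silence"))  # 0.3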
  • McDonough, L., Choi, S., Bowerman, M., & Mandler, J. M. (1998). The use of preferential looking as a measure of semantic development. In C. Rovee-Collier, L. P. Lipsitt, & H. Hayne (Eds.), Advances in infancy research (Vol. 12, pp. 336-354). Stamford, CT: Ablex Publishing.
  • McLean, B., Dunn, M., & Dingemanse, M. (2023). Two measures are better than one: Combining iconicity ratings and guessing experiments for a more nuanced picture of iconicity in the lexicon. Language and Cognition, 15(4), 719-739. doi:10.1017/langcog.2023.9.

    Abstract

    Iconicity in language is receiving increased attention from many fields, but our understanding of iconicity is only as good as the measures we use to quantify it. We collected iconicity measures for 304 Japanese words from English-speaking participants, using rating and guessing tasks. The words included ideophones (structurally marked depictive words) along with regular lexical items from similar semantic domains (e.g., fuwafuwa ‘fluffy’, jawarakai ‘soft’). The two measures correlated, speaking to their validity. However, ideophones received consistently higher iconicity ratings than other items, even when guessed at the same accuracies, suggesting the rating task is more sensitive to cues like structural markedness that frame words as iconic. These cues did not always guide participants to the meanings of ideophones in the guessing task, but they did make them more confident in their guesses, even when they were wrong. Consistently poor guessing results reflect the role different experiences play in shaping construals of iconicity. Using multiple measures in tandem allows us to explore the interplay between iconicity and these external factors. To facilitate this, we introduce a reproducible workflow for creating rating and guessing tasks from standardised wordlists, while also making improvements to the robustness, sensitivity and discriminability of previous approaches.
  • McQueen, J. M., Cutler, A., Briscoe, T., & Norris, D. (1995). Models of continuous speech recognition and the contents of the vocabulary. Language and Cognitive Processes, 10, 309-331. doi:10.1080/01690969508407098.

    Abstract

    Several models of spoken word recognition postulate that recognition is achieved via a process of competition between lexical hypotheses. Competition not only provides a mechanism for isolated word recognition, it also assists in continuous speech recognition, since it offers a means of segmenting continuous input into individual words. We present statistics on the pattern of occurrence of words embedded in the polysyllabic words of the English vocabulary, showing that an overwhelming majority (84%) of polysyllables have shorter words embedded within them. Positional analyses show that these embeddings are most common at the onsets of the longer word. Although both phonological and syntactic constraints could rule out some embedded words, they do not remove the problem. Lexical competition provides a means of dealing with lexical embedding. It is also supported by a growing body of experimental evidence. We present results which indicate that competition operates both between word candidates that begin at the same point in the input and candidates that begin at different points (McQueen, Norris, & Cutler, 1994; Norris, McQueen, & Cutler, in press). We conclude that lexical competition is an essential component in models of continuous speech recognition.
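
    To make the embedding counts concrete, here is a toy sketch of counting which longer words contain shorter lexical items and whether the embedding sits at word onset (a tiny hypothetical lexicon and a crude length cut-off stand in for the English vocabulary and syllable counts used in the paper):

    # Toy illustration of lexical embedding counts (e.g., 'ham' inside 'hamster').
    lexicon = {"ham", "hamster", "and", "sand", "sandal", "can", "candle"}

    def embeddings(word, words):
        """Shorter lexical items occurring inside `word`, with their start positions."""
        return [(w, i) for w in words if w != word and len(w) < len(word)
                for i in range(len(word) - len(w) + 1) if word[i:i + len(w)] == w]

    longer_words = [w for w in lexicon if len(w) > 3]  # crude stand-in for 'polysyllabic'
    with_embedding = [w for w in longer_words if embeddings(w, lexicon)]
    at_onset = sum(1 for w in longer_words for _, i in embeddings(w, lexicon) if i == 0)
    print(len(with_embedding) / len(longer_words), at_onset)  # proportion embedded, onset embeddings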
  • McQueen, J. M., & Cutler, A. (1998). Morphology in word recognition. In A. M. Zwicky, & A. Spencer (Eds.), The handbook of morphology (pp. 406-427). Oxford: Blackwell.
  • McQueen, J. M., Jesse, A., & Mitterer, H. (2023). Lexically mediated compensation for coarticulation still as elusive as a white Christmash. Cognitive Science, 47(9): e13342. doi:10.1111/cogs.13342.

    Abstract

    Luthra, Peraza-Santiago, Beeson, Saltzman, Crinnion, and Magnuson (2021) present data from the lexically mediated compensation for coarticulation paradigm that they claim provides conclusive evidence in favor of top-down processing in speech perception. We argue here that this evidence does not support that conclusion. The findings are open to alternative explanations, and we give data in support of one of them (that there is an acoustic confound in the materials). Lexically mediated compensation for coarticulation thus remains elusive, while prior data from the paradigm instead challenge the idea that there is top-down processing in online speech recognition.

    Additional information

    supplementary materials
  • Meeuwissen, M., Roelofs, A., & Levelt, W. J. M. (2004). Naming analog clocks conceptually facilitates naming digital clocks. Brain and Language, 90(1-3), 434-440. doi:10.1016/S0093-934X(03)00454-1.

    Abstract

    This study investigates how speakers of Dutch compute and produce relative time expressions. Naming digital clocks (e.g., 2:45, say “quarter to three”) requires conceptual operations on the minute and hour information for the correct relative time expression. The interplay of these conceptual operations was investigated using a repetition priming paradigm. Participants named analog clocks (the primes) directly before naming digital clocks (the targets). The targets referred to the hour (e.g., 2:00), half past the hour (e.g., 2:30), or the coming hour (e.g., 2:45). The primes differed from the target by one or two hours and by five or ten minutes. Digital clock naming latencies were shorter with a five- than with a ten-minute difference between prime and target, but the difference in hours had no effect. Moreover, the distance in minutes had an effect only for half past the hour and the coming hour, not for the hour. These findings suggest that conceptual facilitation occurs when conceptual transformations are shared between prime and target in telling time.
  • Melinger, A., & Levelt, W. J. M. (2004). Gesture and the communicative intention of the speaker. Gesture, 4(2), 119-141.

    Abstract

    This paper aims to determine whether iconic tracing gestures produced while speaking constitute part of the speaker’s communicative intention. We used a picture description task in which speakers must communicate the spatial and color information of each picture to an interlocutor. By establishing the necessary minimal content of an intended message, we determined whether speech produced with concurrent gestures is less explicit than speech without gestures. We argue that a gesture must be communicatively intended if it expresses necessary information that was nevertheless omitted from speech. We found that speakers who produced iconic gestures representing spatial relations omitted more required spatial information from their descriptions than speakers who did not gesture. These results provide evidence that speakers intend these gestures to communicate. The results have implications for the cognitive architectures that underlie the production of gesture and speech.
  • Meulenbroek, O., Petersson, K. M., Voermans, N., Weber, B., & Fernández, G. (2004). Age differences in neural correlates of route encoding and route recognition. Neuroimage, 22, 1503-1514. doi:10.1016/j.neuroimage.2004.04.007.

    Abstract

    Spatial memory deficits are core features of aging-related changes in cognitive abilities. The neural correlates of these deficits are largely unknown. In the present study, we investigated the neural underpinnings of age-related differences in spatial memory by functional MRI using a navigational memory task with route encoding and route recognition conditions. We investigated 20 healthy young (18-29 years old) and 20 healthy old adults (53-78 years old) in a random effects analysis. Old subjects showed slightly poorer performance than young subjects. Compared to the control condition, route encoding and route recognition showed activation of the dorsal and ventral visual processing streams and the frontal eye fields in both groups of subjects. Compared to old adults, young subjects showed stronger activations during route encoding in the dorsal and the ventral visual processing stream (supramarginal gyrus and posterior fusiform/parahippocampal areas). In addition, young subjects showed weaker anterior parahippocampal activity during route recognition compared to the old group. In contrast, old compared to young subjects showed less suppressed activity in the left perisylvian region and the anterior cingulate cortex during route encoding. Our findings suggest that age-related navigational memory deficits might be caused by less effective route encoding based on reduced posterior fusiform/parahippocampal and parietal functionality combined with diminished inhibition of perisylvian and anterior cingulate cortices correlated with less effective suppression of task-irrelevant information. In contrast, age differences in neural correlates of route recognition seem to be rather subtle. Old subjects might show a diminished familiarity signal during route recognition in the anterior parahippocampal region.
  • Meyer, A. S., Van der Meulen, F. F., & Brooks, A. (2004). Eye movements during speech planning: Talking about present and remembered objects. Visual Cognition, 11, 553-576. doi:10.1080/13506280344000248.

    Abstract

    Earlier work has shown that speakers naming several objects usually look at each of them before naming them (e.g., Meyer, Sleiderink, & Levelt, 1998). In the present study, participants saw pictures and described them in utterances such as "The chair next to the cross is brown", where the colour of the first object was mentioned after another object had been mentioned. In Experiment 1, we examined whether the speakers would look at the first object (the chair) only once, before naming the object, or twice (before naming the object and before naming its colour). In Experiment 2, we examined whether speakers about to name the colour of the object would look at the object region again when the colour or the entire object had been removed while they were looking elsewhere. We found that speakers usually looked at the target object again before naming its colour, even when the colour was not displayed any more. Speakers were much less likely to fixate upon the target region when the object had been removed from view. We propose that the object contours may serve as a memory cue supporting the retrieval of the associated colour information. The results show that a speaker's eye movements in a picture description task, far from being random, depend on the available visual information and the content and structure of the planned utterance.
  • Meyer, A. S. (2004). The use of eye tracking in studies of sentence generation. In J. M. Henderson, & F. Ferreira (Eds.), The interface of language, vision, and action: Eye movements and the visual world (pp. 191-212). Hove: Psychology Press.
  • Meyer, A. S., Sleiderink, A. M., & Levelt, W. J. M. (1998). Viewing and naming objects: Eye movements during noun phrase production. Cognition, 66(2), B25-B33. doi:10.1016/S0010-0277(98)00009-2.

    Abstract

    Eye movements have been shown to reflect word recognition and language comprehension processes occurring during reading and auditory language comprehension. The present study examines whether the eye movements speakers make during object naming similarly reflect speech planning processes. In Experiment 1, speakers named object pairs saying, for instance, 'scooter and hat'. The objects were presented as ordinary line drawings or with partly deleted contours and had high or low frequency names. Contour type and frequency both significantly affected the mean naming latencies and the mean time spent looking at the objects. The frequency effects disappeared in Experiment 2, in which the participants categorized the objects instead of naming them. This suggests that the frequency effects of Experiment 1 arose during lexical retrieval. We conclude that eye movements during object naming indeed reflect linguistic planning processes and that the speakers' decision to move their eyes from one object to the next is contingent upon the retrieval of the phonological form of the object names.
  • Meyer, A. S. (2023). Timing in conversation. Journal of Cognition, 6(1), 1-17. doi:10.5334/joc.268.

    Abstract

    Turn-taking in everyday conversation is fast, with median latencies in corpora of conversational speech often reported to be under 300 ms. This seems like magic, given that experimental research on speech planning has shown that speakers need much more time to plan and produce even the shortest of utterances. This paper reviews how language scientists have combined linguistic analyses of conversations and experimental work to understand the skill of swift turn-taking and proposes a tentative solution to the riddle of fast turn-taking.
  • Mickan, A., McQueen, J. M., Brehm, L., & Lemhöfer, K. (2023). Individual differences in foreign language attrition: A 6-month longitudinal investigation after a study abroad. Language, Cognition and Neuroscience, 38(1), 11-39. doi:10.1080/23273798.2022.2074479.

    Abstract

    While recent laboratory studies suggest that the use of competing languages is a driving force in foreign language (FL) attrition (i.e. forgetting), research on “real” attriters has failed to demonstrate such a relationship. We addressed this issue in a large-scale longitudinal study, following German students throughout a study abroad in Spain and their first six months back in Germany. Monthly, percentage-based frequency of use measures enabled a fine-grained description of language use. L3 Spanish forgetting rates were indeed predicted by the quantity and quality of Spanish use, and correlated negatively with L1 German and positively with L2 English letter fluency. Attrition rates were furthermore influenced by prior Spanish proficiency, but not by motivation to maintain Spanish or non-verbal long-term memory capacity. Overall, this study highlights the importance of language use for FL retention and sheds light on the complex interplay between language use and other determinants of attrition.
  • Mishra, C., Offrede, T., Fuchs, S., Mooshammer, C., & Skantze, G. (2023). Does a robot’s gaze aversion affect human gaze aversion? Frontiers in Robotics and AI, 10: 1127626. doi:10.3389/frobt.2023.1127626.

    Abstract

    Gaze cues serve an important role in facilitating human conversations and are generally considered to be one of the most important non-verbal cues. Gaze cues are used to manage turn-taking, coordinate joint attention, regulate intimacy, and signal cognitive effort. In particular, it is well established that gaze aversion is used in conversations to avoid prolonged periods of mutual gaze. Given the numerous functions of gaze cues, there has been extensive work on modelling these cues in social robots. Researchers have also tried to identify the impact of robot gaze on human participants. However, the influence of robot gaze behavior on human gaze behavior has been less explored. We conducted a within-subjects user study (N = 33) to verify if a robot’s gaze aversion influenced human gaze aversion behavior. Our results show that participants tend to avert their gaze more when the robot keeps staring at them as compared to when the robot exhibits well-timed gaze aversions. We interpret our findings in terms of intimacy regulation: humans try to compensate for the robot’s lack of gaze aversion.
  • Mishra, C., Verdonschot, R. G., Hagoort, P., & Skantze, G. (2023). Real-time emotion generation in human-robot dialogue using large language models. Frontiers in Robotics and AI, 10: 1271610. doi:10.3389/frobt.2023.1271610.

    Abstract

    Affective behaviors enable social robots to not only establish better connections with humans but also serve as a tool for the robots to express their internal states. It has been well established that emotions are important to signal understanding in Human-Robot Interaction (HRI). This work aims to harness the power of Large Language Models (LLM) and proposes an approach to control the affective behavior of robots. By interpreting emotion appraisal as an Emotion Recognition in Conversation (ERC) task, we used GPT-3.5 to predict the emotion of a robot’s turn in real-time, using the dialogue history of the ongoing conversation. The robot signaled the predicted emotion using facial expressions. The model was evaluated in a within-subjects user study (N = 47) where the model-driven emotion generation was compared against conditions where the robot did not display any emotions and where it displayed incongruent emotions. The participants interacted with the robot by playing a card sorting game that was specifically designed to evoke emotions. The results indicated that the emotions were reliably generated by the LLM and the participants were able to perceive the robot’s emotions. It was found that a robot expressing congruent model-driven facial emotion expressions was perceived to be significantly more human-like and emotionally appropriate, and elicited a more positive impression. Participants also scored significantly better in the card sorting game when the robot displayed congruent facial expressions. From a technical perspective, the study shows that LLMs can be used to control the affective behavior of robots reliably in real-time. Additionally, our results could be used in devising novel human-robot interactions, making robots more effective in roles where emotional interaction is important, such as therapy, companionship, or customer service.
  • Monaghan, P., Donnelly, S., Alcock, K., Bidgood, A., Cain, K., Durrant, S., Frost, R. L. A., Jago, L. S., Peter, M. S., Pine, J. M., Turnbull, H., & Rowland, C. F. (2023). Learning to generalise but not segment an artificial language at 17 months predicts children’s language skills 3 years later. Cognitive Psychology, 147: 101607. doi:10.1016/j.cogpsych.2023.101607.

    Abstract

    We investigated whether learning an artificial language at 17 months was predictive of children’s natural language vocabulary and grammar skills at 54 months. Children at 17 months listened to an artificial language containing non-adjacent dependencies, and were then tested on their learning to segment and to generalise the structure of the language. At 54 months, children were then tested on a range of standardised natural language tasks that assessed receptive and expressive vocabulary and grammar. A structural equation model demonstrated that learning the artificial language generalisation at 17 months predicted language abilities – a composite of vocabulary and grammar skills – at 54 months, whereas artificial language segmentation at 17 months did not predict language abilities at this age. Artificial language learning tasks – especially those that probe grammar learning – provide a valuable tool for uncovering the mechanisms driving children’s early language development.

    Additional information

    supplementary data
  • Morison, L., Meffert, E., Stampfer, M., Steiner-Wilke, I., Vollmer, B., Schulze, K., Briggs, T., Braden, R., Vogel, A. P., Thompson-Lake, D., Patel, C., Blair, E., Goel, H., Turner, S., Moog, U., Riess, A., Liegeois, F., Koolen, D. A., Amor, D. J., Kleefstra, T., Fisher, S. E., Zweier, C., & Morgan, A. T. (2023). In-depth characterisation of a cohort of individuals with missense and loss-of-function variants disrupting FOXP2. Journal of Medical Genetics, 60(6), 597-607. doi:10.1136/jmg-2022-108734.

    Abstract

    Background
    Heterozygous disruptions of FOXP2 were the first identified molecular cause for severe speech disorder, childhood apraxia of speech (CAS), yet few cases have been reported, limiting knowledge of the condition.

    Methods
    Here we phenotyped 29 individuals from 18 families with pathogenic FOXP2-only variants (13 loss-of-function, 5 missense variants; 14 males; aged 2 years to 62 years). Health and development (cognitive, motor, social domains) was examined, including speech and language outcomes with the first cross-linguistic analysis of English and German.

    Results
    Speech disorders were prevalent (24/26, 92%) and CAS was most common (23/26, 89%), with similar speech presentations across English and German. Speech was still impaired in adulthood and some speech sounds (e.g. ‘th’, ‘r’, ‘ch’, ‘j’) were never acquired. Language impairments (22/26, 85%) ranged from mild to severe. Comorbidities included feeding difficulties in infancy (10/27, 37%), fine (14/27, 52%) and gross (14/27, 52%) motor impairment, anxiety (6/28, 21%), depression (7/28, 25%), and sleep disturbance (11/15, 44%). Physical features were common (23/28, 82%) but with no consistent pattern. Cognition ranged from average to mildly impaired, and was incongruent with language ability; for example, seven participants with severe language disorder had average non-verbal cognition.

    Conclusions
    Although we identify increased prevalence of conditions like anxiety, depression and sleep disturbance, we confirm that the consequences of FOXP2 dysfunction remain relatively specific to speech disorder, as compared to other recently identified monogenic conditions associated with CAS. Thus, our findings reinforce that FOXP2 provides a valuable entry point for examining the neurobiological bases of speech disorder.
  • Moscoso del Prado Martín, F., Kostic, A., & Baayen, R. H. (2004). Putting the bits together: An information theoretical perspective on morphological processing. Cognition, 94(1), 1-18. doi:10.1016/j.cognition.2003.10.015.

    Abstract

    In this study we introduce an information-theoretical formulation of the emergence of type- and token-based effects in morphological processing. We describe a probabilistic measure of the informational complexity of a word, its information residual, which encompasses the combined influences of the amount of information contained by the target word and the amount of information carried by its nested morphological paradigms. By means of re-analyses of previously published data on Dutch words we show that the information residual outperforms the combination of traditional token- and type-based counts in predicting response latencies in visual lexical decision, and at the same time provides a parsimonious account of inflectional, derivational, and compounding processes.
  • Moscoso del Prado Martín, F., Ernestus, M., & Baayen, R. H. (2004). Do type and token effects reflect different mechanisms? Connectionist modeling of Dutch past-tense formation and final devoicing. Brain and Language, 90(1-3), 287-298. doi:10.1016/j.bandl.2003.12.002.

    Abstract

    In this paper, we show that both token and type-based effects in lexical processing can result from a single, token-based, system, and therefore, do not necessarily reflect different levels of processing. We report three Simple Recurrent Networks modeling Dutch past-tense formation. These networks show token-based frequency effects and type-based analogical effects closely matching the behavior of human participants when producing past-tense forms for both existing verbs and pseudo-verbs. The third network covers the full vocabulary of Dutch, without imposing predefined linguistic structure on the input or output words.
  • Moscoso del Prado Martín, F., Bertram, R., Häikiö, T., Schreuder, R., & Baayen, R. H. (2004). Morphological family size in a morphologically rich language: The case of Finnish compared to Dutch and Hebrew. Journal of Experimental Psychology: Learning, Memory and Cognition, 30(6), 1271-1278. doi:10.1037/0278-7393.30.6.1271.

    Abstract

    Finnish has a very productive morphology in which a stem can give rise to several thousand words. This study presents a visual lexical decision experiment addressing the processing consequences of the huge productivity of Finnish morphology. The authors observed that, in Finnish, words with larger morphological families elicited shorter response latencies. However, in contrast to Dutch and Hebrew, it is not the complete morphological family of a complex Finnish word that codetermines response latencies but only the subset of words directly derived from the complex word itself. Comparisons with parallel experiments using translation equivalents in Dutch and Hebrew showed substantial cross-language predictivity of family size between Finnish and Dutch but not between Finnish and Hebrew, reflecting the different ways in which the Hebrew and Finnish morphological systems contribute to the semantic organization of concepts in the mental lexicon.
  • Muhinyi, A., & Rowland, C. F. (2023). Contributions of abstract extratextual talk and interactive style to preschoolers’ vocabulary development. Journal of Child Language, 50(1), 198-213. doi:10.1017/S0305000921000696.

    Abstract

    Caregiver abstract talk during shared reading predicts preschool-age children’s vocabulary development. However, previous research has focused on level of abstraction with less consideration of the style of extratextual talk. Here, we investigated the relation between these two dimensions of extratextual talk, and their contributions to variance in children’s vocabulary skills. Caregiver level of abstraction was associated with an interactive reading style. Controlling for socioeconomic status and child age, high interactivity predicted children’s concurrent vocabulary skills whereas abstraction did not. Controlling for earlier vocabulary skills, neither dimension of the extratextual talk predicted later vocabulary. Theoretical and practical relevance are discussed.
  • Narasimhan, B., Sproat, R., & Kiraz, G. (2004). Schwa-deletion in Hindi text-to-speech synthesis. International Journal of Speech Technology, 7(4), 319-333. doi:10.1023/B:IJST.0000037075.71599.62.

    Abstract

    We describe the phenomenon of schwa-deletion in Hindi and how it is handled in the pronunciation component of a multilingual concatenative text-to-speech system. Each of the consonants in written Hindi is associated with an “inherent” schwa vowel which is not represented in the orthography. For instance, the Hindi word pronounced as [namak] (‘salt’) is represented in the orthography using the consonantal characters for [n], [m], and [k]. Two main factors complicate the issue of schwa pronunciation in Hindi. First, not every schwa following a consonant is pronounced within the word. Second, in multimorphemic words, the presence of a morpheme boundary can block schwa deletion where it might otherwise occur. We propose a model for schwa-deletion which combines a general-purpose schwa-deletion rule proposed in the linguistics literature (Ohala, 1983) with additional morphological analysis necessitated by the high frequency of compounds in our database. The system is implemented in the framework of finite-state transducer technology.
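
    As a rough illustration of the kind of context-sensitive deletion at issue, the sketch below applies the commonly cited VC _ CV core of Ohala's (1983) rule to a toy one-character transcription with '@' standing for schwa; the full rule has further conditions, and the system described above additionally uses morphological analysis and finite-state transducers, which are omitted here.

    import re

    # Delete a schwa ('@') preceded by vowel+consonant and followed by
    # consonant+vowel, i.e. the VC _ CV context (after Ohala, 1983).
    VOWELS = "aeiou@"
    CONSONANTS = "bcdfghjklmnpqrstvwxyz"

    def delete_schwas(phonemes: str) -> str:
        pattern = re.compile(f"(?<=[{VOWELS}][{CONSONANTS}])@(?=[{CONSONANTS}][{VOWELS}])")
        return pattern.sub("", phonemes)

    print(delete_schwas("nam@kiin"))  # -> "namkiin", cf. [namak] 'salt' plus a suffix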
  • Narasimhan, B., Bowerman, M., Brown, P., Eisenbeiss, S., & Slobin, D. I. (2004). "Putting things in places": Effekte linguistischer Typologie auf die Sprachentwicklung. In G. Plehn (Ed.), Jahrbuch der Max-Planck Gesellschaft (pp. 659-663). Göttingen: Vandenhoeck & Ruprecht.

  • Neijt, A., Schreuder, R., & Baayen, R. H. (2004). Seven years later: The effect of spelling on interpretation. In L. Cornips, & J. Doetjes (Eds.), Linguistics in the Netherlands 2004 (pp. 134-145). Amsterdam: Benjamins.
  • Newbury, D. F., Cleak, J. D., Banfield, E., Marlow, A. J., Fisher, S. E., Monaco, A. P., Stott, C. M., Merricks, M. J., Goodyer, I. M., Slonims, V., Baird, G., Bolton, P., Everitt, A., Hennessy, E., Main, M., Helms, P., Kindley, A. D., Hodson, A., Watson, J., O’Hare, A., Cohen, W., Cowie, H., Steel, J., MacLean, A., Seckl, J., Bishop, D. V. M., Simkin, Z., Conti-Ramsden, G., & Pickles, A. (2004). Highly significant linkage to the SLI1 locus in an expanded sample of individuals affected by specific language impairment. American Journal of Human Genetics, 74(6), 1225-1238. doi:10.1086/421529.

    Abstract

    Specific language impairment (SLI) is defined as an unexplained failure to acquire normal language skills despite adequate intelligence and opportunity. We have reported elsewhere a full-genome scan in 98 nuclear families affected by this disorder, with the use of three quantitative traits of language ability (the expressive and receptive tests of the Clinical Evaluation of Language Fundamentals and a test of nonsense word repetition). This screen implicated two quantitative trait loci, one on chromosome 16q (SLI1) and a second on chromosome 19q (SLI2). However, a second independent genome screen performed by another group, with the use of parametric linkage analyses in extended pedigrees, found little evidence for the involvement of either of these regions in SLI. To investigate these loci further, we have collected a second sample, consisting of 86 families (367 individuals, 174 independent sib pairs), all with probands whose language skills are ⩾1.5 SD below the mean for their age. Haseman-Elston linkage analysis resulted in a maximum LOD score (MLS) of 2.84 on chromosome 16 and an MLS of 2.31 on chromosome 19, both of which represent significant linkage at the 2% level. Amalgamation of the wave 2 sample with the cohort used for the genome screen generated a total of 184 families (840 individuals, 393 independent sib pairs). Analysis of linkage within this pooled group strengthened the evidence for linkage at SLI1 and yielded a highly significant LOD score (MLS = 7.46, interval empirical P<.0004). Furthermore, linkage at the same locus was also demonstrated to three reading-related measures (basic reading [MLS = 1.49], spelling [MLS = 2.67], and reading comprehension [MLS = 1.99] subtests of the Wechsler Objectives Reading Dimensions).
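
    For orientation, the LOD ("logarithm of the odds") statistic reported above compares, in its textbook form, the likelihood of the marker data under linkage at recombination fraction θ with the likelihood under free recombination (θ = 0.5); the MLS is its maximum over θ. This is the generic definition only, not the Haseman-Elston sib-pair regression machinery actually used in the study:

        \mathrm{LOD}(\theta) \;=\; \log_{10}\frac{L(\mathrm{data}\mid\theta)}{L(\mathrm{data}\mid\theta=0.5)},
        \qquad
        \mathrm{MLS} \;=\; \max_{\theta}\,\mathrm{LOD}(\theta)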
  • Noordman, L. G., & Vonk, W. (1998). Discourse comprehension. In A. D. Friederici (Ed.), Language comprehension: a biological perspective (pp. 229-262). Berlin: Springer.

    Abstract

    The human language processor is conceived as a system that consists of several interrelated subsystems. Each subsystem performs a specific task in the complex process of language comprehension and production. A subsystem receives a particular input, performs certain specific operations on this input and yields a particular output. The subsystems can be characterized in terms of the transformations that relate the input representations to the output representations. An important issue in describing the language processing system is to identify the subsystems and to specify the relations between the subsystems. These relations can be conceived in two different ways. In one conception the subsystems are autonomous. They are related to each other only by the input-output channels. The operations in one subsystem are not affected by another system. The subsystems are modular, that is they are independent. In the other conception, the different subsystems influence each other. A subsystem affects the processes in another subsystem. In this conception there is an interaction between the subsystems.
  • Noordman, L. G. M., & Vonk, W. (1998). Memory-based processing in understanding causal information. Discourse Processes, 191-212. doi:10.1080/01638539809545044.

    Abstract

    The reading process depends both on the text and on the reader. When we read a text, propositions in the current input are matched to propositions in the memory representation of the previous discourse but also to knowledge structures in long‐term memory. Therefore, memory‐based text processing refers both to the bottom‐up processing of the text and to the top‐down activation of the reader's knowledge. In this article, we focus on the role of cognitive structures in the reader's knowledge. We argue that causality is an important category in structuring human knowledge and that this property has consequences for text processing. Some research is discussed that illustrates that the more the information in the text reflects causal categories, the more easily the information is processed.
  • Norris, D., McQueen, J. M., & Cutler, A. (1995). Competition and segmentation in spoken word recognition. Journal of Experimental Psychology: Learning, Memory, and Cognition, 21, 1209-1228.

    Abstract

    Spoken utterances contain few reliable cues to word boundaries, but listeners nonetheless experience little difficulty identifying words in continuous speech. The authors present data and simulations that suggest that this ability is best accounted for by a model of spoken-word recognition combining competition between alternative lexical candidates and sensitivity to prosodic structure. In a word-spotting experiment, stress pattern effects emerged most clearly when there were many competing lexical candidates for part of the input. Thus, competition between simultaneously active word candidates can modulate the size of prosodic effects, which suggests that spoken-word recognition must be sensitive both to prosodic structure and to the effects of competition. A version of the Shortlist model (D. G. Norris, 1994b) incorporating the Metrical Segmentation Strategy (A. Cutler & D. Norris, 1988) accurately simulates the results using a lexicon of more than 25,000 words.
  • Nota, N., Trujillo, J. P., & Holler, J. (2023). Specific facial signals associate with categories of social actions conveyed through questions. PLoS One, 18(7): e0288104. doi:10.1371/journal.pone.0288104.

    Abstract

    The early recognition of fundamental social actions, like questions, is crucial for understanding the speaker’s intended message and planning a timely response in conversation. Questions themselves may express more than one social action category (e.g., an information request “What time is it?”, an invitation “Will you come to my party?” or a criticism “Are you crazy?”). Although human language use occurs predominantly in a multimodal context, prior research on social actions has mainly focused on the verbal modality. This study breaks new ground by investigating how conversational facial signals may map onto the expression of different types of social actions conveyed through questions. The distribution, timing, and temporal organization of facial signals across social actions was analysed in a rich corpus of naturalistic, dyadic face-to-face Dutch conversations. These social actions were: Information Requests, Understanding Checks, Self-Directed questions, Stance or Sentiment questions, Other-Initiated Repairs, Active Participation questions, questions for Structuring, Initiating or Maintaining Conversation, and Plans and Actions questions. This is the first study to reveal differences in distribution and timing of facial signals across different types of social actions. The findings raise the possibility that facial signals may facilitate social action recognition during language processing in multimodal face-to-face interaction.

    Additional information

    supporting information
  • Nota, N., Trujillo, J. P., Jacobs, V., & Holler, J. (2023). Facilitating question identification through natural intensity eyebrow movements in virtual avatars. Scientific Reports, 13: 21295. doi:10.1038/s41598-023-48586-4.

    Abstract

    In conversation, recognizing social actions (similar to ‘speech acts’) early is important to quickly understand the speaker’s intended message and to provide a fast response. Fast turns are typical for fundamental social actions like questions, since a long gap can indicate a dispreferred response. In multimodal face-to-face interaction, visual signals may contribute to this fast dynamic. The face is an important source of visual signalling, and previous research found that prevalent facial signals such as eyebrow movements facilitate the rapid recognition of questions. We aimed to investigate whether early eyebrow movements with natural movement intensities facilitate question identification, and whether specific intensities are more helpful in detecting questions. Participants were instructed to view videos of avatars where the presence of eyebrow movements (eyebrow frown or raise vs. no eyebrow movement) was manipulated, and to indicate whether the utterance in the video was a question or statement. Results showed higher accuracies for questions with eyebrow frowns, and faster response times for questions with eyebrow frowns and eyebrow raises. No additional effect was observed for the specific movement intensity. This suggests that eyebrow movements that are representative of naturalistic multimodal behaviour facilitate question recognition.
  • Nota, N., Trujillo, J. P., & Holler, J. (2023). Conversational eyebrow frowns facilitate question identification: An online study using virtual avatars. Cognitive Science, 47(12): e13392. doi:10.1111/cogs.13392.

    Abstract

    Conversation is a time-pressured environment. Recognizing a social action (the “speech act,” such as a question requesting information) early is crucial in conversation to quickly understand the intended message and plan a timely response. Fast turns between interlocutors are especially relevant for responses to questions since a long gap may be meaningful by itself. Human language is multimodal, involving speech as well as visual signals from the body, including the face. But little is known about how conversational facial signals contribute to the communication of social actions. Some of the most prominent facial signals in conversation are eyebrow movements. Previous studies found links between eyebrow movements and questions, suggesting that these facial signals could contribute to the rapid recognition of questions. Therefore, we aimed to investigate whether early eyebrow movements (eyebrow frown or raise vs. no eyebrow movement) facilitate question identification. Participants were instructed to view videos of avatars where the presence of eyebrow movements accompanying questions was manipulated. Their task was to indicate whether the utterance was a question or a statement as accurately and quickly as possible. Data were collected using the online testing platform Gorilla. Results showed higher accuracies and faster response times for questions with eyebrow frowns, suggesting a facilitative role of eyebrow frowns for question identification. This means that facial signals can critically contribute to the communication of social actions in conversation by signaling social action-specific visual information and providing visual cues to speakers’ intentions.

    Additional information

    link to preprint
  • Nozais, V., Forkel, S. J., Petit, L., Talozzi, L., Corbetta, M., Thiebaut de Schotten, M., & Joliot, M. (2023). Atlasing white matter and grey matter joint contributions to resting-state networks in the human brain. Communications Biology, 6: 726. doi:10.1038/s42003-023-05107-3.

    Abstract

    Over the past two decades, the study of resting-state functional magnetic resonance imaging has revealed that functional connectivity within and between networks is linked to cognitive states and pathologies. However, the white matter connections supporting this connectivity remain only partially described. We developed a method to jointly map the white and grey matter contributing to each resting-state network (RSN). Using the Human Connectome Project, we generated an atlas of 30 RSNs. The method also highlighted the overlap between networks, which revealed that most of the brain’s white matter (89%) is shared between multiple RSNs, with 16% shared by at least 7 RSNs. These overlaps, especially the existence of regions shared by numerous networks, suggest that white matter lesions in these areas might strongly impact the communication within networks. We provide an atlas and an open-source software to explore the joint contribution of white and grey matter to RSNs and facilitate the study of the impact of white matter damage to these networks. In a first application of the software with clinical data, we were able to link stroke patients and impacted RSNs, showing that their symptoms aligned well with the estimated functions of the networks.
  • Numssen, O., van der Burght, C. L., & Hartwigsen, G. (2023). Revisiting the focality of non-invasive brain stimulation - implications for studies of human cognition. Neuroscience and Biobehavioral Reviews, 149: 105154. doi:10.1016/j.neubiorev.2023.105154.

    Abstract

    Non-invasive brain stimulation techniques are popular tools to investigate brain function in health and disease. Although transcranial magnetic stimulation (TMS) is widely used in cognitive neuroscience research to probe causal structure-function relationships, studies often yield inconclusive results. To improve the effectiveness of TMS studies, we argue that the cognitive neuroscience community needs to revise the stimulation focality principle – the spatial resolution with which TMS can differentially stimulate cortical regions. In the motor domain, TMS can differentiate between cortical muscle representations of adjacent fingers. However, this high degree of spatial specificity cannot be obtained in all cortical regions due to the influences of cortical folding patterns on the TMS-induced electric field. The region-dependent focality of TMS should be assessed a priori to estimate the experimental feasibility. Post-hoc simulations allow modeling of the relationship between cortical stimulation exposure and behavioral modulation by integrating data across stimulation sites or subjects.

  • O'Brien, D. P., & Bowerman, M. (1998). Martin D. S. Braine (1926–1996): Obituary. American Psychologist, 53, 563. doi:10.1037/0003-066X.53.5.563.

    Abstract

    Memorializes Martin D. S. Braine, whose research on child language acquisition and on both child and adult thinking and reasoning had a major influence on modern cognitive psychology. Addressing meaning as well as position, Braine argued that children start acquiring language by learning narrow-scope positional formulas that map components of meaning to positions in the utterance. These proposals were critical in starting discussions of the possible universality of the pivot-grammar stage and of the role of syntax, semantics, and pragmatics in children's early grammar and were pivotal to the rise of approaches in which cognitive development in language acquisition is stressed.
  • O'Connor, L. (2004). Going getting tired: Associated motion through space and time in Lowland Chontal. In M. Achard, & S. Kemmer (Eds.), Language, culture and mind (pp. 181-199). Stanford: CSLI.
  • Ogdie, M. N., Fisher, S. E., Yang, M., Ishii, J., Francks, C., Loo, S. K., Cantor, R. M., McCracken, J. T., McGough, J. J., Smalley, S. L., & Nelson, S. F. (2004). Attention Deficit Hyperactivity Disorder: Fine mapping supports linkage to 5p13, 6q12, 16p13, and 17p11. American Journal of Human Genetics, 75(4), 661-668. doi:10.1086/424387.

    Abstract

    We completed fine mapping of nine positional candidate regions for attention-deficit/hyperactivity disorder (ADHD) in an extended population sample of 308 affected sibling pairs (ASPs), constituting the largest linkage sample of families with ADHD published to date. The candidate chromosomal regions were selected from all three published genomewide scans for ADHD, and fine mapping was done to comprehensively validate these positional candidate regions in our sample. Multipoint maximum LOD score (MLS) analysis yielded significant evidence of linkage on 6q12 (MLS 3.30; empiric P=.024) and 17p11 (MLS 3.63; empiric P=.015), as well as suggestive evidence on 5p13 (MLS 2.55; empiric P=.091). In conjunction with the previously reported significant linkage on the basis of fine mapping 16p13 in the same sample as this report, the analyses presented here indicate that four chromosomal regions—5p13, 6q12, 16p13, and 17p11—are likely to harbor susceptibility genes for ADHD. The refinement of linkage within each of these regions lays the foundation for subsequent investigations using association methods to detect risk genes of moderate effect size.
  • Oliveira‑Stahl, G., Farboud, S., Sterling, M. L., Heckman, J. J., Van Raalte, B., Lenferink, D., Van der Stam, A., Smeets, C. J. L. M., Fisher, S. E., & Englitz, B. (2023). High-precision spatial analysis of mouse courtship vocalization behavior reveals sex and strain differences. Scientific Reports, 13: 5219. doi:10.1038/s41598-023-31554-3.

    Abstract

    Mice display a wide repertoire of vocalizations that varies with sex, strain, and context. Especially during social interaction, including sexually motivated dyadic interaction, mice emit sequences of ultrasonic vocalizations (USVs) of high complexity. As animals of both sexes vocalize, a reliable attribution of USVs to their emitter is essential. The state-of-the-art in sound localization for USVs in 2D allows spatial localization at a resolution of multiple centimeters. However, animals interact at closer ranges, e.g. snout-to-snout. Hence, improved algorithms are required to reliably assign USVs. We present a novel algorithm, SLIM (Sound Localization via Intersecting Manifolds), that achieves a 2–3-fold improvement in accuracy (13.1–14.3 mm) using only 4 microphones and extends to many microphones and localization in 3D. This accuracy allows reliable assignment of 84.3% of all USVs in our dataset. We apply SLIM to courtship interactions between adult C57Bl/6J wildtype mice and those carrying a heterozygous Foxp2 variant (R552H). The improved spatial accuracy reveals that vocalization behavior is dependent on the spatial relation between the interacting mice. Female mice vocalized more in close snout-to-snout interaction while male mice vocalized more when the male snout was in close proximity to the female's ano-genital region. Further, we find that the acoustic properties of the ultrasonic vocalizations (duration, Wiener Entropy, and sound level) are dependent on the spatial relation between the interacting mice as well as on the genotype. In conclusion, the improved attribution of vocalizations to their emitters provides a foundation for better understanding social vocal behaviors.

    Additional information

    supplementary movies and figures
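
    As general background to the localization problem described in the abstract above: for sound speed c, a source at position x and a microphone pair (m_i, m_j) produce an expected arrival-time difference that confines x to one sheet of a hyperboloid, and multi-microphone localization amounts to intersecting (or least-squares fitting) such constraint surfaces. This is the generic time-difference-of-arrival geometry only; the specific intersecting-manifolds estimator of SLIM is not reproduced here.

        \Delta t_{ij}(\mathbf{x}) \;=\; \frac{\lVert\mathbf{x}-\mathbf{m}_i\rVert-\lVert\mathbf{x}-\mathbf{m}_j\rVert}{c},
        \qquad
        \hat{\mathbf{x}} \;=\; \arg\min_{\mathbf{x}}\sum_{i<j}\bigl(\Delta t_{ij}(\mathbf{x})-\widehat{\Delta t}_{ij}\bigr)^{2}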
  • Özer, D., Karadöller, D. Z., Özyürek, A., & Göksun, T. (2023). Gestures cued by demonstratives in speech guide listeners' visual attention during spatial language comprehension. Journal of Experimental Psychology: General, 152(9), 2623-2635. doi:10.1037/xge0001402.

    Abstract

    Gestures help speakers and listeners during communication and thinking, particularly for visual-spatial information. Speakers tend to use gestures to complement the accompanying spoken deictic constructions, such as demonstratives, when communicating spatial information (e.g., saying “The candle is here” and gesturing to the right side to express that the candle is on the speaker's right). Visual information conveyed by gestures enhances listeners’ comprehension. Whether and how listeners allocate overt visual attention to gestures in different speech contexts is mostly unknown. We asked if (a) listeners gazed at gestures more when they complement demonstratives in speech (“here”) compared to when they express redundant information to speech (e.g., “right”) and (b) gazing at gestures related to listeners’ information uptake from those gestures. We demonstrated that listeners fixated gestures more when they expressed complementary than redundant information in the accompanying speech. Moreover, overt visual attention to gestures did not predict listeners’ comprehension. These results suggest that the heightened communicative value of gestures as signaled by external cues, such as demonstratives, guides listeners’ visual attention to gestures. However, overt visual attention does not seem to be necessary to extract the cued information from the multimodal message.
  • Papoutsi*, C., Zimianiti*, E., Bosker, H. R., & Frost, R. L. A. (2023). Statistical learning at a virtual cocktail party. Psychonomic Bulletin & Review. Advance online publication. doi:10.3758/s13423-023-02384-1.

    Abstract

    * These two authors contributed equally to this study
    Statistical learning – the ability to extract distributional regularities from input – is suggested to be key to language acquisition. Yet, evidence for the human capacity for statistical learning comes mainly from studies conducted in carefully controlled settings without auditory distraction. While such conditions permit careful examination of learning, they do not reflect the naturalistic language learning experience, which is replete with auditory distraction – including competing talkers. Here, we examine how statistical language learning proceeds in a virtual cocktail party environment, where the to-be-learned input is presented alongside a competing speech stream with its own distributional regularities. During exposure, participants in the Dual Talker group concurrently heard two novel languages, one produced by a female talker and one by a male talker, with each talker virtually positioned at opposite sides of the listener (left/right) using binaural acoustic manipulations. Selective attention was manipulated by instructing participants to attend to only one of the two talkers. At test, participants were asked to distinguish words from part-words for both the attended and the unattended languages. Results indicated that participants’ accuracy was significantly higher for trials from the attended vs. unattended language. Further, the performance of this Dual Talker group was no different compared to a control group who heard only one language from a single talker (Single Talker group). We thus conclude that statistical learning is modulated by selective attention, being relatively robust against the additional cognitive load provided by competing speech, emphasizing its efficiency in naturalistic language learning situations.

    Additional information

    supplementary file
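
    As background to the word/part-word test described above: in statistical word-segmentation paradigms, items are standardly constructed so that syllable-to-syllable transitional probabilities are high within words and lower across word boundaries (and hence within part-words). The usual forward transitional probability, given here as a generic definition rather than the exact statistic computed in this study, is:

        TP(\sigma_1 \rightarrow \sigma_2) \;=\; \frac{\mathrm{frequency}(\sigma_1\sigma_2)}{\mathrm{frequency}(\sigma_1)}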
  • Parlatini, V., Itahashi, T., Lee, Y., Liu, S., Nguyen, T. T., Aoki, Y. Y., Forkel, S. J., Catani, M., Rubia, K., Zhou, J. H., Murphy, D. G., & Cortese, S. (2023). White matter alterations in Attention-Deficit/Hyperactivity Disorder (ADHD): a systematic review of 129 diffusion imaging studies with meta-analysis. Molecular Psychiatry, 28, 4098-4123. doi:10.1038/s41380-023-02173-1.

    Abstract

    Aberrant anatomical brain connections in attention-deficit/hyperactivity disorder (ADHD) are reported inconsistently across diffusion weighted imaging (DWI) studies. Based on a pre-registered protocol (Prospero: CRD42021259192), we searched PubMed, Ovid, and Web of Knowledge until 26/03/2022 to conduct a systematic review of DWI studies. We performed a quality assessment based on imaging acquisition, preprocessing, and analysis. Using signed differential mapping, we meta-analyzed a subset of the retrieved studies amenable to quantitative evidence synthesis, i.e., tract-based spatial statistics (TBSS) studies, in individuals of any age and, separately, in children, adults, and high-quality datasets. Finally, we conducted meta-regressions to test the effect of age, sex, and medication-naïvety. We included 129 studies (6739 ADHD participants and 6476 controls), of which 25 TBSS studies provided peak coordinates for case-control differences in fractional anisotropy (FA) (32 datasets) and 18 in mean diffusivity (MD) (23 datasets). The systematic review highlighted white matter alterations (especially reduced FA) in projection, commissural and association pathways of individuals with ADHD, which were associated with symptom severity and cognitive deficits. The meta-analysis showed a consistent reduced FA in the splenium and body of the corpus callosum, extending to the cingulum. Lower FA was related to older age, and case-control differences did not survive in the pediatric meta-analysis. About 68% of studies were of low quality, mainly due to acquisitions with non-isotropic voxels or lack of motion correction; and the sensitivity analysis in high-quality datasets yielded no significant results. Findings suggest prominent alterations in posterior interhemispheric connections subserving cognitive and motor functions affected in ADHD, although these might be influenced by non-optimal acquisition parameters/preprocessing. Absence of findings in children may be related to the late development of callosal fibers, which may enhance case-control differences in adulthood. Clinicodemographic and methodological differences were major barriers to consistency and comparability among studies, and should be addressed in future investigations.
  • Passmore, S., Barth, W., Greenhill, S. J., Quinn, K., Sheard, C., Argyriou, P., Birchall, J., Bowern, C., Calladine, J., Deb, A., Diederen, A., Metsäranta, N. P., Araujo, L. H., Schembri, R., Hickey-Hall, J., Honkola, T., Mitchell, A., Poole, L., Rácz, P. M., Roberts, S. G., Ross, R. M., Thomas-Colquhoun, E., Evans, N., & Jordan, F. M. (2023). Kinbank: A global database of kinship terminology. PLOS ONE, 18: e0283218. doi:10.1371/journal.pone.0283218.

    Abstract

    For a single species, human kinship organization is both remarkably diverse and strikingly organized. Kinship terminology is the structured vocabulary used to classify, refer to, and address relatives and family. Diversity in kinship terminology has been analyzed by anthropologists for over 150 years, although recurrent patterning across cultures remains incompletely explained. Despite the wealth of kinship data in the anthropological record, comparative studies of kinship terminology are hindered by data accessibility. Here we present Kinbank, a new database of 210,903 kinterms from a global sample of 1,229 spoken languages. Using open-access and transparent data provenance, Kinbank offers an extensible resource for kinship terminology, enabling researchers to explore the rich diversity of human family organization and to test longstanding hypotheses about the origins and drivers of recurrent patterns. We illustrate our contribution with two examples. We demonstrate strong gender bias in the phonological structure of parent terms across 1,022 languages, and we show that there is no evidence for a coevolutionary relationship between cross-cousin marriage and bifurcate-merging terminology in Bantu languages. Analysing kinship data is notoriously challenging; Kinbank aims to eliminate data accessibility issues from that challenge and provide a platform to build an interdisciplinary understanding of kinship.

    Additional information

    Supporting Information
  • Paulat, N. S., Storer, J. M., Moreno-Santillán, D. D., Osmanski, A. B., Sullivan, K. A. M., Grimshaw, J. R., Korstian, J., Halsey, M., Garcia, C. J., Crookshanks, C., Roberts, J., Smit, A. F. A., Hubley, R., Rosen, J., Teeling, E. C., Vernes, S. C., Myers, E., Pippel, M., Brown, T., Hiller, M., Zoonomia Consortium, Rojas, D., Dávalos, L. M., Lindblad-Toh, K., Karlsson, E. K., & Ray, D. A. (2023). Chiropterans are a hotspot for horizontal transfer of DNA transposons in Mammalia. Molecular Biology and Evolution, 40(5): msad092. doi:10.1093/molbev/msad092.

    Abstract

    Horizontal transfer of transposable elements (TEs) is an important mechanism contributing to genetic diversity and innovation. Bats (order Chiroptera) have repeatedly been shown to experience horizontal transfer of TEs at what appears to be a high rate compared with other mammals. We investigated the occurrence of horizontally transferred (HT) DNA transposons involving bats. We found over 200 putative HT elements within bats; 16 transposons were shared across distantly related mammalian clades, and 2 other elements were shared with a fish and two lizard species. Our results indicate that bats are a hotspot for horizontal transfer of DNA transposons. These events broadly coincide with the diversification of several bat clades, supporting the hypothesis that DNA transposon invasions have contributed to genetic diversification of bats.

    Additional information

    supplemental methods; supplemental tables
  • Pederson, E. (1995). Questionnaire on event realization. In D. Wilkins (Ed.), Extensions of space and beyond: manual for field elicitation for the 1995 field season (pp. 54-60). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.3004359.

    Abstract

    "Event realisation" refers to the normal final state of the affected entity of an activity described by a verb. For example, the sentence John killed the mosquito entails that the mosquito is afterwards dead – this is the full realisation of a killing event. By contrast, a sentence such as John hit the mosquito does not entail the mosquito’s death (even though we might assume this to be a likely result). In using a certain verb, which features of event realisation are entailed and which are just likely? This questionnaire supports cross-linguistic exploration of event realisation for a range of event types.
  • Pederson, E., Danziger, E., Wilkins, D. G., Levinson, S. C., Kita, S., & Senft, G. (1998). Semantic typology and spatial conceptualization. Language, 74(3), 557-589. doi:10.2307/417793.
  • Pender, R., Fearon, P., St Pourcain, B., Heron, J., & Mandy, W. (2023). Developmental trajectories of autistic social traits in the general population. Psychological Medicine, 53(3), 814-822. doi:10.1017/S0033291721002166.

    Abstract

    Background

    Autistic people show diverse trajectories of autistic traits over time, a phenomenon labelled ‘chronogeneity’. For example, some show a decrease in symptoms, whilst others experience an intensification of difficulties. Autism spectrum disorder (ASD) is a dimensional condition, representing one end of a trait continuum that extends throughout the population. To date, no studies have investigated chronogeneity across the full range of autistic traits. We investigated the nature and clinical significance of autism trait chronogeneity in a large, general population sample.
    Methods

    Autistic social/communication traits (ASTs) were measured in the Avon Longitudinal Study of Parents and Children using the Social and Communication Disorders Checklist (SCDC) at ages 7, 10, 13 and 16 (N = 9744). We used Growth Mixture Modelling (GMM) to identify groups defined by their AST trajectories. Measures of ASD diagnosis, sex, IQ and mental health (internalising and externalising) were used to investigate external validity of the derived trajectory groups.
    Results

    The selected GMM model identified four AST trajectory groups: (i) Persistent High (2.3% of sample), (ii) Persistent Low (83.5%), (iii) Increasing (7.3%) and (iv) Decreasing (6.9%) trajectories. The Increasing group, in which females were a slight majority (53.2%), showed dramatic increases in SCDC scores during adolescence, accompanied by escalating internalising and externalising difficulties. Two-thirds (63.6%) of the Decreasing group were male.
    Conclusions

    Clinicians should note that for some young people autism-trait-like social difficulties first emerge during adolescence accompanied by problems with mood, anxiety, conduct and attention. A converse, majority-male group shows decreasing social difficulties during adolescence.
  • Pereira Soares, S. M., Chaouch-Orozco, A., & González Alonso, J. (2023). Innovations and challenges in acquisition and processing methodologies for L3/Ln. In J. Cabrelli, A. Chaouch-Orozco, J. González Alonso, S. M. Pereira Soares, E. Puig-Mayenco, & J. Rothman (Eds.), The Cambridge handbook of third language acquisition (pp. 661-682). Cambridge: Cambridge University Press. doi:10.1017/9781108957823.026.

    Abstract

    The advent of psycholinguistic and neurolinguistic methodologies has provided new insights into theories of language acquisition. Sequential multilingualism is no exception, and some of the most recent work on the subject has incorporated a particular focus on language processing. This chapter surveys some of the work on the processing of lexical and morphosyntactic aspects of third or further languages, with different offline and online methodologies. We also discuss how, while increasingly sophisticated techniques and experimental designs have improved our understanding of third language acquisition and processing, simpler but clever designs can answer pressing questions in our theoretical debate. We provide examples of both sophistication and clever simplicity in experimental design, and argue that the field would benefit from incorporating a combination of both concepts into future work.
  • Petersson, K. M. (1998). Comments on a Monte Carlo approach to the analysis of functional neuroimaging data. NeuroImage, 8, 108-112.
  • Petersson, K. M., Forkstam, C., & Ingvar, M. (2004). Artificial syntactic violations activate Broca’s region. Cognitive Science, 28(3), 383-407. doi:10.1207/s15516709cog2803_4.

    Abstract

    In the present study, using event-related functional magnetic resonance imaging, we investigated a group of participants on a grammaticality classification task after they had been exposed to well-formed consonant strings generated from an artificial regular grammar. We used an implicit acquisition paradigm in which the participants were exposed to positive examples. The objective of this study was to investigate whether brain regions related to language processing overlap with the brain regions activated by the grammaticality classification task used in the present study. Recent meta-analyses of functional neuroimaging studies indicate that syntactic processing is related to the left inferior frontal gyrus (Brodmann's areas 44 and 45) or Broca's region. In the present study, we observed that artificial grammaticality violations activated Broca's region in all participants. This observation lends some support to the suggestions that artificial grammar learning represents a model for investigating aspects of language learning in infants.
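
    As a concrete illustration of the paradigm summarized above, the sketch below defines a small finite-state (regular) grammar over consonants, generates well-formed strings from it, and checks candidate strings for grammaticality. The states, transitions, and symbols are invented for illustration; they are not the grammar or stimuli used in the study.

        import random

        # Toy regular grammar: state -> [(emitted consonant, next state)];
        # a None transition marks an accepting exit.
        GRAMMAR = {
            0: [("M", 1), ("V", 2)],
            1: [("S", 1), ("X", 3)],
            2: [("T", 2), ("R", 3)],
            3: [("X", 4), ("M", 2)],
            4: [(None, None)],
        }

        def generate():
            """Random walk over the grammar; returns one grammatical string."""
            state, out = 0, []
            while True:
                symbol, nxt = random.choice(GRAMMAR[state])
                if symbol is None:
                    return "".join(out)
                out.append(symbol)
                state = nxt

        def is_grammatical(string):
            """Depth-first check that the string can be generated by the grammar."""
            def walk(state, i):
                for symbol, nxt in GRAMMAR[state]:
                    if symbol is None:
                        if i == len(string):
                            return True
                    elif i < len(string) and string[i] == symbol and walk(nxt, i + 1):
                        return True
                return False
            return walk(0, 0)

        print(generate())              # e.g. "VTRX", a well-formed string
        print(is_grammatical("MSXX"))  # True
        print(is_grammatical("MSXQ"))  # False: a grammaticality violation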
  • Petersson, K. M. (2004). The human brain, language, and implicit learning. Impuls, Tidsskrift for psykologi (Norwegian Journal of Psychology), 58(3), 62-72.
  • Petrovic, P., Petersson, K. M., Hansson, P., & Ingvar, M. (2004). Brainstem involvement in the initial response to pain. NeuroImage, 22, 995-1005. doi:10.1016/j.neuroimage.2004.01.046.

    Abstract

    The autonomic responses to acute pain exposure usually habituate rapidly while the subjective ratings of pain remain high for more extended periods of time. Thus, systems involved in the autonomic response to painful stimulation, for example the hypothalamus and the brainstem, would be expected to attenuate the response to pain during prolonged stimulation. This suggestion is in line with the hypothesis that the brainstem is specifically involved in the initial response to pain. To probe this hypothesis, we performed a positron emission tomography (PET) study where we scanned subjects during the first and second minute of a prolonged tonic painful cold stimulation (cold pressor test) and nonpainful cold stimulation. Galvanic skin response (GSR) was recorded during the PET scanning as an index of autonomic sympathetic response. In the main effect of pain, we observed increased activity in the thalamus bilaterally, in the contralateral insula and in the contralateral anterior cingulate cortex but no significant increases in activity in the primary or secondary somatosensory cortex. The autonomic response (GSR) decreased with stimulus duration. Concomitant with the autonomic response, increased activity was observed in brainstem and hypothalamus areas during the initial vs. the late stimulation. This effect was significantly stronger for the painful than for the cold stimulation. Activity in the brainstem showed pain-specific covariation with areas involved in pain processing, indicating an interaction between the brainstem and cortical pain networks. The findings indicate that areas in the brainstem are involved in the initial response to noxious stimulation, which is also characterized by an increased sympathetic response.
  • Petrovic, P., Carlsson, K., Petersson, K. M., Hansson, P., & Ingvar, M. (2004). Context-dependent deactivation of the amygdala during pain. Journal of Cognitive Neuroscience, 16, 1289-1301.

    Abstract

    The amygdala has been implicated in fundamental functions for the survival of the organism, such as fear and pain. In accord with this, several studies have shown increased amygdala activity during fear conditioning and the processing of fear-relevant material in human subjects. In contrast, functional neuroimaging studies of pain have shown a decreased amygdala activity. It has previously been proposed that the observed deactivations of the amygdala in these studies indicate a cognitive strategy to adapt to a distressful but in the experimental setting unavoidable painful event. In this positron emission tomography study, we show that a simple contextual manipulation, immediately preceding a painful stimulation, that increases the anticipated duration of the painful event leads to a decrease in amygdala activity and modulates the autonomic response during the noxious stimulation. On a behavioral level, 7 of the 10 subjects reported that they used coping strategies more intensely in this context. We suggest that the altered activity in the amygdala may be part of a mechanism to attenuate pain-related stress responses in a context that is perceived as being more aversive. The study also showed an increased activity in the rostral part of anterior cingulate cortex in the same context in which the amygdala activity decreased, further supporting the idea that this part of the cingulate cortex is involved in the modulation of emotional and pain networks.
  • Piai, V., & Eikelboom, D. (2023). Brain areas critical for picture naming: A systematic review and meta-analysis of lesion-symptom mapping studies. Neurobiology of Language, 4(2), 280-296. doi:10.1162/nol_a_00097.

    Abstract

    Lesion-symptom mapping (LSM) studies have revealed brain areas critical for naming, typically finding significant associations between damage to left temporal, inferior parietal, and inferior frontal regions and impoverished naming performance. However, specific subregions found in the available literature vary. Hence, the aim of this study was to perform a systematic review and meta-analysis of published lesion-based findings, obtained from studies with unique cohorts investigating brain areas critical for accuracy in naming in stroke patients at least 1 month post-onset. An anatomic likelihood estimation (ALE) meta-analysis of these LSM studies was performed. Ten papers entered the ALE meta-analysis, with similar lesion coverage over left temporal and left inferior frontal areas. This small number is a major limitation of the present study. Clusters were found in left anterior temporal lobe, posterior temporal lobe extending into inferior parietal areas, in line with the arcuate fasciculus, and in pre- and postcentral gyri and middle frontal gyrus. No clusters were found in left inferior frontal gyrus. These results were further substantiated by examining five naming studies that investigated performance beyond global accuracy, corroborating the ALE meta-analysis results. The present review and meta-analysis highlight the involvement of left temporal and inferior parietal cortices in naming, and of mid to posterior portions of the temporal lobe in particular in conceptual-lexical retrieval for speaking.

    Additional information

    data
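
    For orientation, the anatomic likelihood estimation approach mentioned above models each reported peak as a three-dimensional Gaussian probability distribution and, at every voxel v, combines the resulting modelled maps across the K included experiments as a union of probabilities. The textbook form of the statistic, rather than the exact implementation details of the meta-analysis above, is:

        \mathrm{ALE}(v) \;=\; 1-\prod_{k=1}^{K}\bigl(1-MA_k(v)\bigr)

    where MA_k(v) is the modelled probability that experiment k contributes a peak (here, a lesion-related finding) at voxel v.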
  • Pine, J. M., Lieven, E. V., & Rowland, C. F. (1998). Comparing different models of the development of the English verb category. Linguistics, 36(4), 807-830. doi:10.1515/ling.1998.36.4.807.

    Abstract

    In this study, data from the first six months of 12 children's multiword speech were used to test the validity of Valian's (1991) syntactic performance-limitation account and Tomasello's (1992) verb-island account of early multiword speech with particular reference to the development of the English verb category. The results provide evidence for appropriate use of verb morphology, auxiliary verb structures, pronoun case marking, and SVO word order from quite early in development. However, they also demonstrate a great deal of lexical specificity in the children's use of these systems, evidenced by a lack of overlap in the verbs to which different morphological markers were applied, a lack of overlap in the verbs with which different auxiliary verbs were used, a disproportionate use of the first person singular nominative pronoun I, and a lack of overlap in the lexical items that served as the subjects and direct objects of transitive verbs. These findings raise problems for both a syntactic performance-limitation account and a strong verb-island account of the data and suggest the need to develop a more general lexicalist account of early multiword speech that explains why some words come to function as "islands" of organization in the child's grammar and others do not.
  • Poletiek, F. H. (1998). De geest van de jury. Psychologie en Maatschappij, 4, 376-378.
  • Poletiek, F. H., & Stolker, C. J. J. M. (2004). Who decides the worth of an arm and a leg? Assessing the monetary value of nonmonetary damage. In E. Kurz-Milcke, & G. Gigerenzer (Eds.), Experts in science and society (pp. 201-213). New York: Kluwer Academic/Plenum Publishers.
  • Praamstra, P., Stegeman, D. F., Cools, A. R., Meyer, A. S., & Horstink, M. W. I. M. (1998). Evidence for lateral premotor and parietal overactivity in Parkinson's disease during sequential and bimanual movements: A PET study. Brain, 121, 769-772. doi:10.1093/brain/121.4.769.
  • Quaresima, A., Fitz, H., Duarte, R., Van den Broek, D., Hagoort, P., & Petersson, K. M. (2023). The Tripod neuron: A minimal structural reduction of the dendritic tree. The Journal of Physiology, 601(15), 3007-3437. doi:10.1113/JP283399.

    Abstract

    Neuron models with explicit dendritic dynamics have shed light on mechanisms for coincidence detection, pathway selection and temporal filtering. However, it is still unclear which morphological and physiological features are required to capture these phenomena. In this work, we introduce the Tripod neuron model and propose a minimal structural reduction of the dendritic tree that is able to reproduce these computations. The Tripod is a three-compartment model consisting of two segregated passive dendrites and a somatic compartment modelled as an adaptive, exponential integrate-and-fire neuron. It incorporates dendritic geometry, membrane physiology and receptor dynamics as measured in human pyramidal cells. We characterize the response of the Tripod to glutamatergic and GABAergic inputs and identify parameters that support supra-linear integration, coincidence-detection and pathway-specific gating through shunting inhibition. Following NMDA spikes, the Tripod neuron generates plateau potentials whose duration depends on the dendritic length and the strength of synaptic input. When fitted with distal compartments, the Tripod encodes previous activity into a dendritic depolarized state. This dendritic memory allows the neuron to perform temporal binding, and we show that it solves transition and sequence detection tasks on which a single-compartment model fails. Thus, the Tripod can account for dendritic computations previously explained only with more detailed neuron models or neural networks. Due to its simplicity, the Tripod neuron can be used efficiently in simulations of larger cortical circuits.
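
    The somatic compartment described above belongs to the adaptive exponential integrate-and-fire (AdEx) family; its canonical equations (Brette & Gerstner's formulation) are given here for orientation only, without the Tripod's dendritic coupling currents or its human-derived parameters:

        C\,\frac{dV}{dt} \;=\; -g_L\,(V-E_L) \;+\; g_L\,\Delta_T\,\exp\!\Bigl(\frac{V-V_T}{\Delta_T}\Bigr) \;-\; w \;+\; I(t)
        \tau_w\,\frac{dw}{dt} \;=\; a\,(V-E_L) \;-\; w
        \text{on spike } (V > V_\mathrm{peak}): \quad V \leftarrow V_r, \qquad w \leftarrow w + b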
  • Raghavan, R., Raviv, L., & Peeters, D. (2023). What's your point? Insights from virtual reality on the relation between intention and action in the production of pointing gestures. Cognition, 240: 105581. doi:10.1016/j.cognition.2023.105581.

    Abstract

    Human communication involves the process of translating intentions into communicative actions. But how exactly do our intentions surface in the visible communicative behavior we display? Here we focus on pointing gestures, a fundamental building block of everyday communication, and investigate whether and how different types of underlying intent modulate the kinematics of the pointing hand and the brain activity preceding the gestural movement. In a dynamic virtual reality environment, participants pointed at a referent to either share attention with their addressee, inform their addressee, or get their addressee to perform an action. Behaviorally, it was observed that these different underlying intentions modulated how long participants kept their arm and finger still, both prior to starting the movement and when keeping their pointing hand in apex position. In early planning stages, a neurophysiological distinction was observed between a gesture that is used to share attitudes and knowledge with another person versus a gesture that mainly uses that person as a means to perform an action. Together, these findings suggest that our intentions influence our actions from the earliest neurophysiological planning stages to the kinematic endpoint of the movement itself.
  • Raimondi, T., Di Panfilo, G., Pasquali, M., Zarantonello, M., Favaro, L., Savini, T., Gamba, M., & Ravignani, A. (2023). Isochrony and rhythmic interaction in ape duetting. Proceedings of the Royal Society B: Biological Sciences, 290: 20222244. doi:10.1098/rspb.2022.2244.

    Abstract

    How did rhythm originate in humans, and other species? One cross-cultural universal, frequently found in human music, is isochrony: when note onsets repeat regularly like the ticking of a clock. Another universal consists in synchrony (e.g. when individuals coordinate their notes so that they are sung at the same time). An approach to biomusicology focuses on similarities and differences across species, trying to build phylogenies of musical traits. Here we test for the presence of, and a link between, isochrony and synchrony in a non-human animal. We focus on the songs of one of the few singing primates, the lar gibbon (Hylobates lar), extracting temporal features from their solo songs and duets. We show that another ape exhibits one rhythmic feature at the core of human musicality: isochrony. We show that an enhanced call rate overall boosts isochrony, suggesting that respiratory physiological constraints play a role in determining the song's rhythmic structure. However, call rate alone cannot explain the flexible isochrony we witness. Isochrony is plastic and modulated depending on the context of emission: gibbons are more isochronous when duetting than singing solo. We present evidence for rhythmic interaction: we find statistical causality between one individual's note onsets and the co-singer's onsets, and a higher than chance degree of synchrony in the duets. Finally, we find a sex-specific trade-off between individual isochrony and synchrony. Gibbon's plasticity for isochrony and rhythmic overlap may suggest a potential shared selective pressure for interactive vocal displays in singing primates. This pressure may have convergently shaped human and gibbon musicality while acting on a common neural primate substrate. Beyond humans, singing primates are promising models to understand how music and, specifically, a sense of rhythm originated in the primate phylogeny.
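
    One common way of quantifying isochrony in this literature is via the ratio of adjacent inter-onset intervals, which clusters around 0.5 for perfectly isochronous sequences. The formula is given as general background and is not necessarily the exact statistic computed in the study above:

        r_k \;=\; \frac{I_k}{I_k+I_{k+1}}, \qquad I_k = t_{k+1}-t_k

    where t_k are note onsets; r_k values near 0.5 indicate isochrony, while systematic deviations indicate other small-integer rhythmic ratios.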
  • Randall, J., Van Hout, A., Weissenborn, J., & Baayen, R. H. (2004). Acquiring unaccusativity: A cross-linguistic look. In A. Alexiadou (Ed.), The unaccusativity puzzle (pp. 332-353). Oxford: Oxford University Press.
