Publications

  • Levinson, S. C., & Wilkins, D. P. (Eds.). (2006). Grammars of space: Explorations in cognitive diversity. Cambridge: Cambridge University Press.
  • Levinson, S. C. (2006). Matrilineal clans and kin terms on Rossel Island. Anthropological Linguistics, 48, 1-43.

    Abstract

    Yélî Dnye, the language of Rossel Island, Louisiade archipelago, Papua New Guinea, is a non-Austronesian isolate of considerable interest for the prehistory of the area. The kin term, clan, and kinship systems have some superficial similarities with surrounding Austronesian ones, but many underlying differences. The terminology, here properly described for the first time, is highly complex, and seems adapted to a dual descent system, with Crow-type skewing reflecting matrilineal descent, but a system of reciprocals also reflecting the "unity of the patriline." It may be analyzed in three mutually consistent ways: as a system of classificatory reciprocals, as a clan-based sociocentric system, and as collapses and skewings across a genealogical net. It makes an interesting contrast to the Trobriand system, and suggests that the alternative types of account offered by Edmund Leach and Floyd Lounsbury for the Trobriand system both have application to the Rossel system. The Rossel system has features (e.g., patrilineal biases, dual descent, collective [dyadic] kin terms, terms for alternating generations) that may be indicative of pre-Austronesian social systems of the area.
  • Levinson, S. C. (2006). Language in the 21st century. Language, 82, 1-2.
  • Levinson, S. C. (1996). Language and space. Annual Review of Anthropology, 25, 353-382. doi:10.1146/annurev.anthro.25.1.353.

    Abstract

    This review describes some recent, unexpected findings concerning variation in spatial language across cultures, and places them in the context of the general anthropology of space on the one hand, and theories of spatial cognition in the cognitive sciences on the other. There has been much concern with the symbolism of space in anthropological writings, but little on concepts of space in practical activities. This neglect of everyday spatial notions may be due to unwitting ethnocentrism, the assumption in Western thinking generally that notions of space are universally of a single kind. Recent work shows that systems of spatial reckoning and description can in fact be quite divergent across cultures, linguistic differences correlating with distinct cognitive tendencies. This unexpected cultural variation raises interesting questions concerning the relation between cultural and linguistic concepts and the biological foundations of cognition. It argues for more sophisticated models relating culture and cognition than we currently have available.
  • Levinson, S. C. (1998). Studying spatial conceptualization across cultures: Anthropology and cognitive science. Ethos, 26(1), 7-24. doi:10.1525/eth.1998.26.1.7.

    Abstract

    Philosophers, psychologists, and linguists have argued that spatial conception is pivotal to cognition in general, providing a general, egocentric, and universal framework for cognition as well as metaphors for conceptualizing many other domains. But in an aboriginal community in Northern Queensland, a system of cardinal directions informs not only language, but also memory for arbitrary spatial arrays and directions. This work suggests that fundamental cognitive parameters, like the system of coding spatial locations, can vary cross-culturally, in line with the language spoken by a community. This opens up the prospect of a fruitful dialogue between anthropology and the cognitive sciences on the complex interaction between cultural and universal factors in the constitution of mind.
  • Levinson, S. C. (2023). Gesture, spatial cognition and the evolution of language. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 378(1875): 20210481. doi:10.1098/rstb.2021.0481.

    Abstract

    Human communication displays a striking contrast between the diversity of languages and the universality of the principles underlying their use in conversation. Despite the importance of this interactional base, it is not obvious that it heavily imprints the structure of languages. However, a deep-time perspective suggests that early hominin communication was gestural, in line with all the other Hominidae. This gestural phase of early language development seems to have left its traces in the way in which spatial concepts, implemented in the hippocampus, provide organizing principles at the heart of grammar.
  • Levshina, N. (2023). Communicative efficiency: Language structure and use. Cambridge: Cambridge University Press.

    Abstract

    All living beings try to save effort, and humans are no exception. This groundbreaking book shows how we save time and energy during communication by unconsciously making efficient choices in grammar, lexicon and phonology. It presents a new theory of 'communicative efficiency', the idea that language is designed to be as efficient as possible, as a system of communication. The new framework accounts for the diverse manifestations of communicative efficiency across a typologically broad range of languages, using various corpus-based and statistical approaches to explain speakers' bias towards efficiency. The author's unique interdisciplinary expertise allows her to provide rich evidence from a broad range of language sciences. She integrates diverse insights from over a hundred years of research into this comprehensible new theory, which she presents step-by-step in clear and accessible language. It is essential reading for language scientists, cognitive scientists and anyone interested in language use and communication.
  • Levshina, N., Namboodiripad, S., Allassonnière-Tang, M., Kramer, M., Talamo, L., Verkerk, A., Wilmoth, S., Garrido Rodriguez, G., Gupton, T. M., Kidd, E., Liu, Z., Naccarato, C., Nordlinger, R., Panova, A., & Stoynova, N. (2023). Why we need a gradient approach to word order. Linguistics, 61(4), 825-883. doi:10.1515/ling-2021-0098.

    Abstract

    This article argues for a gradient approach to word order, which treats word order preferences, both within and across languages, as a continuous variable. Word order variability should be regarded as a basic assumption, rather than as something exceptional. Although this approach follows naturally from the emergentist usage-based view of language, we argue that it can be beneficial for all frameworks and linguistic domains, including language acquisition, processing, typology, language contact, language evolution and change, and formal approaches. Gradient approaches have been very fruitful in some domains, such as language processing, but their potential is not fully realized yet. This may be due to practical reasons. We discuss the most pressing methodological challenges in corpus-based and experimental research of word order and propose some practical solutions.
  • Lewis, A. G., Schoffelen, J.-M., Bastiaansen, M., & Schriefers, H. (2023). Is beta in agreement with the relatives? Using relative clause sentences to investigate MEG beta power dynamics during sentence comprehension. Psychophysiology, 60(10): e14332. doi:10.1111/psyp.14332.

    Abstract

    There remains some debate about whether beta power effects observed during sentence comprehension reflect ongoing syntactic unification operations (beta-syntax hypothesis), or instead reflect maintenance or updating of the sentence-level representation (beta-maintenance hypothesis). In this study, we used magnetoencephalography to investigate beta power neural dynamics while participants read relative clause sentences that were initially ambiguous between a subject- or an object-relative reading. An additional condition included a grammatical violation at the disambiguation point in the relative clause sentences. The beta-maintenance hypothesis predicts a decrease in beta power at the disambiguation point for unexpected (and less preferred) object-relative clause sentences and grammatical violations, as both signal a need to update the sentence-level representation. While the beta-syntax hypothesis also predicts a beta power decrease for grammatical violations due to a disruption of syntactic unification operations, it instead predicts an increase in beta power for the object-relative clause condition because syntactic unification at the point of disambiguation becomes more demanding. We observed decreased beta power for both the agreement violation and object-relative clause conditions in typical left hemisphere language regions, which provides compelling support for the beta-maintenance hypothesis. Mid-frontal theta power effects were also present for grammatical violations and object-relative clause sentences, suggesting that violations and unexpected sentence interpretations are registered as conflicts by the brain's domain-general error detection system.

  • Lind, J., Persson, J., Ingvar, M., Larsson, A., Cruts, M., Van Broeckhoven, C., Adolfsson, R., Bäckman, L., Nilsson, L.-G., Petersson, K. M., & Nyberg, L. (2006). Reduced functional brain activity response in cognitively intact apolipoprotein E ε4 carriers. Brain, 129(5), 1240-1248. doi:10.1093/brain/awl054.

    Abstract

    The apolipoprotein E ε4 (APOE ε4) is the main known genetic risk factor for Alzheimer's disease. Genetic assessments in combination with other diagnostic tools, such as neuroimaging, have the potential to facilitate early diagnosis. In this large-scale functional MRI (fMRI) study, we have contrasted 30 APOE ε4 carriers (age range: 49–74 years; 19 females), of which 10 were homozygous for the ε4 allele, and 30 non-carriers with regard to brain activity during a semantic categorization task. Test groups were closely matched for sex, age and education. Critically, both groups were cognitively intact and thus symptom-free of Alzheimer's disease. APOE ε4 carriers showed reduced task-related responses in the left inferior parietal cortex, and bilaterally in the anterior cingulate region. A dose-related response was observed in the parietal area such that diminution was most pronounced in homozygous compared with heterozygous carriers. In addition, contrasts of processing novel versus familiar items revealed an abnormal response in the right hippocampus in the APOE ε4 group, mainly expressed as diminished sensitivity to the relative novelty of stimuli. Collectively, these findings indicate that genetic risk translates into reduced functional brain activity, in regions pertinent to Alzheimer's disease, well before alterations can be detected at the behavioural level.
  • Lingwood, J., Lampropoulou, S., De Bezena, C., Billington, J., & Rowland, C. F. (2023). Children’s engagement and caregivers’ use of language-boosting strategies during shared book reading: A mixed methods approach. Journal of Child Language, 50(6), 1436-1458. doi:10.1017/S0305000922000290.

    Abstract

    For shared book reading to be effective for language development, the adult and child need to be highly engaged. The current paper adopted a mixed-methods approach to investigate caregiver’s language-boosting behaviours and children’s engagement during shared book reading. The results revealed there were more instances of joint attention and caregiver’s use of prompts during moments of higher engagement. However, instances of most language-boosting behaviours were similar across episodes of higher and lower engagement. Qualitative analysis assessing the link between children’s engagement and caregiver’s use of speech acts revealed that speech acts do seem to contribute to high engagement, in combination with other aspects of the interaction.
  • Liszkowski, U., Carpenter, M., Striano, T., & Tomasello, M. (2006). Twelve- and 18-month-olds point to provide information for others. Journal of Cognition and Development, 7, 173-187. doi:10.1207/s15327647jcd0702_2.

    Abstract

    Classically, infants are thought to point for 2 main reasons: (a) They point imperatively when they want an adult to do something for them (e.g., give them something; “Juice!”), and (b) they point declaratively when they want an adult to share attention with them to some interesting event or object (“Look!”). Here we demonstrate the existence of another motive for infants' early pointing gestures: to inform another person of the location of an object that person is searching for. This informative motive for pointing suggests that from very early in ontogeny humans conceive of others as intentional agents with informational states and they have the motivation to provide such information communicatively.
  • Lloyd, S. E., Pearce, S. H. S., Fisher, S. E., Steinmeyer, K., Schwappach, B., Scheinman, S. J., Harding, B., Bolino, A., Devoto, M., Goodyer, P., Rigden, S. P. A., Wrong, O., Jentsch, T. J., Craig, I. W., & Thakker, R. V. (1996). A common molecular basis for three inherited kidney stone diseases [Letter to Nature]. Nature, 379, 445-449. doi:10.1038/379445a0.

    Abstract

    Kidney stones (nephrolithiasis), which affect 12% of males and 5% of females in the western world, are familial in 45% of patients and are most commonly associated with hypercalciuria. Three disorders of hypercalciuric nephrolithiasis (Dent's disease, X-linked recessive nephrolithiasis (XRN), and X-linked recessive hypophosphataemic rickets (XLRH)) have been mapped to Xp11.22 (refs 5-7). A microdeletion in one Dent's disease kindred allowed the identification of a candidate gene, CLCN5 (refs 8,9) which encodes a putative renal chloride channel. Here we report the investigation of 11 kindreds with these renal tubular disorders for CLCN5 abnormalities; this identified three nonsense, four missense and two donor splice site mutations, together with one intragenic deletion and one microdeletion encompassing the entire gene. Heterologous expression of wild-type CLCN5 in Xenopus oocytes yielded outwardly rectifying chloride currents, which were either abolished or markedly reduced by the mutations. The common aetiology for Dent's disease, XRN and XLRH indicates that CLCN5 may be involved in other renal tubular disorders associated with kidney stones.
  • Lumaca, M., Bonetti, L., Brattico, E., Baggio, G., Ravignani, A., & Vuust, P. (2023). High-fidelity transmission of auditory symbolic material is associated with reduced right–left neuroanatomical asymmetry between primary auditory regions. Cerebral Cortex, 33(11), 6902-6919. doi:10.1093/cercor/bhad009.

    Abstract

    The intergenerational stability of auditory symbolic systems, such as music, is thought to rely on brain processes that allow the faithful transmission of complex sounds. Little is known about the functional and structural aspects of the human brain which support this ability, with a few studies pointing to the bilateral organization of auditory networks as a putative neural substrate. Here, we further tested this hypothesis by examining the role of left–right neuroanatomical asymmetries between auditory cortices. We collected neuroanatomical images from a large sample of participants (nonmusicians) and analyzed them with Freesurfer’s surface-based morphometry method. Weeks after scanning, the same individuals participated in a laboratory experiment that simulated music transmission: the signaling games. We found that high accuracy in the intergenerational transmission of an artificial tone system was associated with reduced rightward asymmetry of cortical thickness in Heschl’s sulcus. Our study suggests that the high-fidelity copying of melodic material may rely on the extent to which computational neuronal resources are distributed across hemispheres. Our data further support the role of interhemispheric brain organization in the cultural transmission and evolution of auditory symbolic systems.
  • Lundstrom, B. N., Petersson, K. M., Andersson, J., Johansson, M., Fransson, P., & Ingvar, M. (2003). Isolating the retrieval of imagined pictures during episodic memory: Activation of the left precuneus and left prefrontal cortex. Neuroimage, 20, 1934-1943. doi:10.1016/j.neuroimage.2003.07.017.

    Abstract

    The posterior medial parietal cortex and the left prefrontal cortex have both been implicated in the recollection of past episodes. In order to clarify their functional significance, we performed this functional magnetic resonance imaging study, which employed event-related source memory and item recognition retrieval of words paired with corresponding imagined or viewed pictures. Our results suggest that episodic source memory is related to a functional network including the posterior precuneus and the left lateral prefrontal cortex. This network is activated during explicit retrieval of imagined pictures and results from the retrieval of item-context associations. This suggests that previously imagined pictures provide a context with which encoded words can be more strongly associated.
  • Lutte, G., Sarti, S., & Kempen, G. (1971). Le moi idéal de l'adolescent: Recherche génétique, différentielle et culturelle dans sept pays d'Europe. Bruxelles: Dessart.
  • Mace, R., Jordan, F., & Holden, C. (2003). Testing evolutionary hypotheses about human biological adaptation using cross-cultural comparison. Comparative Biochemistry and Physiology A-Molecular & Integrative Physiology, 136(1), 85-94. doi:10.1016/S1095-6433(03)00019-9.

    Abstract

    Physiological data from a range of human populations living in different environments can provide valuable information for testing evolutionary hypotheses about human adaptation. By taking into account the effects of population history, phylogenetic comparative methods can help us determine whether variation results from selection due to particular environmental variables. These selective forces could even be due to cultural traits, which means that gene-culture co-evolution may be occurring. In this paper, we outline two examples of the use of these approaches to test adaptive hypotheses that explain global variation in two physiological traits: the first is lactose digestion capacity in adults, and the second is population sex ratio at birth. We show that lower than average sex ratio at birth is associated with high fertility, and argue that global variation in sex ratio at birth has evolved as a response to the high physiological costs of producing boys in high fertility populations.
  • Magnuson, J. S., Tanenhaus, M. K., Aslin, R. N., & Dahan, D. (2003). The time course of spoken word learning and recognition: Studies with artificial lexicons. Journal of Experimental Psychology: General, 132(2), 202-227. doi:10.1037/0096-3445.132.2.202.

    Abstract

    The time course of spoken word recognition depends largely on the frequencies of a word and its competitors, or neighbors (similar-sounding words). However, variability in natural lexicons makes systematic analysis of frequency and neighbor similarity difficult. Artificial lexicons were used to achieve precise control over word frequency and phonological similarity. Eye tracking provided time course measures of lexical activation and competition (during spoken instructions to perform visually guided tasks) both during and after word learning, as a function of word frequency, neighbor type, and neighbor frequency. Apparent shifts from holistic to incremental competitor effects were observed in adults and neural network simulations, suggesting such shifts reflect general properties of learning rather than changes in the nature of lexical representations.
  • Magyari, L. (2003). Mit ne gondoljunk az állatokról? [What not to think about animals?] [Review of the book Wild Minds: What animals really think by M. Hauser]. Magyar Pszichológiai Szemle (Hungarian Psychological Review), 58(3), 417-424. doi:10.1556/MPSzle.58.2003.3.5.
  • Majid, A., Enfield, N. J., & Van Staden, M. (Eds.). (2006). Parts of the body: Cross-linguistic categorisation [Special Issue]. Language Sciences, 28(2-3).
  • Majid, A. (2003). Towards behavioural genomics. The Psychologist, 16(6), 298-298.
  • Majid, A., Sanford, A. J., & Pickering, M. J. (2006). Covariation and quantifier polarity: What determines causal attribution in vignettes? Cognition, 99(1), 35-51. doi:10.1016/j.cognition.2004.12.004.

    Abstract

    Tests of causal attribution often use verbal vignettes, with covariation information provided through statements quantified with natural language expressions. The effect of covariation information has typically been taken to show that set size information affects attribution. However, recent research shows that quantifiers provide information about discourse focus as well as covariation information. In the attribution literature, quantifiers are used to depict covariation, but they confound quantity and focus. In four experiments, we show that focus explains all (Experiment 1) or some (Experiments 2, 3 and 4) of the impact of covariation information on the attributions made, confirming the importance of the confound. Attribution experiments using vignettes that present covariation information with natural language quantifiers may overestimate the impact of set size information, and ignore the impact of quantifier-induced focus.
  • Majid, A. (2006). Body part categorisation in Punjabi. Language Sciences, 28(2-3), 241-261. doi:10.1016/j.langsci.2005.11.012.

    Abstract

    A key question in categorisation is to what extent people categorise in the same way, or differently. This paper examines categorisation of the body in Punjabi, an Indo-European language spoken in Pakistan and India. First, an inventory of body part terms is presented, illustrating how Punjabi speakers segment and categorise the body. There are some noteworthy terms in the inventory, which illustrate categories in Punjabi that are unusual when compared to other languages presented in this volume. Second, Punjabi speakers’ conceptualisation of the relationship between body parts is explored. While some body part terms are viewed as being partonomically related, others are viewed as being in a locative relationship. It is suggested that there may be key ways in which languages differ in both the categorisation of the body into parts, and in how these parts are related to one another.
  • Majid, A. (2003). Into the deep. The Psychologist, 16(6), 300-300.
  • Mak, M., Faber, M., & Willems, R. M. (2023). Different kinds of simulation during literary reading: Insights from a combined fMRI and eye-tracking study. Cortex, 162, 115-135. doi:10.1016/j.cortex.2023.01.014.

    Abstract

    Mental simulation is an important aspect of narrative reading. In a previous study, we found that gaze durations are differentially impacted by different kinds of mental simulation. Motor simulation, perceptual simulation, and mentalizing as elicited by literary short stories influenced eye movements in distinguishable ways (Mak & Willems, 2019). In the current study, we investigated the existence of a common neural locus for these different kinds of simulation. We additionally investigated whether individual differences during reading, as indexed by the eye movements, are reflected in domain-specific activations in the brain. We found a variety of brain areas activated by simulation-eliciting content, both modality-specific brain areas and a general simulation area. Individual variation in percent signal change in activated areas was related to measures of story appreciation as well as personal characteristics (i.e., transportability, perspective taking). Taken together, these findings suggest that mental simulation is supported by both domain-specific processes grounded in previous experiences, and by the neural mechanisms that underlie higher-order language processing (e.g., situation model building, event indexing, integration).

  • Mak, W. M., Vonk, W., & Schriefers, H. (2006). Animacy in processing relative clauses: The hikers that rocks crush. Journal of Memory and Language, 54(4), 466-490. doi:10.1016/j.jml.2006.01.001.

    Abstract

    For several languages, a preference for subject relative clauses over object relative clauses has been reported. However, Mak, Vonk, and Schriefers (2002) showed that there is no such preference for relative clauses with an animate subject and an inanimate object. A Dutch object relative clause as …de rots, die de wandelaars beklommen hebben… (‘the rock, that the hikers climbed’) did not show longer reading times than its subject relative clause counterpart …de wandelaars, die de rots beklommen hebben… (‘the hikers, who climbed the rock’). In the present paper, we explore the factors that might contribute to this modulation of the usual preference for subject relative clauses. Experiment 1 shows that the animacy of the antecedent per se is not the decisive factor. On the contrary, in relative clauses with an inanimate antecedent and an inanimate relative-clause-internal noun phrase, the usual preference for subject relative clauses is found. In Experiments 2 and 3, subject and object relative clauses were contrasted in which either the subject or the object was inanimate. The results are interpreted in a framework in which the choice for an analysis of the relative clause is based on the interplay of animacy with topichood and verb semantics. This framework accounts for the commonly reported preference for subject relative clauses over object relative clauses as well as for the pattern of data found in the present experiments.
  • Mamus, E., Speed, L. J., Rissman, L., Majid, A., & Özyürek, A. (2023). Lack of visual experience affects multimodal language production: Evidence from congenitally blind and sighted people. Cognitive Science, 47(1): e13228. doi:10.1111/cogs.13228.

    Abstract

    The human experience is shaped by information from different perceptual channels, but it is still debated whether and how differential experience influences language use. To address this, we compared congenitally blind, blindfolded, and sighted people's descriptions of the same motion events experienced auditorily by all participants (i.e., via sound alone) and conveyed in speech and gesture. Comparison of blind and sighted participants to blindfolded participants helped us disentangle the effects of a lifetime experience of being blind versus the task-specific effects of experiencing a motion event by sound alone. Compared to sighted people, blind people's speech focused more on path and less on manner of motion, and encoded paths in a more segmented fashion using more landmarks and path verbs. Gestures followed the speech, such that blind people pointed to landmarks more and depicted manner less than sighted people. This suggests that visual experience affects how people express spatial events in the multimodal language and that blindness may enhance sensitivity to paths of motion due to changes in event construal. These findings have implications for the claims that language processes are deeply rooted in our sensory experiences.
  • Mamus, E., Speed, L., Özyürek, A., & Majid, A. (2023). The effect of input sensory modality on the multimodal encoding of motion events. Language, Cognition and Neuroscience, 38(5), 711-723. doi:10.1080/23273798.2022.2141282.

    Abstract

    Each sensory modality has different affordances: vision has higher spatial acuity than audition, whereas audition has better temporal acuity. This may have consequences for the encoding of events and its subsequent multimodal language production—an issue that has received relatively little attention to date. In this study, we compared motion events presented as audio-only, visual-only, or multimodal (visual + audio) input and measured speech and co-speech gesture depicting path and manner of motion in Turkish. Input modality affected speech production. Speakers with audio-only input produced more path descriptions and fewer manner descriptions in speech compared to speakers who received visual input. In contrast, the type and frequency of gestures did not change across conditions. Path-only gestures dominated throughout. Our results suggest that while speech is more susceptible to auditory vs. visual input in encoding aspects of motion events, gesture is less sensitive to such differences.

  • Mangione-Smith, R., Elliott, M. N., Stivers, T., McDonald, L. L., & Heritage, J. (2006). Ruling out the need for antibiotics: Are we sending the right message? Archives of Pediatrics & Adolescent Medicine, 160(9), 945-952.
  • Mangione-Smith, R., Stivers, T., Elliott, M. N., McDonald, L., & Heritage, J. (2003). Online commentary during the physical examination: A communication tool for avoiding inappropriate antibiotic prescribing? Social Science and Medicine, 56(2), 313-320.
  • Manhardt, F., Brouwer, S., Van Wijk, E., & Özyürek, A. (2023). Word order preference in sign influences speech in hearing bimodal bilinguals but not vice versa: Evidence from behavior and eye-gaze. Bilingualism: Language and Cognition, 26(1), 48-61. doi:10.1017/S1366728922000311.

    Abstract

    We investigated cross-modal influences between speech and sign in hearing bimodal bilinguals, proficient in a spoken and a sign language, and its consequences on visual attention during message preparation using eye-tracking. We focused on spatial expressions in which sign languages, unlike spoken languages, have a modality-driven preference to mention grounds (big objects) prior to figures (smaller objects). We compared hearing bimodal bilinguals’ spatial expressions and visual attention in Dutch and Dutch Sign Language (N = 18) to those of their hearing non-signing (N = 20) and deaf signing peers (N = 18). In speech, hearing bimodal bilinguals expressed more ground-first descriptions and fixated grounds more than hearing non-signers, showing influence from sign. In sign, they used as many ground-first descriptions as deaf signers and fixated grounds equally often, demonstrating no influence from speech. Cross-linguistic influence of word order preference and visual attention in hearing bimodal bilinguals appears to be one-directional, modulated by modality-driven differences.
  • Marcus, G. F., & Fisher, S. E. (2003). FOXP2 in focus: What can genes tell us about speech and language? Trends in Cognitive Sciences, 7, 257-262. doi:10.1016/S1364-6613(03)00104-9.

    Abstract

    The human capacity for acquiring speech and language must derive, at least in part, from the genome. In 2001, a study described the first case of a gene, FOXP2, which is thought to be implicated in our ability to acquire spoken language. In the present article, we discuss how this gene was discovered, what it might do, how it relates to other genes, and what it could tell us about the nature of speech and language development. We explain how FOXP2 could, without being specific to the brain or to our own species, still provide an invaluable entry-point into understanding the genetic cascades and neural pathways that contribute to our capacity for speech and language.
  • Marlow, A. J., Fisher, S. E., Francks, C., MacPhie, I. L., Cherny, S. S., Richardson, A. J., Talcott, J. B., Stein, J. F., Monaco, A. P., & Cardon, L. R. (2003). Use of multivariate linkage analysis for dissection of a complex cognitive trait. American Journal of Human Genetics, 72(3), 561-570. doi:10.1086/368201.

    Abstract

    Replication of linkage results for complex traits has been exceedingly difficult, owing in part to the inability to measure the precise underlying phenotype, small sample sizes, genetic heterogeneity, and statistical methods employed in analysis. Often, in any particular study, multiple correlated traits have been collected, yet these have been analyzed independently or, at most, in bivariate analyses. Theoretical arguments suggest that full multivariate analysis of all available traits should offer more power to detect linkage; however, this has not yet been evaluated on a genomewide scale. Here, we conduct multivariate genomewide analyses of quantitative-trait loci that influence reading- and language-related measures in families affected with developmental dyslexia. The results of these analyses are substantially clearer than those of previous univariate analyses of the same data set, helping to resolve a number of key issues. These outcomes highlight the relevance of multivariate analysis for complex disorders for dissection of linkage results in correlated traits. The approach employed here may aid positional cloning of susceptibility genes in a wide spectrum of complex traits.
  • Maskalenka, K., Alagöz, G., Krueger, F., Wright, J., Rostovskaya, M., Nakhuda, A., Bendall, A., Krueger, C., Walker, S., Scally, A., & Rugg-Gunn, P. J. (2023). NANOGP1, a tandem duplicate of NANOG, exhibits partial functional conservation in human naïve pluripotent stem cells. Development, 150(2): dev201155. doi:10.1242/dev.201155.

    Abstract

    Gene duplication events can drive evolution by providing genetic material for new gene functions, and they create opportunities for diverse developmental strategies to emerge between species. To study the contribution of duplicated genes to human early development, we examined the evolution and function of NANOGP1, a tandem duplicate of the transcription factor NANOG. We found that NANOGP1 and NANOG have overlapping but distinct expression profiles, with high NANOGP1 expression restricted to early epiblast cells and naïve-state pluripotent stem cells. Sequence analysis and epitope-tagging revealed that NANOGP1 is protein coding with an intact homeobox domain. The duplication that created NANOGP1 occurred earlier in primate evolution than previously thought and has been retained only in great apes, whereas Old World monkeys have disabled the gene in different ways, including homeodomain point mutations. NANOGP1 is a strong inducer of naïve pluripotency; however, unlike NANOG, it is not required to maintain the undifferentiated status of human naïve pluripotent cells. By retaining expression, sequence and partial functional conservation with its ancestral copy, NANOGP1 exemplifies how gene duplication and subfunctionalisation can contribute to transcription factor activity in human pluripotency and development.
  • Mazzini, S., Holler, J., & Drijvers, L. (2023). Studying naturalistic human communication using dual-EEG and audio-visual recordings. STAR Protocols, 4(3): 102370. doi:10.1016/j.xpro.2023.102370.

    Abstract

    We present a protocol to study naturalistic human communication using dual-EEG and audio-visual recordings. We describe preparatory steps for data collection including setup preparation, experiment design, and piloting. We then describe the data collection process in detail which consists of participant recruitment, experiment room preparation, and data collection. We also outline the kinds of research questions that can be addressed with the current protocol, including several analysis possibilities, from conversational to advanced time-frequency analyses.
    For complete details on the use and execution of this protocol, please refer to Drijvers and Holler (2022).
  • McConnell, K. (2023). Individual Differences in Holistic and Compositional Language Processing. Journal of Cognition, 6. doi:10.5334/joc.283.

    Abstract

    Individual differences in cognitive abilities are ubiquitous across the spectrum of proficient language users. Although speakers differ with regard to their memory capacity, ability for inhibiting distraction, and ability to shift between different processing levels, comprehension is generally successful. However, this does not mean it is identical across individuals; listeners and readers may rely on different processing strategies to exploit distributional information in the service of efficient understanding. In the following psycholinguistic reading experiment, we investigate potential sources of individual differences in the processing of co-occurring words. Participants read modifier-noun bigrams like absolute silence in a self-paced reading task. Backward transition probability (BTP) between the two lexemes was used to quantify the prominence of the bigram as a whole in comparison to the frequency of its parts. Of five individual difference measures (processing speed, verbal working memory, cognitive inhibition, global-local scope shifting, and personality), two proved to be significantly associated with the effect of BTP on reading times. Participants who could inhibit a distracting global environment in order to more efficiently retrieve a single part and those who preferred the local level in the shifting task showed greater effects of the co-occurrence probability of the parts. We conclude that some participants are more likely to retrieve bigrams via their parts and their co-occurrence statistics whereas others more readily retrieve the two words together as a single chunked unit.
  • McLean, B., Dunn, M., & Dingemanse, M. (2023). Two measures are better than one: Combining iconicity ratings and guessing experiments for a more nuanced picture of iconicity in the lexicon. Language and Cognition, 15(4), 719-739. doi:10.1017/langcog.2023.9.

    Abstract

    Iconicity in language is receiving increased attention from many fields, but our understanding of iconicity is only as good as the measures we use to quantify it. We collected iconicity measures for 304 Japanese words from English-speaking participants, using rating and guessing tasks. The words included ideophones (structurally marked depictive words) along with regular lexical items from similar semantic domains (e.g., fuwafuwa ‘fluffy’, yawarakai ‘soft’). The two measures correlated, speaking to their validity. However, ideophones received consistently higher iconicity ratings than other items, even when guessed at the same accuracies, suggesting the rating task is more sensitive to cues like structural markedness that frame words as iconic. These cues did not always guide participants to the meanings of ideophones in the guessing task, but they did make them more confident in their guesses, even when they were wrong. Consistently poor guessing results reflect the role different experiences play in shaping construals of iconicity. Using multiple measures in tandem allows us to explore the interplay between iconicity and these external factors. To facilitate this, we introduce a reproducible workflow for creating rating and guessing tasks from standardised wordlists, while also making improvements to the robustness, sensitivity and discriminability of previous approaches.
  • McQueen, J. M., Cutler, A., & Norris, D. (2006). Phonological abstraction in the mental lexicon. Cognitive Science, 30(6), 1113-1126. doi:10.1207/s15516709cog0000_79.

    Abstract

    A perceptual learning experiment provides evidence that the mental lexicon cannot consist solely of detailed acoustic traces of recognition episodes. In a training lexical decision phase, listeners heard an ambiguous [f–s] fricative sound, replacing either [f] or [s] in words. In a test phase, listeners then made lexical decisions to visual targets following auditory primes. Critical materials were minimal pairs that could be a word with either [f] or [s] (cf. English knife–nice), none of which had been heard in training. Listeners interpreted the minimal pair words differently in the second phase according to the training received in the first phase. Therefore, lexically mediated retuning of phoneme perception not only influences categorical decisions about fricatives (Norris, McQueen, & Cutler, 2003), but also benefits recognition of words outside the training set. The observed generalization across words suggests that this retuning occurs prelexically. Therefore, lexical processing involves sublexical phonological abstraction, not only accumulation of acoustic episodes.
  • McQueen, J. M., Norris, D., & Cutler, A. (2006). The dynamic nature of speech perception. Language and Speech, 49(1), 101-112.

    Abstract

    The speech perception system must be flexible in responding to the variability in speech sounds caused by differences among speakers and by language change over the lifespan of the listener. Indeed, listeners use lexical knowledge to retune perception of novel speech (Norris, McQueen, & Cutler, 2003). In that study, Dutch listeners made lexical decisions to spoken stimuli, including words with an ambiguous fricative (between [f] and [s]), in either [f]- or [s]-biased lexical contexts. In a subsequent categorization test, the former group of listeners identified more sounds on an [εf] - [εs] continuum as [f] than the latter group. In the present experiment, listeners received the same exposure and test stimuli, but did not make lexical decisions to the exposure items. Instead, they counted them. Categorization results were indistinguishable from those obtained earlier. These adjustments in fricative perception therefore do not depend on explicit judgments during exposure. This learning effect thus reflects automatic retuning of the interpretation of acoustic-phonetic information.
  • McQueen, J. M. (2003). The ghost of Christmas future: Didn't Scrooge learn to be good? Commentary on Magnuson, McMurray, Tanenhaus and Aslin (2003). Cognitive Science, 27(5), 795-799. doi:10.1207/s15516709cog2705_6.

    Abstract

    Magnuson, McMurray, Tanenhaus, and Aslin [Cogn. Sci. 27 (2003) 285] suggest that they have evidence of lexical feedback in speech perception, and that this evidence thus challenges the purely feedforward Merge model [Behav. Brain Sci. 23 (2000) 299]. This evidence is open to an alternative explanation, however, one which preserves the assumption in Merge that there is no lexical-prelexical feedback during on-line speech processing. This explanation invokes the distinction between perceptual processing that occurs in the short term, as an utterance is heard, and processing that occurs over the longer term, for perceptual learning.
  • McQueen, J. M., Norris, D., & Cutler, A. (2006). Are there really interactive processes in speech perception? Trends in Cognitive Sciences, 10(12), 533-533. doi:10.1016/j.tics.2006.10.004.
  • McQueen, J. M., Cutler, A., & Norris, D. (2003). Flow of information in the spoken word recognition system. Speech Communication, 41(1), 257-270. doi:10.1016/S0167-6393(02)00108-5.

    Abstract

    Spoken word recognition consists of two major component processes. First, at the prelexical stage, an abstract description of the utterance is generated from the information in the speech signal. Second, at the lexical stage, this description is used to activate all the words stored in the mental lexicon which match the input. These multiple candidate words then compete with each other. We review evidence which suggests that positive (match) and negative (mismatch) information of both a segmental and a suprasegmental nature is used to constrain this activation and competition process. We then ask whether, in addition to the necessary influence of the prelexical stage on the lexical stage, there is also feedback from the lexicon to the prelexical level. In two phonetic categorization experiments, Dutch listeners were asked to label both syllable-initial and syllable-final ambiguous fricatives (e.g., sounds ranging from [f] to [s]) in the word–nonword series maf–mas, and the nonword–word series jaf–jas. They tended to label the sounds in a lexically consistent manner (i.e., consistent with the word endpoints of the series). These lexical effects became smaller in listeners’ slower responses, even when the listeners were put under pressure to respond as fast as possible. Our results challenge models of spoken word recognition in which feedback modulates the prelexical analysis of the component sounds of a word whenever that word is heard.
  • McQueen, J. M., Jesse, A., & Mitterer, H. (2023). Lexically mediated compensation for coarticulation still as elusive as a white christmash. Cognitive Science, 47(9): e13342. doi:10.1111/cogs.13342.

    Abstract

    Luthra, Peraza-Santiago, Beeson, Saltzman, Crinnion, and Magnuson (2021) present data from the lexically mediated compensation for coarticulation paradigm that they claim provides conclusive evidence in favor of top-down processing in speech perception. We argue here that this evidence does not support that conclusion. The findings are open to alternative explanations, and we give data in support of one of them (that there is an acoustic confound in the materials). Lexically mediated compensation for coarticulation thus remains elusive, while prior data from the paradigm instead challenge the idea that there is top-down processing in online speech recognition.

  • Meeuwissen, M., Roelofs, A., & Levelt, W. J. M. (2003). Planning levels in naming and reading complex numerals. Memory & Cognition, 31(8), 1238-1249.

    Abstract

    On the basis of evidence from studies of the naming and reading of numerals, Ferrand (1999) argued that the naming of objects is slower than reading their names, due to a greater response uncertainty in naming than in reading, rather than to an obligatory conceptual preparation for naming, but not for reading. We manipulated the need for conceptual preparation, while keeping response uncertainty constant in the naming and reading of complex numerals. In Experiment 1, participants named three-digit Arabic numerals either as house numbers or clock times. House number naming latencies were determined mostly by morphophonological factors, such as morpheme frequency and the number of phonemes, whereas clock time naming latencies revealed an additional conceptual involvement. In Experiment 2, the numerals were presented in alphabetic format and had to be read aloud. Reading latencies were determined mostly by morphophonological factors in both modes. These results suggest that conceptual preparation, rather than response uncertainty, is responsible for the difference between naming and reading latencies.
  • Mehta, G., & Cutler, A. (1988). Detection of target phonemes in spontaneous and read speech. Language and Speech, 31, 135-156.

    Abstract

    Although spontaneous speech occurs more frequently in most listeners’ experience than read speech, laboratory studies of human speech recognition typically use carefully controlled materials read from a script. The phonological and prosodic characteristics of spontaneous and read speech differ considerably, however, which suggests that laboratory results may not generalize to the recognition of spontaneous speech. In the present study, listeners were presented with both spontaneous and read speech materials, and their response time to detect word-initial target phonemes was measured. Responses were, overall, equally fast in each speech mode. However, analysis of effects previously reported in phoneme detection studies revealed significant differences between speech modes. In read speech but not in spontaneous speech, later targets were detected more rapidly than earlier targets, and targets preceded by long words were detected more rapidly than targets preceded by short words. In contrast, in spontaneous speech but not in read speech, targets were detected more rapidly in accented than in unaccented words and in strong than in weak syllables. An explanation for this pattern is offered in terms of characteristic prosodic differences between spontaneous and read speech. The results support claims from previous work that listeners pay great attention to prosodic information in the process of recognizing speech.
  • Menenti, L. (2006). L2-L1 word association in bilinguals: Direct evidence. Nijmegen CNS, 1, 17-24.

    Abstract

    The Revised Hierarchical Model (Kroll and Stewart, 1994) assumes that words in a bilingual’s languages have separate word form representations but shared conceptual representations. Two routes lead from an L2 word form to its conceptual representation: the word association route, where concepts are accessed through the corresponding L1 word form, and the concept mediation route, with direct access from L2 to concepts. To investigate word association, we presented proficient late German-Dutch bilinguals with L2 non-cognate word pairs in which the L1 translation of the first word rhymed with the second word (e.g. GRAP (joke) – Witz – FIETS (bike)). If the first word in a pair activated its L1 equivalent, then a phonological priming effect on the second word was expected. Priming was observed in lexical decision but not in semantic decision (living/non-living) on L2 words. In a control group of Dutch native speakers, no priming effect was found. This suggests that proficient bilinguals still make use of their L1 word form lexicon to process L2 in lexical decision.
  • Meyer, A. S., Roelofs, A., & Levelt, W. J. M. (2003). Word length effects in object naming: The role of a response criterion. Journal of Memory and Language, 48(1), 131-147. doi:10.1016/S0749-596X(02)00509-0.

    Abstract

    According to Levelt, Roelofs, and Meyer (1999) speakers generate the phonological and phonetic representations of successive syllables of a word in sequence and only begin to speak after having fully planned at least one complete phonological word. Therefore, speech onset latencies should be longer for long than for short words. We tested this prediction in four experiments in which Dutch participants named or categorized objects with monosyllabic or disyllabic names. Experiment 1 yielded a length effect on production latencies when objects with long and short names were tested in separate blocks, but not when they were mixed. Experiment 2 showed that the length effect was not due to a difference in the ease of object recognition. Experiment 3 replicated the results of Experiment 1 using a within-participants design. In Experiment 4, the long and short target words appeared in a phrasal context. In addition to the speech onset latencies, we obtained the viewing times for the target objects, which have been shown to depend on the time necessary to plan the form of the target names. We found word length effects for both dependent variables, but only when objects with short and long names were presented in separate blocks. We argue that in pure and mixed blocks speakers used different response deadlines, which they tried to meet by either generating the motor programs for one syllable or for all syllables of the word before speech onset. Computer simulations using WEAVER++ support this view.
  • Meyer, A. S., Levelt, W. J. M., & Wissink, M. T. (1996). Een modulair model van zinsproductie. Logopedie, 9(2), 21-31.

    Abstract

    This contribution discusses a modular model of sentence production. The planning processes that precede the production of a sentence can be divided into two main components: conceptualisation (devising the content of the utterance) and formulation (determining its linguistic form). The formulation process in turn consists of two components, namely grammatical and phonological encoding. Each of these components again comprises a number of subcomponents. This article describes the specific task of each component, how it is carried out, and how the components work together. Some important methods of language production research are also discussed.
  • Meyer, A. S., & Wheeldon, L. (Eds.). (2006). Language production across the life span [Special Issue]. Language and Cognitive Processes, 21(1-3).
  • Meyer, A. S. (1996). Lexical access in phrase and sentence production: Results from picture-word interference experiments. Journal of Memory and Language, 35, 477-496. doi:10.1006/jmla.1996.0026.

    Abstract

    Four experiments investigated the span of advance planning for phrases and short sentences. Dutch subjects were presented with pairs of objects, which they named using noun-phrase conjunctions (e.g., the translation equivalent of “the arrow and the bag”) or sentences (“the arrow is next to the bag”). Each display was accompanied by an auditory distracter, which was related in form or meaning to the first or second noun of the utterance or unrelated to both. For sentences and phrases, the mean speech onset time was longer when the distracter was semantically related to the first or second noun and shorter when it was phonologically related to the first noun than when it was unrelated. No phonological facilitation was found for the second noun. This suggests that before utterance onset both target lemmas and the first target form were selected.
  • Meyer, A. S., Sleiderink, A. M., & Levelt, W. J. M. (1998). Viewing and naming objects: Eye movements during noun phrase production. Cognition, 66(2), B25-B33. doi:10.1016/S0010-0277(98)00009-2.

    Abstract

    Eye movements have been shown to reflect word recognition and language comprehension processes occurring during reading and auditory language comprehension. The present study examines whether the eye movements speakers make during object naming similarly reflect speech planning processes. In Experiment 1, speakers named object pairs saying, for instance, 'scooter and hat'. The objects were presented as ordinary line drawings or with partly deleted contours and had high or low frequency names. Contour type and frequency both significantly affected the mean naming latencies and the mean time spent looking at the objects. The frequency effects disappeared in Experiment 2, in which the participants categorized the objects instead of naming them. This suggests that the frequency effects of Experiment 1 arose during lexical retrieval. We conclude that eye movements during object naming indeed reflect linguistic planning processes and that the speakers' decision to move their eyes from one object to the next is contingent upon the retrieval of the phonological form of the object names.
  • Meyer, A. S. (2023). Timing in conversation. Journal of Cognition, 6(1), 1-17. doi:10.5334/joc.268.

    Abstract

    Turn-taking in everyday conversation is fast, with median latencies in corpora of conversational speech often reported to be under 300 ms. This seems like magic, given that experimental research on speech planning has shown that speakers need much more time to plan and produce even the shortest of utterances. This paper reviews how language scientists have combined linguistic analyses of conversations and experimental work to understand the skill of swift turn-taking and proposes a tentative solution to the riddle of fast turn-taking.
  • Mickan, A., McQueen, J. M., Brehm, L., & Lemhöfer, K. (2023). Individual differences in foreign language attrition: A 6-month longitudinal investigation after a study abroad. Language, Cognition and Neuroscience, 38(1), 11-39. doi:10.1080/23273798.2022.2074479.

    Abstract

    While recent laboratory studies suggest that the use of competing languages is a driving force in foreign language (FL) attrition (i.e. forgetting), research on “real” attriters has failed to demonstrate such a relationship. We addressed this issue in a large-scale longitudinal study, following German students throughout a study abroad in Spain and their first six months back in Germany. Monthly, percentage-based frequency of use measures enabled a fine-grained description of language use. L3 Spanish forgetting rates were indeed predicted by the quantity and quality of Spanish use, and correlated negatively with L1 German and positively with L2 English letter fluency. Attrition rates were furthermore influenced by prior Spanish proficiency, but not by motivation to maintain Spanish or non-verbal long-term memory capacity. Overall, this study highlights the importance of language use for FL retention and sheds light on the complex interplay between language use and other determinants of attrition.
  • Mishra, C., Offrede, T., Fuchs, S., Mooshammer, C., & Skantze, G. (2023). Does a robot’s gaze aversion affect human gaze aversion? Frontiers in Robotics and AI, 10: 1127626. doi:10.3389/frobt.2023.1127626.

    Abstract

    Gaze cues serve an important role in facilitating human conversations and are generally considered to be one of the most important non-verbal cues. Gaze cues are used to manage turn-taking, coordinate joint attention, regulate intimacy, and signal cognitive effort. In particular, it is well established that gaze aversion is used in conversations to avoid prolonged periods of mutual gaze. Given the numerous functions of gaze cues, there has been extensive work on modelling these cues in social robots. Researchers have also tried to identify the impact of robot gaze on human participants. However, the influence of robot gaze behavior on human gaze behavior has been less explored. We conducted a within-subjects user study (N = 33) to verify if a robot’s gaze aversion influenced human gaze aversion behavior. Our results show that participants tend to avert their gaze more when the robot keeps staring at them as compared to when the robot exhibits well-timed gaze aversions. We interpret our findings in terms of intimacy regulation: humans try to compensate for the robot’s lack of gaze aversion.
  • Mishra, C., Verdonschot, R. G., Hagoort, P., & Skantze, G. (2023). Real-time emotion generation in human-robot dialogue using large language models. Frontiers in Robotics and AI, 10: 1271610. doi:10.3389/frobt.2023.1271610.

    Abstract

    Affective behaviors enable social robots to not only establish better connections with humans but also serve as a tool for the robots to express their internal states. It has been well established that emotions are important to signal understanding in Human-Robot Interaction (HRI). This work aims to harness the power of Large Language Models (LLMs) and proposes an approach to control the affective behavior of robots. By interpreting emotion appraisal as an Emotion Recognition in Conversation (ERC) task, we used GPT-3.5 to predict the emotion of a robot’s turn in real-time, using the dialogue history of the ongoing conversation. The robot signaled the predicted emotion using facial expressions. The model was evaluated in a within-subjects user study (N = 47) where the model-driven emotion generation was compared against conditions where the robot did not display any emotions and where it displayed incongruent emotions. The participants interacted with the robot by playing a card sorting game that was specifically designed to evoke emotions. The results indicated that the emotions were reliably generated by the LLM and that the participants were able to perceive the robot’s emotions. A robot expressing congruent, model-driven facial emotion expressions was perceived to be significantly more human-like and emotionally appropriate, and elicited a more positive impression. Participants also scored significantly better in the card sorting game when the robot displayed congruent facial expressions. From a technical perspective, the study shows that LLMs can be used to control the affective behavior of robots reliably in real-time. Additionally, our results could be used in devising novel human-robot interactions, making robots more effective in roles where emotional interaction is important, such as therapy, companionship, or customer service.
  • Mitterer, H. (2006). On the causes of compensation for coarticulation: Evidence for phonological mediation. Perception & Psychophysics, 68(7), 1227-1240.

    Abstract

    This study examined whether compensation for coarticulation in fricative–vowel syllables is phonologically mediated or a consequence of auditory processes. Smits (2001a) had shown that compensation occurs for anticipatory lip rounding in a fricative caused by a following rounded vowel in Dutch. In a first experiment, the possibility that compensation is due to general auditory processing was investigated using nonspeech sounds. These did not cause context effects akin to compensation for coarticulation, although nonspeech sounds influenced speech sound identification in an integrative fashion. In a second experiment, a possible phonological basis for compensation for coarticulation was assessed by using audiovisual speech. Visual displays, which induced the perception of a rounded vowel, also influenced compensation for anticipatory lip rounding in the fricative. These results indicate that compensation for anticipatory lip rounding in fricative–vowel syllables is phonologically mediated. This result is discussed in the light of other compensation-for-coarticulation findings and general theories of speech perception.
  • Mitterer, H., Csépe, V., & Blomert, L. (2006). The role of perceptual integration in the recognition of assimilated word forms. Quarterly Journal of Experimental Psychology, 59(8), 1395-1424. doi:10.1080/17470210500198726.

    Abstract

    We investigated how spoken words are recognized when they have been altered by phonological assimilation. Previous research has shown that there is a process of perceptual compensation for phonological assimilations. Three recently formulated proposals regarding the mechanisms for compensation for assimilation make different predictions with regard to the level at which compensation is supposed to occur as well as regarding the role of specific language experience. In the present study, Hungarian words and nonwords, in which a viable and an unviable liquid assimilation was applied, were presented to Hungarian and Dutch listeners in an identification task and a discrimination task. Results indicate that viably changed forms are difficult to distinguish from canonical forms independent of experience with the assimilation rule applied in the utterances. This reveals that auditory processing contributes to perceptual compensation for assimilation, while language experience has only a minor role to play when identification is required.
  • Mitterer, H., Csépe, V., Honbolygo, F., & Blomert, L. (2006). The recognition of phonologically assimilated words does not depend on specific language experience. Cognitive Science, 30(3), 451-479. doi:10.1207/s15516709cog0000_57.

    Abstract

    In a series of 5 experiments, we investigated whether the processing of phonologically assimilated utterances is influenced by language learning. Previous experiments had shown that phonological assimilations, such as /lean#bacon/→[leam bacon], are compensated for in perception. In this article, we investigated whether compensation for assimilation can occur without experience with an assimilation rule using automatic event-related potentials. Our first experiment indicated that Dutch listeners compensate for a Hungarian assimilation rule. Two subsequent experiments, however, failed to show compensation for assimilation by both Dutch and Hungarian listeners. Two additional experiments showed that this was due to the acoustic properties of the assimilated utterance, confirming earlier reports that phonetic detail is important in compensation for assimilation. Our data indicate that compensation for assimilation can occur without experience with an assimilation rule, in line with phonetic–phonological theories that assume that speech production is influenced by speech-perception abilities.
  • Mitterer, H. (2006). Is vowel normalization independent of lexical processing? Phonetica, 63(4), 209-229. doi:10.1159/000097306.

    Abstract

    Vowel normalization in speech perception was investigated in three experiments. The range of the second formant in a carrier phrase was manipulated and this affected the perception of a target vowel in a compensatory fashion: A low F2 range in the carrier phrase made it more likely that the target vowel was perceived as a front vowel, that is, with a high F2. Recent experiments indicated that this effect might be moderated by the lexical status of the constituents of the carrier phrase. Manipulation of the lexical status in the present experiments, however, did not affect vowel normalization. In contrast, the range of vowels in the carrier phrase did influence vowel normalization. If the carrier phrase consisted of mid-to-high front vowels only, vowel categories shifted only for mid-to-high front vowels. It is argued that these results are a challenge for episodic models of word recognition.
  • Mitterer, H., & Ernestus, M. (2006). Listeners recover /t/s that speakers reduce: Evidence from /t/-lenition in Dutch. Journal of Phonetics, 34(1), 73-103. doi:10.1016/j.wocn.2005.03.003.

    Abstract

    In everyday speech, words may be reduced. Little is known about the consequences of such reductions for spoken word comprehension. This study investigated /t/-lenition in Dutch in two corpus studies and three perceptual experiments. The production studies revealed that /t/-lenition is most likely to occur after [s] and before bilabial consonants. The perception experiments showed that listeners take into account phonological context, phonetic detail, and the lexical status of the form in the interpretation of codas that may or may not contain a lenited word-final /t/. These results speak against models of word recognition that make hard decisions on a prelexical level.
  • Mitterer, H., & Stivers, T. (2006). Max-Planck-Institute for Psycholinguistics: Annual Report 2006. Nijmegen: MPI for Psycholinguistics.
  • Monaghan, P., Donnelly, S., Alcock, K., Bidgood, A., Cain, K., Durrant, S., Frost, R. L. A., Jago, L. S., Peter, M. S., Pine, J. M., Turnbull, H., & Rowland, C. F. (2023). Learning to generalise but not segment an artificial language at 17 months predicts children’s language skills 3 years later. Cognitive Psychology, 147: 101607. doi:10.1016/j.cogpsych.2023.101607.

    Abstract

    We investigated whether learning an artificial language at 17 months was predictive of children’s natural language vocabulary and grammar skills at 54 months. Children at 17 months listened to an artificial language containing non-adjacent dependencies, and were then tested on their learning to segment and to generalise the structure of the language. At 54 months, children were then tested on a range of standardised natural language tasks that assessed receptive and expressive vocabulary and grammar. A structural equation model demonstrated that artificial language generalisation at 17 months predicted language abilities – a composite of vocabulary and grammar skills – at 54 months, whereas artificial language segmentation at 17 months did not predict language abilities at this age. Artificial language learning tasks – especially those that probe grammar learning – provide a valuable tool for uncovering the mechanisms driving children’s early language development.

    Additional information

    supplementary data
  • Morison, L., Meffert, E., Stampfer, M., Steiner-Wilke, I., Vollmer, B., Schulze, K., Briggs, T., Braden, R., Vogel, A. P., Thompson-Lake, D., Patel, C., Blair, E., Goel, H., Turner, S., Moog, U., Riess, A., Liegeois, F., Koolen, D. A., Amor, D. J., Kleefstra, T., Fisher, S. E., Zweier, C., & Morgan, A. T. (2023). In-depth characterisation of a cohort of individuals with missense and loss-of-function variants disrupting FOXP2. Journal of Medical Genetics, 60(6), 597-607. doi:10.1136/jmg-2022-108734.

    Abstract

    Background
    Heterozygous disruptions of FOXP2 were the first identified molecular cause for severe speech disorder, childhood apraxia of speech (CAS); yet few cases have been reported, limiting knowledge of the condition.

    Methods
    Here we phenotyped 29 individuals from 18 families with pathogenic FOXP2-only variants (13 loss-of-function, 5 missense variants; 14 males; aged 2 years to 62 years). Health and development (cognitive, motor, social domains) were examined, including speech and language outcomes, with the first cross-linguistic analysis of English and German.

    Results
    Speech disorders were prevalent (24/26, 92%) and CAS was most common (23/26, 89%), with similar speech presentations across English and German. Speech was still impaired in adulthood and some speech sounds (e.g. ‘th’, ‘r’, ‘ch’, ‘j’) were never acquired. Language impairments (22/26, 85%) ranged from mild to severe. Comorbidities included feeding difficulties in infancy (10/27, 37%), fine (14/27, 52%) and gross (14/27, 52%) motor impairment, anxiety (6/28, 21%), depression (7/28, 25%), and sleep disturbance (11/15, 44%). Physical features were common (23/28, 82%) but with no consistent pattern. Cognition ranged from average to mildly impaired, and was incongruent with language ability; for example, seven participants with severe language disorder had average non-verbal cognition.

    Conclusions
    Although we identify increased prevalence of conditions like anxiety, depression and sleep disturbance, we confirm that the consequences of FOXP2 dysfunction remain relatively specific to speech disorder, as compared to other recently identified monogenic conditions associated with CAS. Thus, our findings reinforce that FOXP2 provides a valuable entry point for examining the neurobiological bases of speech disorder.
  • Mortensen, L., Meyer, A. S., & Humphreys, G. W. (2006). Age-related effects on speech production: A review. Language and Cognitive Processes, 21, 238-290. doi:10.1080/01690960444000278.

    Abstract

    In discourse, older adults tend to be more verbose and more disfluent than young adults, especially when the task is difficult and when it places few constraints on the content of the utterance. This may be due to (a) language-specific deficits in planning the content and syntactic structure of utterances or in selecting and retrieving words from the mental lexicon, (b) a general deficit in inhibiting irrelevant information, or (c) the selection of a specific speech style. The possibility that older adults have a deficit in lexical retrieval is supported by the results of picture naming studies, in which older adults have been found to name objects less accurately and more slowly than young adults, and by the results of definition naming studies, in which older adults have been found to experience more tip-of-the-tongue (TOT) states than young adults. The available evidence suggests that these age differences are largely due to weakening of the connections linking word lemmas to phonological word forms, though adults above 70 years of age may have an additional deficit in lemma selection.
  • Muhinyi, A., & Rowland, C. F. (2023). Contributions of abstract extratextual talk and interactive style to preschoolers’ vocabulary development. Journal of Child Language, 50(1), 198-213. doi:10.1017/S0305000921000696.

    Abstract

    Caregiver abstract talk during shared reading predicts preschool-age children’s vocabulary development. However, previous research has focused on level of abstraction with less consideration of the style of extratextual talk. Here, we investigated the relation between these two dimensions of extratextual talk, and their contributions to variance in children’s vocabulary skills. Caregiver level of abstraction was associated with an interactive reading style. Controlling for socioeconomic status and child age, high interactivity predicted children’s concurrent vocabulary skills whereas abstraction did not. Controlling for earlier vocabulary skills, neither dimension of the extratextual talk predicted later vocabulary. Theoretical and practical relevance are discussed.
  • Müller, O., & Hagoort, P. (2006). Access to lexical information in language comprehension: Semantics before syntax. Journal of Cognitive Neuroscience, 18(1), 84-96. doi:10.1162/089892906775249997.

    Abstract

    The recognition of a word makes available its semantic and syntactic properties. Using electrophysiological recordings, we investigated whether one set of these properties is available earlier than the other set. Dutch participants saw nouns on a computer screen and performed push-button responses: In one task, grammatical gender determined response hand (left/right) and semantic category determined response execution (go/no-go). In the other task, response hand depended on semantic category, whereas response execution depended on gender. During the latter task, response preparation occurred on no-go trials, as measured by the lateralized readiness potential: Semantic information was used for response preparation before gender information inhibited this process. Furthermore, an inhibition-related N2 effect occurred earlier for inhibition by semantics than for inhibition by gender. In summary, electrophysiological measures of both response preparation and inhibition indicated that the semantic word property was available earlier than the syntactic word property when participants read single words.
  • Murphy, S. K., Nolan, C. M., Huang, Z., Kucera, K. S., Freking, B. A., Smith, T. P., Leymaster, K. A., Weidman, J. R., & Jirtle, R. L. (2006). Callipyge mutation affects gene expression in cis: A potential role for chromatin structure. Genome Research, 16, 340-346. doi:10.1101/gr.4389306.

    Abstract

    Muscular hypertrophy in callipyge sheep results from a single nucleotide substitution located in the genomic interval between the imprinted Delta, Drosophila, Homolog-like 1 (DLK1) and Maternally Expressed Gene 3 (MEG3). The mechanism linking the mutation to muscle hypertrophy is unclear but involves DLK1 overexpression. The mutation is contained within CLPG1 transcripts produced from this region. Herein we show that CLPG1 is expressed prenatally in the hypertrophy-responsive longissimus dorsi muscle by all four possible genotypes, but postnatal expression is restricted to sheep carrying the mutation. Surprisingly, the mutation results in nonimprinted monoallelic transcription of CLPG1 from only the mutated allele in adult sheep, whereas it is expressed biallelically during prenatal development. We further demonstrate that local CpG methylation is altered by the presence of the mutation in longissimus dorsi of postnatal sheep. For 10 CpG sites flanking the mutation, methylation is similar prenatally across genotypes, but doubles postnatally in normal sheep. This normal postnatal increase in methylation is significantly repressed in sheep carrying one copy of the mutation, and repressed even further in sheep with two mutant alleles. The attenuation in methylation status in the callipyge sheep correlates with the onset of the phenotype, continued CLPG1 transcription, and high-level expression of DLK1. In contrast, normal sheep exhibit hypermethylation of this locus after birth and CLPG1 silencing, which coincides with DLK1 transcriptional repression. These data are consistent with the notion that the callipyge mutation inhibits perinatal nucleation of regional chromatin condensation resulting in continued elevated transcription of prenatal DLK1 levels in adult callipyge sheep. We propose a model incorporating these results that can also account for the enigmatic normal phenotype of homozygous mutant sheep.
  • Narasimhan, B., & Gullberg, M. (2006). Perspective-shifts in event descriptions in Tamil child language. Journal of Child Language, 33(1), 99-124. doi:10.1017/S0305000905007191.

    Abstract

    Children are able to take multiple perspectives in talking about entities and events. But the nature of children's sensitivities to the complex patterns of perspective-taking in adult language is unknown. We examine perspective-taking in four- and six-year-old Tamil-speaking children describing placement events, as reflected in the use of a general placement verb (veyyii ‘put’) versus two fine-grained caused posture expressions specifying orientation, either vertical (nikka veyyii ‘make stand’) or horizontal (paDka veyyii ‘make lie’). We also explore whether animacy systematically promotes shifts to a fine-grained perspective. The results show that four- and six-year-olds switch perspectives as flexibly and systematically as adults do. Animacy influences shifts to a fine-grained perspective similarly across age groups. However, unexpectedly, six-year-olds also display greater overall sensitivity to orientation, preferring the vertical over the horizontal caused posture expression. Despite early flexibility, the factors governing the patterns of perspective-taking on events are undergoing change even in later childhood, reminiscent of U-shaped semantic reorganizations observed in children's lexical knowledge. The present study points to the intriguing possibility that mechanisms that operate at the level of semantics could also influence subtle patterns of lexical choice and perspective-shifts.
  • Narasimhan, B. (2003). Motion events and the lexicon: The case of Hindi. Lingua, 113(2), 123-160. doi:10.1016/S0024-3841(02)00068-2.

    Abstract

    English, and a variety of Germanic languages, allow constructions such as the bottle floated into the cave, whereas languages such as Spanish, French, and Hindi are highly restricted in allowing manner of motion verbs to occur with path phrases. This typological observation has been accounted for in terms of the conflation of complex meaning in basic or derived verbs [Talmy, L., 1985. Lexicalization patterns: semantic structure in lexical forms. In: Shopen, T. (Ed.), Language Typology and Syntactic Description 3: Grammatical Categories and the Lexicon. Cambridge University Press, Cambridge, pp. 57–149; Levin, B., Rappaport-Hovav, M., 1995. Unaccusativity: At the Syntax–Lexical Semantics Interface. MIT Press, Cambridge, MA], or the presence of path “satellites” with special grammatical properties in the lexicon of languages such as English, which allow such phrasal combinations [cf. Talmy, L., 1985. Lexicalization patterns: semantic structure in lexical forms. In: Shopen, T. (Ed.), Language Typology and Syntactic Description 3: Grammatical Categories and the Lexicon. Cambridge University Press, Cambridge, pp. 57–149; Talmy, L., 1991. Path to realisation: via aspect and result. In: Proceedings of the Seventeenth Annual Meeting of the Berkeley Linguistics Society. Berkeley Linguistics Society, Berkeley, pp. 480–520]. I use data from Hindi to show that there is little empirical support for the claim that the constraint on the phrasal combination is correlated with differences in verb meaning or the presence of satellites in the lexicon of a language. However, proposals which eschew lexicalization accounts for more general aspectual constraints on the manner verb + path phrase combination in Spanish-type languages (Aske, J., 1989. Path Predicates in English and Spanish: A Closer Look. In: Proceedings of the Fifteenth Annual Meeting of the Berkeley Linguistics Society. Berkeley Linguistics Society, Berkeley, pp. 1–14) cannot account for the full range of data in Hindi either. On the basis of these facts, I argue that an empirically adequate account can be formulated in terms of a general mapping constraint based on whether the lexical requirements of the verb strictly or weakly constrain its syntactic privileges of occurrence. In Hindi, path phrases can combine with manner of motion verbs only to the degree that they are compatible with the semantic profile of the verb. Path phrases in English, on the other hand, can extend the verb's “semantic profile” subject to certain constraints. I suggest that path phrases are licensed in English by the semantic requirements of the “construction” in which they appear rather than by the selectional requirements of the verb (Fillmore, C., Kay, P., O'Connor, M.C., 1988. Regularity and idiomaticity in grammatical constructions. Language 64, 501–538; Jackendoff, 1990. Semantic Structures. MIT Press, Cambridge, MA; Goldberg, 1995. Constructions: A Construction Grammar Approach to Argument Structure. University of Chicago Press, Chicago and London).
  • Nederstigt, U. (2003). Auch and noch in child and adult German. Berlin: Mouton de Gruyter.
  • Nieuwland, M. S., & Van Berkum, J. J. A. (2006). When peanuts fall in love: N400 evidence for the power of discourse. Journal of Cognitive Neuroscience, 18(7), 1098-1111. doi:10.1162/jocn.2006.18.7.1098.

    Abstract

    In linguistic theories of how sentences encode meaning, a distinction is often made between the context-free rule-based combination of lexical–semantic features of the words within a sentence (“semantics”), and the contributions made by wider context (“pragmatics”). In psycholinguistics, this distinction has led to the view that listeners initially compute a local, context-independent meaning of a phrase or sentence before relating it to the wider context. An important aspect of such a two-step perspective on interpretation is that local semantics cannot initially be overruled by global contextual factors. In two spoken-language event-related potential experiments, we tested the viability of this claim by examining whether discourse context can overrule the impact of the core lexical–semantic feature animacy, considered to be an innate organizing principle of cognition. Two-step models of interpretation predict that verb–object animacy violations, as in “The girl comforted the clock,” will always perturb the unfolding interpretation process, regardless of wider context. When presented in isolation, such anomalies indeed elicit a clear N400 effect, a sign of interpretive problems. However, when the anomalies were embedded in a supportive context (e.g., a girl talking to a clock about his depression), this N400 effect disappeared completely. Moreover, given a suitable discourse context (e.g., a story about an amorous peanut), animacy-violating predicates (“the peanut was in love”) were actually processed more easily than canonical predicates (“the peanut was salted”). Our findings reveal that discourse context can immediately overrule local lexical–semantic violations, and therefore suggest that language comprehension does not involve an initially context-free semantic analysis.
  • Nieuwland, M. S., & Van Berkum, J. J. A. (2006). Individual differences and contextual bias in pronoun resolution: Evidence from ERPs. Brain Research, 1118(1), 155-167. doi:10.1016/j.brainres.2006.08.022.

    Abstract

    Although we usually have no trouble finding the right antecedent for a pronoun, the co-reference relations between pronouns and antecedents in everyday language are often ‘formally’ ambiguous. But a pronoun is only really ambiguous if a reader or listener indeed perceives it to be ambiguous. Whether this is the case may depend on at least two factors: the language processing skills of an individual reader, and the contextual bias towards one particular referential interpretation. In the current study, we used event-related brain potentials (ERPs) to explore how both these factors affect the resolution of referentially ambiguous pronouns. We compared ERPs elicited by formally ambiguous and non-ambiguous pronouns that were embedded in simple sentences (e.g., “Jennifer Lopez told Madonna that she had too much money.”). Individual differences in language processing skills were assessed with the Reading Span task, while the contextual bias of each sentence (up to the critical pronoun) had been assessed in a referential cloze pretest. In line with earlier research, ambiguous pronouns elicited a sustained, frontal negative shift relative to non-ambiguous pronouns at the group-level. The size of this effect was correlated with Reading Span score, as well as with contextual bias. These results suggest that whether a reader perceives a formally ambiguous pronoun to be ambiguous is subtly co-determined by both individual language processing skills and contextual bias.
  • Noordman, L. G. M., & Vonk, W. (1998). Memory-based processing in understanding causal information. Discourse Processes, 191-212. doi:10.1080/01638539809545044.

    Abstract

    The reading process depends both on the text and on the reader. When we read a text, propositions in the current input are matched to propositions in the memory representation of the previous discourse but also to knowledge structures in long‐term memory. Therefore, memory‐based text processing refers both to the bottom‐up processing of the text and to the top‐down activation of the reader's knowledge. In this article, we focus on the role of cognitive structures in the reader's knowledge. We argue that causality is an important category in structuring human knowledge and that this property has consequences for text processing. Some research is discussed that illustrates that the more the information in the text reflects causal categories, the more easily the information is processed.
  • Norris, D., Cutler, A., McQueen, J. M., & Butterfield, S. (2006). Phonological and conceptual activation in speech comprehension. Cognitive Psychology, 53(2), 146-193. doi:10.1016/j.cogpsych.2006.03.001.

    Abstract

    We propose that speech comprehension involves the activation of token representations of the phonological forms of current lexical hypotheses, separately from the ongoing construction of a conceptual interpretation of the current utterance. In a series of cross-modal priming experiments, facilitation of lexical decision responses to visual target words (e.g., time) was found for targets that were semantic associates of auditory prime words (e.g., date) when the primes were isolated words, but not when the same primes appeared in sentence contexts. Identity priming (e.g., faster lexical decisions to visual date after spoken date than after an unrelated prime) appeared, however, both with isolated primes and with primes in prosodically neutral sentences. Associative priming in sentence contexts only emerged when sentence prosody involved contrastive accents, or when sentences were terminated immediately after the prime. Associative priming is therefore not an automatic consequence of speech processing. In no experiment was there associative priming from embedded words (e.g., sedate-time), but there was inhibitory identity priming (e.g., sedate-date) from embedded primes in sentence contexts. Speech comprehension therefore appears to involve separate distinct activation both of token phonological word representations and of conceptual word representations. Furthermore, both of these types of representation are distinct from the long-term memory representations of word form and meaning.
  • Norris, D., McQueen, J. M., & Cutler, A. (2003). Perceptual learning in speech. Cognitive Psychology, 47(2), 204-238. doi:10.1016/S0010-0285(03)00006-9.

    Abstract

    This study demonstrates that listeners use lexical knowledge in perceptual learning of speech sounds. Dutch listeners first made lexical decisions on Dutch words and nonwords. The final fricative of 20 critical words had been replaced by an ambiguous sound, between [f] and [s]. One group of listeners heard ambiguous [f]-final words (e.g., [wItlo?], from witlof, chicory) and unambiguous [s]-final words (e.g., naaldbos, pine forest). Another group heard the reverse (e.g., ambiguous [na:ldbo?], unambiguous witlof). Listeners who had heard [?] in [f]-final words were subsequently more likely to categorize ambiguous sounds on an [f]–[s] continuum as [f] than those who heard [?] in [s]-final words. Control conditions ruled out alternative explanations based on selective adaptation and contrast. Lexical information can thus be used to train categorization of speech. This use of lexical information differs from the on-line lexical feedback embodied in interactive models of speech perception. In contrast to on-line feedback, lexical feedback for learning is of benefit to spoken word recognition (e.g., in adapting to a newly encountered dialect).
  • Norris, D., Butterfield, S., McQueen, J. M., & Cutler, A. (2006). Lexically guided retuning of letter perception. Quarterly Journal of Experimental Psychology, 59(9), 1505-1515. doi:10.1080/17470210600739494.

    Abstract

    Participants made visual lexical decisions to upper-case words and nonwords, and then categorized an ambiguous N–H letter continuum. The lexical decision phase included different exposure conditions: Some participants saw an ambiguous letter “?”, midway between N and H, in N-biased lexical contexts (e.g., REIG?), plus words with unambiguous H (e.g., WEIGH); others saw the reverse (e.g., WEIG?, REIGN). The first group categorized more of the test continuum as N than did the second group. Control groups, who saw “?” in nonword contexts (e.g., SMIG?), plus either of the unambiguous word sets (e.g., WEIGH or REIGN), showed no such subsequent effects. Perceptual learning about ambiguous letters therefore appears to be based on lexical knowledge, just as in an analogous speech experiment (Norris, McQueen, & Cutler, 2003) which showed similar lexical influence in learning about ambiguous phonemes. We argue that lexically guided learning is an efficient general strategy available for exploitation by different specific perceptual tasks.
  • Norris, D., & Cutler, A. (1988). Speech recognition in French and English. MRC News, 39, 30-31.
  • Norris, D., & Cutler, A. (1988). The relative accessibility of phonemes and syllables. Perception and Psychophysics, 43, 541-550. Retrieved from http://www.psychonomic.org/search/view.cgi?id=8530.

    Abstract

    Previous research comparing detection times for syllables and for phonemes has consistently found that syllables are responded to faster than phonemes. This finding poses theoretical problems for strictly hierarchical models of speech recognition, in which smaller units should be able to be identified faster than larger units. However, inspection of the characteristics of previous experiments’ stimuli reveals that subjects have been able to respond to syllables on the basis of only a partial analysis of the stimulus. In the present experiment, five groups of subjects listened to identical stimulus material. Phoneme and syllable monitoring under standard conditions was compared with monitoring under conditions in which near matches of target and stimulus occurred on no-response trials. In the latter case, when subjects were forced to analyze each stimulus fully, phonemes were detected faster than syllables.
  • Nota, N., Trujillo, J. P., & Holler, J. (2023). Specific facial signals associate with categories of social actions conveyed through questions. PLoS One, 18(7): e0288104. doi:10.1371/journal.pone.0288104.

    Abstract

    The early recognition of fundamental social actions, like questions, is crucial for understanding the speaker’s intended message and planning a timely response in conversation. Questions themselves may express more than one social action category (e.g., an information request “What time is it?”, an invitation “Will you come to my party?” or a criticism “Are you crazy?”). Although human language use occurs predominantly in a multimodal context, prior research on social actions has mainly focused on the verbal modality. This study breaks new ground by investigating how conversational facial signals may map onto the expression of different types of social actions conveyed through questions. The distribution, timing, and temporal organization of facial signals across social actions was analysed in a rich corpus of naturalistic, dyadic face-to-face Dutch conversations. These social actions were: Information Requests, Understanding Checks, Self-Directed questions, Stance or Sentiment questions, Other-Initiated Repairs, Active Participation questions, questions for Structuring, Initiating or Maintaining Conversation, and Plans and Actions questions. This is the first study to reveal differences in distribution and timing of facial signals across different types of social actions. The findings raise the possibility that facial signals may facilitate social action recognition during language processing in multimodal face-to-face interaction.

    Additional information

    supporting information
  • Nota, N., Trujillo, J. P., Jacobs, V., & Holler, J. (2023). Facilitating question identification through natural intensity eyebrow movements in virtual avatars. Scientific Reports, 13: 21295. doi:10.1038/s41598-023-48586-4.

    Abstract

    In conversation, recognizing social actions (similar to ‘speech acts’) early is important to quickly understand the speaker’s intended message and to provide a fast response. Fast turns are typical for fundamental social actions like questions, since a long gap can indicate a dispreferred response. In multimodal face-to-face interaction, visual signals may contribute to this fast dynamic. The face is an important source of visual signalling, and previous research found that prevalent facial signals such as eyebrow movements facilitate the rapid recognition of questions. We aimed to investigate whether early eyebrow movements with natural movement intensities facilitate question identification, and whether specific intensities are more helpful in detecting questions. Participants were instructed to view videos of avatars where the presence of eyebrow movements (eyebrow frown or raise vs. no eyebrow movement) was manipulated, and to indicate whether the utterance in the video was a question or statement. Results showed higher accuracies for questions with eyebrow frowns, and faster response times for questions with eyebrow frowns and eyebrow raises. No additional effect was observed for the specific movement intensity. This suggests that eyebrow movements that are representative of naturalistic multimodal behaviour facilitate question recognition.
  • Nota, N., Trujillo, J. P., & Holler, J. (2023). Conversational eyebrow frowns facilitate question identification: An online study using virtual avatars. Cognitive Science, 47(12): e13392. doi:10.1111/cogs.13392.

    Abstract

    Conversation is a time-pressured environment. Recognizing a social action (the ‘‘speech act,’’ such as a question requesting information) early is crucial in conversation to quickly understand the intended message and plan a timely response. Fast turns between interlocutors are especially relevant for responses to questions since a long gap may be meaningful by itself. Human language is multimodal, involving speech as well as visual signals from the body, including the face. But little is known about how conversational facial signals contribute to the communication of social actions. Some of the most prominent facial signals in conversation are eyebrow movements. Previous studies found links between eyebrow movements and questions, suggesting that these facial signals could contribute to the rapid recognition of questions. Therefore, we aimed to investigate whether early eyebrow movements (eyebrow frown or raise vs. no eyebrow movement) facilitate question identification. Participants were instructed to view videos of avatars where the presence of eyebrow movements accompanying questions was manipulated. Their task was to indicate whether the utterance was a question or a statement as accurately and quickly as possible. Data were collected using the online testing platform Gorilla. Results showed higher accuracies and faster response times for questions with eyebrow frowns, suggesting a facilitative role of eyebrow frowns for question identification. This means that facial signals can critically contribute to the communication of social actions in conversation by signaling social action-specific visual information and providing visual cues to speakers’ intentions.

    Additional information

    link to preprint
  • Nozais, V., Forkel, S. J., Petit, L., Talozzi, L., Corbetta, M., Thiebaut de Schotten, M., & Joliot, M. (2023). Atlasing white matter and grey matter joint contributions to resting-state networks in the human brain. Communications Biology, 6: 726. doi:10.1038/s42003-023-05107-3.

    Abstract

    Over the past two decades, the study of resting-state functional magnetic resonance imaging has revealed that functional connectivity within and between networks is linked to cognitive states and pathologies. However, the white matter connections supporting this connectivity remain only partially described. We developed a method to jointly map the white and grey matter contributing to each resting-state network (RSN). Using the Human Connectome Project, we generated an atlas of 30 RSNs. The method also highlighted the overlap between networks, which revealed that most of the brain’s white matter (89%) is shared between multiple RSNs, with 16% shared by at least 7 RSNs. These overlaps, especially the existence of regions shared by numerous networks, suggest that white matter lesions in these areas might strongly impact the communication within networks. We provide an atlas and an open-source software to explore the joint contribution of white and grey matter to RSNs and facilitate the study of the impact of white matter damage to these networks. In a first application of the software with clinical data, we were able to link stroke patients and impacted RSNs, showing that their symptoms aligned well with the estimated functions of the networks.
  • Numssen, O., van der Burght, C. L., & Hartwigsen, G. (2023). Revisiting the focality of non-invasive brain stimulation - implications for studies of human cognition. Neuroscience and Biobehavioral Reviews, 149: 105154. doi:10.1016/j.neubiorev.2023.105154.

    Abstract

    Non-invasive brain stimulation techniques are popular tools to investigate brain function in health and disease. Although transcranial magnetic stimulation (TMS) is widely used in cognitive neuroscience research to probe causal structure-function relationships, studies often yield inconclusive results. To improve the effectiveness of TMS studies, we argue that the cognitive neuroscience community needs to revise the stimulation focality principle – the spatial resolution with which TMS can differentially stimulate cortical regions. In the motor domain, TMS can differentiate between cortical muscle representations of adjacent fingers. However, this high degree of spatial specificity cannot be obtained in all cortical regions due to the influences of cortical folding patterns on the TMS-induced electric field. The region-dependent focality of TMS should be assessed a priori to estimate the experimental feasibility. Post-hoc simulations allow modeling of the relationship between cortical stimulation exposure and behavioral modulation by integrating data across stimulation sites or subjects.

  • Nyberg, L., Marklund, P., Persson, J., Cabeza, R., Forkstam, C., Petersson, K. M., & Ingvar, M. (2003). Common prefrontal activations during working memory, episodic memory, and semantic memory. Neuropsychologia, 41(3), 371-377. doi:10.1016/S0028-3932(02)00168-9.

    Abstract

    Regions of the prefrontal cortex (PFC) are typically activated in many different cognitive functions. In most studies, the focus has been on the role of specific PFC regions in specific cognitive domains, but more recently similarities in PFC activations across cognitive domains have been stressed. Such similarities may suggest that a region mediates a common function across a variety of cognitive tasks. In this study, we compared the activation patterns associated with tests of working memory, semantic memory and episodic memory. The results converged on a general involvement of four regions across memory tests. These were located in left frontopolar cortex, left mid-ventrolateral PFC, left mid-dorsolateral PFC and dorsal anterior cingulate cortex. These findings provide evidence that some PFC regions are engaged during many different memory tests. The findings are discussed in relation to theories about the functional contribution of the PFC regions and the architecture of memory.
  • Nyberg, L., Sandblom, J., Jones, S., Stigsdotter Neely, A., Petersson, K. M., Ingvar, M., & Bäckman, L. (2003). Neural correlates of training-related memory improvement in adulthood and aging. Proceedings of the National Academy of Sciences of the United States of America, 100(23), 13728-13733. doi:10.1073/pnas.1735487100.

    Abstract

    Cognitive studies show that both younger and older adults can increase their memory performance after training in using a visuospatial mnemonic, although age-related memory deficits tend to be magnified rather than reduced after training. Little is known about the changes in functional brain activity that accompany training-induced memory enhancement, and whether age-related activity changes are associated with the size of training-related gains. Here, we demonstrate that younger adults show increased activity during memory encoding in occipito-parietal and frontal brain regions after learning the mnemonic. Older adults did not show increased frontal activity, and only those elderly persons who benefited from the mnemonic showed increased occipito-parietal activity. These findings suggest that age-related differences in cognitive reserve capacity may reflect both a frontal processing deficiency and a posterior production deficiency.
  • O'Brien, D. P., & Bowerman, M. (1998). Martin D. S. Braine (1926–1996): Obituary. American Psychologist, 53, 563. doi:10.1037/0003-066X.53.5.563.

    Abstract

    Memorializes Martin D. S. Braine, whose research on child language acquisition and on both child and adult thinking and reasoning had a major influence on modern cognitive psychology. Addressing meaning as well as position, Braine argued that children start acquiring language by learning narrow-scope positional formulas that map components of meaning to positions in the utterance. These proposals were critical in starting discussions of the possible universality of the pivot-grammar stage and of the role of syntax, semantics, and pragmatics in children's early grammar and were pivotal to the rise of approaches in which cognitive development in language acquisition is stressed.
  • O'Connor, L. (2006). [Review of the book Toward a cognitive semantics: Concept structuring systems by Leonard Talmy]. Journal of Pragmatics, 38(7), 1126-1134. doi:10.1016/j.pragma.2005.08.007.
  • Ogdie, M. N., MacPhie, I. L., Minassian, S. L., Yang, M., Fisher, S. E., Francks, C., Cantor, R. M., McCracken, J. T., McGough, J. J., Nelson, S. F., Monaco, A. P., & Smalley, S. L. (2003). A genomewide scan for Attention-Deficit/Hyperactivity Disorder in an extended sample: Suggestive linkage on 17p11. American Journal of Human Genetics, 72(5), 1268-1279. doi:10.1086/375139.

    Abstract

    Attention-deficit/hyperactivity disorder (ADHD [MIM 143465]) is a common, highly heritable neurobehavioral disorder of childhood onset, characterized by hyperactivity, impulsivity, and/or inattention. As part of an ongoing study of the genetic etiology of ADHD, we have performed a genomewide linkage scan in 204 nuclear families comprising 853 individuals and 270 affected sibling pairs (ASPs). Previously, we reported genomewide linkage analysis of a “first wave” of these families composed of 126 ASPs. A follow-up investigation of one region on 16p yielded significant linkage in an extended sample. The current study extends the original sample of 126 ASPs to 270 ASPs and provides linkage analyses of the entire sample, using polymorphic microsatellite markers that define an ∼10-cM map across the genome. Maximum LOD score (MLS) analysis identified suggestive linkage for 17p11 (MLS=2.98) and four nominal regions with MLS values >1.0, including 5p13, 6q14, 11q25, and 20q13. These data, taken together with the fine mapping on 16p13, suggest two regions as highly likely to harbor risk genes for ADHD: 16p13 and 17p11. Interestingly, both regions, as well as 5p13, have been highlighted in genomewide scans for autism.
  • Ogdie, M. N., Bakker, S. C., Fisher, S. E., Francks, C., Yang, M. H., Cantor, R. M., Loo, S. K., Van der Meulen, E., Pearson, P., Buitelaar, J., Monaco, A., Nelson, S. F., Sinke, R. J., & Smalley, S. L. (2006). Pooled genome-wide linkage data on 424 ADHD ASPs suggests genetic heterogeneity and a common risk locus at 5p13 [Letter to the editor]. Molecular Psychiatry, 11, 5-8. doi:10.1038/sj.mp.4001760.
  • Oliveira‑Stahl, G., Farboud, S., Sterling, M. L., Heckman, J. J., Van Raalte, B., Lenferink, D., Van der Stam, A., Smeets, C. J. L. M., Fisher, S. E., & Englitz, B. (2023). High-precision spatial analysis of mouse courtship vocalization behavior reveals sex and strain differences. Scientific Reports, 13: 5219. doi:10.1038/s41598-023-31554-3.

    Abstract

    Mice display a wide repertoire of vocalizations that varies with sex, strain, and context. Especially during social interaction, including sexually motivated dyadic interaction, mice emit sequences of ultrasonic vocalizations (USVs) of high complexity. As animals of both sexes vocalize, a reliable attribution of USVs to their emitter is essential. The state-of-the-art in sound localization for USVs in 2D allows spatial localization at a resolution of multiple centimeters. However, animals interact at closer ranges, e.g. snout-to-snout. Hence, improved algorithms are required to reliably assign USVs. We present a novel algorithm, SLIM (Sound Localization via Intersecting Manifolds), that achieves a 2–3-fold improvement in accuracy (13.1–14.3 mm) using only 4 microphones and extends to many microphones and localization in 3D. This accuracy allows reliable assignment of 84.3% of all USVs in our dataset. We apply SLIM to courtship interactions between adult C57Bl/6J wildtype mice and those carrying a heterozygous Foxp2 variant (R552H). The improved spatial accuracy reveals that vocalization behavior is dependent on the spatial relation between the interacting mice. Female mice vocalized more in close snout-to-snout interaction while male mice vocalized more when the male snout was in close proximity to the female's ano-genital region. Further, we find that the acoustic properties of the ultrasonic vocalizations (duration, Wiener Entropy, and sound level) are dependent on the spatial relation between the interacting mice as well as on the genotype. In conclusion, the improved attribution of vocalizations to their emitters provides a foundation for better understanding social vocal behaviors.

  • Otake, T., & Cutler, A. (Eds.). (1996). Phonological structure and language processing: Cross-linguistic studies. Berlin: Mouton de Gruyter.
  • Otake, T., Yoneyama, K., Cutler, A., & van der Lugt, A. (1996). The representation of Japanese moraic nasals. Journal of the Acoustical Society of America, 100, 3831-3842. doi:10.1121/1.417239.

    Abstract

    Nasal consonants in syllabic coda position in Japanese assimilate to the place of articulation of a following consonant. The resulting forms may be perceived as different realizations of a single underlying unit, and indeed the kana orthographies represent them with a single character. In the present study, Japanese listeners' response time to detect nasal consonants was measured. Nasals in coda position, i.e., moraic nasals, were detected faster and more accurately than nonmoraic nasals, as reported in previous studies. The place of articulation with which moraic nasals were realized affected neither response time nor accuracy. Non-native subjects who knew no Japanese, given the same materials with the same instructions, simply failed to respond to moraic nasals which were realized bilabially. When the nasals were cross-spliced across place of articulation contexts the Japanese listeners still showed no significant place of articulation effects, although responses were faster and more accurate to unspliced than to cross-spliced nasals. When asked to detect the phoneme following the (cross-spliced) moraic nasal, Japanese listeners showed effects of mismatch between nasal and context, but non-native listeners did not. Together, these results suggest that Japanese listeners are capable of very rapid abstraction from phonetic realization to a unitary representation of moraic nasals; but they can also use the phonetic realization of a moraic nasal effectively to obtain anticipatory information about following phonemes.
  • Özer, D., Karadöller, D. Z., Özyürek, A., & Göksun, T. (2023). Gestures cued by demonstratives in speech guide listeners' visual attention during spatial language comprehension. Journal of Experimental Psychology: General, 152(9), 2623-2635. doi:10.1037/xge0001402.

    Abstract

    Gestures help speakers and listeners during communication and thinking, particularly for visual-spatial information. Speakers tend to use gestures to complement the accompanying spoken deictic constructions, such as demonstratives, when communicating spatial information (e.g., saying “The candle is here” and gesturing to the right side to express that the candle is on the speaker's right). Visual information conveyed by gestures enhances listeners’ comprehension. Whether and how listeners allocate overt visual attention to gestures in different speech contexts is mostly unknown. We asked if (a) listeners gazed at gestures more when they complement demonstratives in speech (“here”) compared to when they express redundant information to speech (e.g., “right”) and (b) gazing at gestures related to listeners’ information uptake from those gestures. We demonstrated that listeners fixated gestures more when they expressed complementary than redundant information in the accompanying speech. Moreover, overt visual attention to gestures did not predict listeners’ comprehension. These results suggest that the heightened communicative value of gestures as signaled by external cues, such as demonstratives, guides listeners’ visual attention to gestures. However, overt visual attention does not seem to be necessary to extract the cued information from the multimodal message.
  • Ozyurek, A. (1996). How children talk about a conversation. Journal of Child Language, 23(3), 693-714. doi:10.1017/S0305000900009004.

    Abstract

    This study investigates how children of different ages talk about a conversation that they have witnessed. 48 Turkish children, five, nine and thirteen years in age, saw a televised dialogue between two Sesame Street characters (Bert and Ernie). Afterward, they narrated what they had seen and heard. Their reports were analysed for the development of linguistic devices used to orient their listeners to the relevant properties of a conversational exchange. Each utterance in the child's narrative was analysed as to its conversational role: (1) whether the child used direct or indirect quotation frames; (2) whether the child marked the boundaries of conversational turns using speakers' names and (3) whether the child used a marker for pairing of utterances made by different speakers (agreement-disagreement, request-refusal, questioning-answering). Within pairings, children's use of (a) the temporal and evaluative connectivity markers and (b) the kind of verb of saying were identified. The data indicate that there is a developmental change in children's ability to use appropriate linguistic means to orient their listeners to the different properties of a conversation. The development and use of these linguistic means enable the child to establish different social roles in a narrative interaction. The findings are interpreted in terms of the child's social-communicative development from being a 'character' to becoming a 'narrator' and 'author' of the reported conversation in the narrative situation.
  • Paracchini, S., Thomas, A., Castro, S., Lai, C., Paramasivam, M., Wang, Y., Keating, B. J., Taylor, J. M., Hacking, D. F., Scerri, T., Francks, C., Richardson, A. J., Wade-Martins, R., Stein, J. F., Knight, J. C., Copp, A. J., LoTurco, J., & Monaco, A. P. (2006). The chromosome 6p22 haplotype associated with dyslexia reduces the expression of KIAA0319, a novel gene involved in neuronal migration. Human Molecular Genetics, 15(10), 1659-1666. doi:10.1093/hmg/ddl089.

    Abstract

    Dyslexia is one of the most prevalent childhood cognitive disorders, affecting approximately 5% of school-age children. We have recently identified a risk haplotype associated with dyslexia on chromosome 6p22.2 which spans the TTRAP gene and portions of THEM2 and KIAA0319. Here we show that in the presence of the risk haplotype, the expression of the KIAA0319 gene is reduced but the expression of the other two genes remains unaffected. Using in situ hybridization, we detect a very distinct expression pattern of the KIAA0319 gene in the developing cerebral neocortex of mouse and human fetuses. Moreover, interference with rat Kiaa0319 expression in utero leads to impaired neuronal migration in the developing cerebral neocortex. These data suggest a direct link between a specific genetic background and a biological mechanism leading to the development of dyslexia: the risk haplotype on chromosome 6p22.2 down-regulates the KIAA0319 gene which is required for neuronal migration during the formation of the cerebral neocortex.
  • Parkes, L. M., Bastiaansen, M. C. M., & Norris, D. G. (2006). Combining EEG and fMRI to investigate the postmovement beta rebound. NeuroImage, 29(3), 685-696. doi:10.1016/j.neuroimage.2005.08.018.

    Abstract

    The relationship between synchronous neuronal activity as measured with EEG and the blood oxygenation level dependent (BOLD) signal as measured during fMRI is not clear. This work investigates the relationship by combining EEG and fMRI measures of the strong increase in beta frequency power following movement, the so-called post-movement beta rebound (PMBR). The time course of the PMBR, as measured by EEG, was included as a regressor in the fMRI analysis, allowing identification of a region of associated BOLD signal increase in the sensorimotor cortex, with the most significant region in the post-central sulcus. The increase in the BOLD signal suggests that the number of active neurons and/or their synaptic rate is increased during the PMBR. The duration of the BOLD response curve in the PMBR region is significantly longer than in the activated motor region, and is well fitted by a model including both motor and PMBR regressors. An intersubject correlation between the BOLD signal amplitude associated with the PMBR regressor and the PMBR strength as measured with EEG provides further evidence that this region is a source of the PMBR. There is a strong intra-subject correlation between the BOLD signal amplitude in the sensorimotor cortex during movement and the PMBR strength as measured by EEG, suggesting either that the motor activity itself, or somatosensory inputs associated with the motor activity, influence the PMBR. This work provides further evidence for a BOLD signal change associated with changes in neuronal synchrony, so opening up the possibility of studying other event-related oscillatory changes using fMRI.
  • Parlatini, V., Itahashi, T., Lee, Y., Liu, S., Nguyen, T. T., Aoki, Y. Y., Forkel, S. J., Catani, M., Rubia, K., Zhou, J. H., Murphy, D. G., & Cortese, S. (2023). White matter alterations in Attention-Deficit/Hyperactivity Disorder (ADHD): a systematic review of 129 diffusion imaging studies with meta-analysis. Molecular Psychiatry, 28, 4098-4123. doi:10.1038/s41380-023-02173-1.

    Abstract

    Aberrant anatomical brain connections in attention-deficit/hyperactivity disorder (ADHD) are reported inconsistently across diffusion weighted imaging (DWI) studies. Based on a pre-registered protocol (Prospero: CRD42021259192), we searched PubMed, Ovid, and Web of Knowledge until 26/03/2022 to conduct a systematic review of DWI studies. We performed a quality assessment based on imaging acquisition, preprocessing, and analysis. Using signed differential mapping, we meta-analyzed a subset of the retrieved studies amenable to quantitative evidence synthesis, i.e., tract-based spatial statistics (TBSS) studies, in individuals of any age and, separately, in children, adults, and high-quality datasets. Finally, we conducted meta-regressions to test the effect of age, sex, and medication-naïvety. We included 129 studies (6739 ADHD participants and 6476 controls), of which 25 TBSS studies provided peak coordinates for case-control differences in fractional anisotropy (FA) (32 datasets) and 18 in mean diffusivity (MD) (23 datasets). The systematic review highlighted white matter alterations (especially reduced FA) in projection, commissural and association pathways of individuals with ADHD, which were associated with symptom severity and cognitive deficits. The meta-analysis showed a consistent reduced FA in the splenium and body of the corpus callosum, extending to the cingulum. Lower FA was related to older age, and case-control differences did not survive in the pediatric meta-analysis. About 68% of studies were of low quality, mainly due to acquisitions with non-isotropic voxels or lack of motion correction; and the sensitivity analysis in high-quality datasets yielded no significant results. Findings suggest prominent alterations in posterior interhemispheric connections subserving cognitive and motor functions affected in ADHD, although these might be influenced by non-optimal acquisition parameters/preprocessing. Absence of findings in children may be related to the late development of callosal fibers, which may enhance case-control differences in adulthood. Clinicodemographic and methodological differences were major barriers to consistency and comparability among studies, and should be addressed in future investigations.
  • Passmore, S., Barth, W., Greenhill, S. J., Quinn, K., Sheard, C., Argyriou, P., Birchall, J., Bowern, C., Calladine, J., Deb, A., Diederen, A., Metsäranta, N. P., Araujo, L. H., Schembri, R., Hickey-Hall, J., Honkola, T., Mitchell, A., Poole, L., Rácz, P. M., Roberts, S. G., Ross, R. M., Thomas-Colquhoun, E., Evans, N., & Jordan, F. M. (2023). Kinbank: A global database of kinship terminology. PLOS ONE, 18: e0283218. doi:10.1371/journal.pone.0283218.

    Abstract

    For a single species, human kinship organization is both remarkably diverse and strikingly organized. Kinship terminology is the structured vocabulary used to classify, refer to, and address relatives and family. Diversity in kinship terminology has been analyzed by anthropologists for over 150 years, although recurrent patterning across cultures remains incompletely explained. Despite the wealth of kinship data in the anthropological record, comparative studies of kinship terminology are hindered by data accessibility. Here we present Kinbank, a new database of 210,903 kinterms from a global sample of 1,229 spoken languages. Using open-access and transparent data provenance, Kinbank offers an extensible resource for kinship terminology, enabling researchers to explore the rich diversity of human family organization and to test longstanding hypotheses about the origins and drivers of recurrent patterns. We illustrate our contribution with two examples. We demonstrate strong gender bias in the phonological structure of parent terms across 1,022 languages, and we show that there is no evidence for a coevolutionary relationship between cross-cousin marriage and bifurcate-merging terminology in Bantu languages. Analysing kinship data is notoriously challenging; Kinbank aims to eliminate data accessibility issues from that challenge and provide a platform to build an interdisciplinary understanding of kinship.

  • Paterson, K. B., Liversedge, S. P., Rowland, C. F., & Filik, R. (2003). Children's comprehension of sentences with focus particles. Cognition, 89(3), 263-294. doi:10.1016/S0010-0277(03)00126-4.

    Abstract

    We report three studies investigating children's and adults' comprehension of sentences containing the focus particle only. In Experiments 1 and 2, four groups of participants (6–7 years, 8–10 years, 11–12 years and adult) compared sentences with only in different syntactic positions against pictures that matched or mismatched events described by the sentence. Contrary to previous findings (Crain, S., Ni, W., & Conway, L. (1994). Learning, parsing and modularity. In C. Clifton, L. Frazier, & K. Rayner (Eds.), Perspectives on sentence processing. Hillsdale, NJ: Lawrence Erlbaum; Philip, W., & Lynch, E. (1999). Felicity, relevance, and acquisition of the grammar of every and only. In S. C. Howell, S. A. Fish, & T. Keith-Lucas (Eds.), Proceedings of the 24th annual Boston University conference on language development. Somerville, MA: Cascadilla Press) we found that young children predominantly made errors by failing to process contrast information rather than errors in which they failed to use syntactic information to restrict the scope of the particle. Experiment 3 replicated these findings with pre-schoolers.
  • Paulat, N. S., Storer, J. M., Moreno-Santillán, D. D., Osmanski, A. B., Sullivan, K. A. M., Grimshaw, J. R., Korstian, J., Halsey, M., Garcia, C. J., Crookshanks, C., Roberts, J., Smit, A. F. A., Hubley, R., Rosen, J., Teeling, E. C., Vernes, S. C., Myers, E., Pippel, M., Brown, T., Hiller, M., Zoonomia Consortium, Rojas, D., Dávalos, L. M., Lindblad-Toh, K., Karlsson, E. K., & Ray, D. A. (2023). Chiropterans are a hotspot for horizontal transfer of DNA transposons in Mammalia. Molecular Biology and Evolution, 40(5): msad092. doi:10.1093/molbev/msad092.

    Abstract

    Horizontal transfer of transposable elements (TEs) is an important mechanism contributing to genetic diversity and innovation. Bats (order Chiroptera) have repeatedly been shown to experience horizontal transfer of TEs at what appears to be a high rate compared with other mammals. We investigated the occurrence of horizontally transferred (HT) DNA transposons involving bats. We found over 200 putative HT elements within bats; 16 transposons were shared across distantly related mammalian clades, and 2 other elements were shared with a fish and two lizard species. Our results indicate that bats are a hotspot for horizontal transfer of DNA transposons. These events broadly coincide with the diversification of several bat clades, supporting the hypothesis that DNA transposon invasions have contributed to genetic diversification of bats.

