Publications

  • Lausberg, H., & Kita, S. (2002). Dissociation of right and left hand gesture spaces in split-brain patients. Cortex, 38(5), 883-886. doi:10.1016/S0010-9452(08)70062-5.

    Abstract

    The present study investigates hemispheric specialisation in the use of space in communicative gestures. For this purpose, we investigate split-brain patients, in whom spontaneous and distinct right hand gestures can only be controlled by the left hemisphere and, vice versa, the left hand only by the right hemisphere. On this anatomical basis, we can infer hemispheric specialisation from the performances of the right and left hands. In contrast to left hand dyspraxia in tasks that require language processing, split-brain patients utilise their left hands in a meaningful way in visuo-constructive tasks such as copying drawings or block design. Therefore, we conjecture that split-brain patients are capable of using their left hands to communicate the content of visuo-spatial animations via gestural demonstration. On this basis, we further examine the use of space in communicative gestures by the right and left hands. McNeill and Pedelty (1995) noted for the split-brain patient N.G. that her iconic right hand gestures were exclusively displayed in the right personal space. The present study systematically investigates whether there is an indication of neglect of the left personal space in right hand gestures in split-brain patients.
  • Lee, C., Jessop, A., Bidgood, A., Peter, M. S., Pine, J. M., Rowland, C. F., & Durrant, S. (2023). How executive functioning, sentence processing, and vocabulary are related at 3 years of age. Journal of Experimental Child Psychology, 233: 105693. doi:10.1016/j.jecp.2023.105693.

    Abstract

    There is a wealth of evidence demonstrating that executive function (EF) abilities are positively associated with language development during the preschool years, such that children with good executive functions also have larger vocabularies. However, why this is the case remains to be discovered. In this study, we focused on the hypothesis that sentence processing abilities mediate the association between EF skills and receptive vocabulary knowledge, in that the speed of language acquisition is at least partially dependent on a child’s processing ability, which is itself dependent on executive control. We tested this hypothesis in longitudinal data from a cohort of 3- and 4-year-old children at three age points (37, 43, and 49 months). We found evidence, consistent with previous research, for a significant association between three EF skills (cognitive flexibility, working memory [as measured by the Backward Digit Span], and inhibition) and receptive vocabulary knowledge across this age range. However, only one of the tested sentence processing abilities (the ability to maintain multiple possible referents in mind) significantly mediated this relationship and only for one of the tested EFs (inhibition). The results suggest that children who are better able to inhibit incorrect responses are also better able to maintain multiple possible referents in mind while a sentence unfolds, a sophisticated sentence processing ability that may facilitate vocabulary learning from complex input.

    Additional information

    table S1; code and data
  • Lehecka, T. (2023). Normative ratings for 111 Swedish nouns and corresponding picture stimuli. Nordic Journal of Linguistics, 46(1), 20-45. doi:10.1017/S0332586521000123.

    Abstract

    Normative ratings are a means to control for the effects of confounding variables in psycholinguistic experiments. This paper introduces a new dataset of normative ratings for Swedish encompassing 111 concrete nouns and the corresponding picture stimuli in the MultiPic database (Duñabeitia et al. 2017). The norms for name agreement, category typicality, age of acquisition and subjective frequency were collected using online surveys among native speakers of the Finland-Swedish variety of Swedish. The paper discusses the inter-correlations between these variables and compares them against available ratings for other languages. In doing so, the paper argues that ratings for age of acquisition and subjective frequency collected for other languages may be applied to psycholinguistic studies on Finland-Swedish, at least with respect to concrete and highly imageable nouns. In contrast, norms for name agreement should be collected from speakers of the same language variety as represented by the subjects in the actual experiments.
  • Lei, A., Willems, R. M., & Eekhof, L. S. (2023). Emotions, fast and slow: Processing of emotion words is affected by individual differences in need for affect and narrative absorption. Cognition and Emotion, 37(5), 997-1005. doi:10.1080/02699931.2023.2216445.

    Abstract

    Emotional words have consistently been shown to be processed differently than neutral words. However, few studies have examined individual variability in emotion word processing with longer, ecologically valid stimuli (beyond isolated words, sentences, or paragraphs). In the current study, we re-analysed eye-tracking data collected during story reading to reveal how individual differences in need for affect and narrative absorption impact the speed of emotion word reading. Word emotionality was indexed by affective-aesthetic potentials (AAP) calculated by a sentiment analysis tool. We found that individuals with higher levels of need for affect and narrative absorption read positive words more slowly. On the other hand, these individual differences did not influence the reading time of more negative words, suggesting that high need for affect and narrative absorption are characterised by a positivity bias only. In general, unlike most previous studies using more isolated emotion word stimuli, we observed a quadratic (U-shaped) effect of word emotionality on reading speed, such that both positive and negative words were processed more slowly than neutral words. Taken together, this study emphasises the importance of taking into account individual differences and task context when studying emotion word processing.
  • Lemaitre, H., Le Guen, Y., Tilot, A. K., Stein, J. L., Philippe, C., Mangin, J.-F., Fisher, S. E., & Frouin, V. (2023). Genetic variations within human gained enhancer elements affect human brain sulcal morphology. NeuroImage, 265: 119773. doi:10.1016/j.neuroimage.2022.119773.

    Abstract

    The expansion of the cerebral cortex is one of the most distinctive changes in the evolution of the human brain. Cortical expansion and related increases in cortical folding may have contributed to emergence of our capacities for high-order cognitive abilities. Molecular analysis of humans, archaic hominins, and non-human primates has allowed identification of chromosomal regions showing evolutionary changes at different points of our phylogenetic history. In this study, we assessed the contributions of genomic annotations spanning 30 million years to human sulcal morphology measured via MRI in more than 18,000 participants from the UK Biobank. We found that variation within brain-expressed human gained enhancers, regulatory genetic elements that emerged since our last common ancestor with Old World monkeys, explained more trait heritability than expected for the left and right calloso-marginal posterior fissures and the right central sulcus. Intriguingly, these are sulci that have been previously linked to the evolution of locomotion in primates and later on bipedalism in our hominin ancestors.

    Additional information

    tables
  • De León, L., & Levinson, S. C. (Eds.). (1992). Space in Mesoamerican languages [Special Issue]. Zeitschrift für Phonetik, Sprachwissenschaft und Kommunikationsforschung, 45(6).
  • Levelt, W. J. M. (2002). Picture naming and word frequency: Comments on Alario, Costa and Caramazza, Language and Cognitive Processes, 17(3), 299-319. Language and Cognitive Processes, 17(6), 663-671. doi:10.1080/01690960143000443.

    Abstract

    This commentary on Alario et al. (2002) addresses two issues: (1) contrary to what the authors suggest, there are no theories of production claiming the phonological word to be the upper bound of advance planning before the onset of articulation; (2) their picture naming study of word frequency effects on speech onset is inconclusive for lack of a crucial control, viz., of object recognition latency. This is a perennial problem in picture naming studies of word frequency and age of acquisition effects.
  • Levelt, W. J. M. (1992). Accessing words in speech production: Stages, processes and representations. Cognition, 42, 1-22. doi:10.1016/0010-0277(92)90038-J.

    Abstract

    This paper introduces a special issue of Cognition on lexical access in speech production. Over the last quarter century, the psycholinguistic study of speaking, and in particular of accessing words in speech, received a major new impetus from the analysis of speech errors, dysfluencies and hesitations, from aphasiology, and from new paradigms in reaction time research. The emerging theoretical picture partitions the accessing process into two subprocesses, the selection of an appropriate lexical item (a “lemma”) from the mental lexicon, and the phonological encoding of that item, that is, the computation of a phonetic program for the item in the context of utterance. These two theoretical domains are successively introduced by outlining some core issues that have been or still have to be addressed. The final section discusses the controversial question whether phonological encoding can affect lexical selection. This partitioning is also followed in this special issue as a whole. There are, first, four papers on lexical selection, then three papers on phonological encoding, and finally one on the interaction between selection and phonological encoding.
  • Levelt, W. J. M., Praamstra, P., Meyer, A. S., Helenius, P., & Salmelin, R. (1998). An MEG study of picture naming. Journal of Cognitive Neuroscience, 10(5), 553-567. doi:10.1162/089892998562960.

    Abstract

    The purpose of this study was to relate a psycholinguistic processing model of picture naming to the dynamics of cortical activation during picture naming. The activation was recorded from eight Dutch subjects with a whole-head neuromagnetometer. The processing model, based on extensive naming latency studies, is a stage model. In preparing a picture's name, the speaker performs a chain of specific operations. They are, in this order, computing the visual percept, activating an appropriate lexical concept, selecting the target word from the mental lexicon, phonological encoding, phonetic encoding, and initiation of articulation. The time windows for each of these operations are reasonably well known and could be related to the peak activity of dipole sources in the individual magnetic response patterns. The analyses showed a clear progression over these time windows from early occipital activation, via parietal and temporal to frontal activation. The major specific findings were that (1) a region in the left posterior temporal lobe, agreeing with the location of Wernicke's area, showed prominent activation starting about 200 msec after picture onset and peaking at about 350 msec, (i.e., within the stage of phonological encoding), and (2) a consistent activation was found in the right parietal cortex, peaking at about 230 msec after picture onset, thus preceding and partly overlapping with the left temporal response. An interpretation in terms of the management of visual attention is proposed.
  • Levelt, W. J. M. (1992). Fairness in reviewing: A reply to O'Connell. Journal of Psycholinguistic Research, 21, 401-403.
  • Levelt, W. J. M. (1982). Het lineariseringsprobleem van de spreker. Tijdschrift voor Taal- en Tekstwetenschap (TTT), 2(1), 1-15.
  • Levelt, W. J. M., & Schiller, N. O. (1998). Is the syllable frame stored? [Commentary on the BBS target article 'The frame/content theory of evolution of speech production' by Peter F. MacNeilage]. Behavioral and Brain Sciences, 21, 520.

    Abstract

    This commentary discusses whether abstract metrical frames are stored. For stress-assigning languages (e.g., Dutch and English), which have a dominant stress pattern, metrical frames are stored only for words that deviate from the default stress pattern. The majority of the words in these languages are produced without retrieving any independent syllabic or metrical frame.
  • Levelt, W. J. M. (1992). Sprachliche Musterbildung und Mustererkennung. Nova Acta Leopoldina NF, 67(281), 357-370.
  • Levelt, W. J. M., & Kelter, S. (1982). Surface form and memory in question answering. Cognitive Psychology, 14, 78-106. doi:10.1016/0010-0285(82)90005-6.

    Abstract

    Speakers tend to repeat materials from previous talk. This tendency is experimentally established and manipulated in various question-answering situations. It is shown that a question's surface form can affect the format of the answer given, even if this form has little semantic or conversational consequence, as in the pair Q: "(At) what time do you close?" A: "(At) five o'clock." Answerers tend to match the utterance to the prepositional (nonprepositional) form of the question. This "correspondence effect" may diminish or disappear when, following the question, additional verbal material is presented to the answerer. The experiments show that neither the articulatory buffer nor long-term memory is normally involved in this retention of recent speech. Retaining recent speech in working memory may fulfill a variety of functions for speaker and listener, among them the correct production and interpretation of surface anaphora. Reusing recent materials may, moreover, be more economical than regenerating speech anew from a semantic base, and thus contribute to fluency. But the realization of this strategy requires a production system in which linguistic formulation can take place relatively independent of, and parallel to, conceptual planning.
  • Levelt, W. J. M. (1982). Science policy: Three recent idols, and a goddess. IPO Annual Progress Report, 17, 32-35.
  • Levelt, W. J. M. (1992). The perceptual loop theory not disconfirmed: A reply to MacKay. Consciousness and Cognition, 1, 226-230. doi:10.1016/1053-8100(92)90062-F.

    Abstract

    In his paper, MacKay reviews his Node Structure theory of error detection, but precedes it with a critical discussion of the Perceptual Loop theory of self-monitoring proposed in Levelt (1983, 1989). The present commentary is concerned with this latter critique and shows that there are more than casual problems with MacKay’s argumentation.
  • Levelt, W. J. M. (1998). The genetic perspective in psycholinguistics, or: Where do spoken words come from? Journal of Psycholinguistic Research, 27(2), 167-180. doi:10.1023/A:1023245931630.

    Abstract

    The core issue in the 19th-century sources of psycholinguistics was the question, "Where does language come from?" This genetic perspective unified the study of the ontogenesis, the phylogenesis, the microgenesis, and to some extent the neurogenesis of language. This paper makes the point that this original perspective is still a valid and attractive one. It is exemplified by a discussion of the genesis of spoken words.
  • Levelt, W. J. M. (1982). Zelfcorrecties in het spreekproces. KNAW: Mededelingen van de afdeling letterkunde, nieuwe reeks, 45(8), 215-228.
  • Levinson, S. C., Kita, S., Haun, D. B. M., & Rasch, B. H. (2002). Returning the tables: Language affects spatial reasoning. Cognition, 84(2), 155-188. doi:10.1016/S0010-0277(02)00045-8.

    Abstract

    Li and Gleitman (Turning the tables: language and spatial reasoning. Cognition, in press) seek to undermine a large-scale cross-cultural comparison of spatial language and cognition which claims to have demonstrated that language and conceptual coding in the spatial domain covary (see, for example, Space in language and cognition: explorations in linguistic diversity. Cambridge: Cambridge University Press, in press; Language 74 (1998) 557): the most plausible interpretation is that different languages induce distinct conceptual codings. Arguing against this, Li and Gleitman attempt to show that in an American student population they can obtain any of the relevant conceptual codings just by varying spatial cues, holding language constant. They then argue that our findings are better interpreted in terms of ecologically-induced distinct cognitive styles reflected in language. Linguistic coding, they argue, has no causal effects on non-linguistic thinking – it simply reflects antecedently existing conceptual distinctions. We here show that Li and Gleitman did not make a crucial distinction between frames of spatial reference relevant to our line of research. We report a series of experiments designed to show that they have, as a consequence, misinterpreted the results of their own experiments, which are in fact in line with our hypothesis. Their attempts to reinterpret the large cross-cultural study, and to enlist support from animal and infant studies, fail for the same reasons. We further try to discern exactly what theory drives their presumption that language can have no cognitive efficacy, and conclude that their position is undermined by a wide range of considerations.
  • Levinson, S. C. (2002). Time for a linguistic anthropology of time. Current Anthropology, 43(4), S122-S123. doi:10.1086/342214.
  • Levinson, S. C. (1992). Primer for the field investigation of spatial description and conception. Pragmatics, 2(1), 5-47.
  • Levinson, S. C., & Burenhult, N. (2009). Semplates: A new concept in lexical semantics? Language, 85, 153-174. doi:10.1353/lan.0.0090.

    Abstract

    This short report draws attention to an interesting kind of configuration in the lexicon that seems to have escaped theoretical or systematic descriptive attention. These configurations, which we dub SEMPLATES, consist of an abstract structure or template, which is recurrently instantiated in a number of lexical sets, typically of different form classes. A number of examples from different language families are adduced, and generalizations made about the nature of semplates, which are contrasted to other, perhaps similar, phenomena.
  • Levinson, S. C. (1998). Studying spatial conceptualization across cultures: Anthropology and cognitive science. Ethos, 26(1), 7-24. doi:10.1525/eth.1998.26.1.7.

    Abstract

    Philosophers, psychologists, and linguists have argued that spatial conception is pivotal to cognition in general, providing a general, egocentric, and universal framework for cognition as well as metaphors for conceptualizing many other domains. But in an aboriginal community in Northern Queensland, a system of cardinal directions informs not only language, but also memory for arbitrary spatial arrays and directions. This work suggests that fundamental cognitive parameters, like the system of coding spatial locations, can vary cross-culturally, in line with the language spoken by a community. This opens up the prospect of a fruitful dialogue between anthropology and the cognitive sciences on the complex interaction between cultural and universal factors in the constitution of mind.
  • Levinson, S. C. (2023). Gesture, spatial cognition and the evolution of language. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 378(1875): 20210481. doi:10.1098/rstb.2021.0481.

    Abstract

    Human communication displays a striking contrast between the diversity of languages and the universality of the principles underlying their use in conversation. Despite the importance of this interactional base, it is not obvious that it heavily imprints the structure of languages. However, a deep-time perspective suggests that early hominin communication was gestural, in line with all the other Hominidae. This gestural phase of early language development seems to have left its traces in the way in which spatial concepts, implemented in the hippocampus, provide organizing principles at the heart of grammar.
  • Levshina, N., Namboodiripad, S., Allassonnière-Tang, M., Kramer, M., Talamo, L., Verkerk, A., Wilmoth, S., Garrido Rodriguez, G., Gupton, T. M., Kidd, E., Liu, Z., Naccarato, C., Nordlinger, R., Panova, A., & Stoynova, N. (2023). Why we need a gradient approach to word order. Linguistics, 61(4), 825-883. doi:10.1515/ling-2021-0098.

    Abstract

    This article argues for a gradient approach to word order, which treats word order preferences, both within and across languages, as a continuous variable. Word order variability should be regarded as a basic assumption, rather than as something exceptional. Although this approach follows naturally from the emergentist usage-based view of language, we argue that it can be beneficial for all frameworks and linguistic domains, including language acquisition, processing, typology, language contact, language evolution and change, and formal approaches. Gradient approaches have been very fruitful in some domains, such as language processing, but their potential is not fully realized yet. This may be due to practical reasons. We discuss the most pressing methodological challenges in corpus-based and experimental research of word order and propose some practical solutions.
  • Lewis, A. G., Schoffelen, J.-M., Bastiaansen, M., & Schriefers, H. (2023). Is beta in agreement with the relatives? Using relative clause sentences to investigate MEG beta power dynamics during sentence comprehension. Psychophysiology, 60(10): e14332. doi:10.1111/psyp.14332.

    Abstract

    There remains some debate about whether beta power effects observed during sentence comprehension reflect ongoing syntactic unification operations (beta-syntax hypothesis), or instead reflect maintenance or updating of the sentence-level representation (beta-maintenance hypothesis). In this study, we used magnetoencephalography to investigate beta power neural dynamics while participants read relative clause sentences that were initially ambiguous between a subject- or an object-relative reading. An additional condition included a grammatical violation at the disambiguation point in the relative clause sentences. The beta-maintenance hypothesis predicts a decrease in beta power at the disambiguation point for unexpected (and less preferred) object-relative clause sentences and grammatical violations, as both signal a need to update the sentence-level representation. While the beta-syntax hypothesis also predicts a beta power decrease for grammatical violations due to a disruption of syntactic unification operations, it instead predicts an increase in beta power for the object-relative clause condition because syntactic unification at the point of disambiguation becomes more demanding. We observed decreased beta power for both the agreement violation and object-relative clause conditions in typical left hemisphere language regions, which provides compelling support for the beta-maintenance hypothesis. Mid-frontal theta power effects were also present for grammatical violations and object-relative clause sentences, suggesting that violations and unexpected sentence interpretations are registered as conflicts by the brain's domain-general error detection system.

    Additional information

    data
  • Liljeström, M., Hulten, A., Parkkonen, L., & Salmelin, R. (2009). Comparing MEG and fMRI views to naming actions and objects. Human Brain Mapping, 30, 1845-1856. doi:10.1002/hbm.20785.

    Abstract

    Most neuroimaging studies are performed using one imaging method only, either functional magnetic resonance imaging (fMRI), electroencephalography (EEG), or magnetoencephalography (MEG). Information on both location and timing has been sought by recording fMRI and EEG simultaneously, or MEG and fMRI in separate sessions. Such approaches assume similar active areas whether detected via hemodynamic or electrophysiological signatures. Direct comparisons, after independent analysis of data from each imaging modality, have been conducted primarily on low-level sensory processing. Here, we report MEG (timing and location) and fMRI (location) results in 11 subjects when they named pictures that depicted an action or an object. The experimental design was exactly the same for the two imaging modalities. The MEG data were analyzed with two standard approaches: a set of equivalent current dipoles and a distributed minimum norm estimate. The fMRI blood-oxygen-level-dependent (BOLD) data were subjected to the usual random-effect contrast analysis. At the group level, MEG and fMRI data showed fairly good convergence, with both overall activation patterns and task effects localizing to comparable cortical regions. There were some systematic discrepancies, however, and the correspondence was less compelling in the individual subjects. The present analysis should be helpful in reconciling results of fMRI and MEG studies on high-level cognitive functions.
  • Lingwood, J., Lampropoulou, S., De Bezena, C., Billington, J., & Rowland, C. F. (2023). Children’s engagement and caregivers’ use of language-boosting strategies during shared book reading: A mixed methods approach. Journal of Child Language, 50(6), 1436-1458. doi:10.1017/S0305000922000290.

    Abstract

    For shared book reading to be effective for language development, the adult and child need to be highly engaged. The current paper adopted a mixed-methods approach to investigate caregivers’ language-boosting behaviours and children’s engagement during shared book reading. The results revealed more instances of joint attention and caregivers’ use of prompts during moments of higher engagement. However, instances of most language-boosting behaviours were similar across episodes of higher and lower engagement. Qualitative analysis assessing the link between children’s engagement and caregivers’ use of speech acts revealed that speech acts do seem to contribute to high engagement, in combination with other aspects of the interaction.
  • Liszkowski, U., Schäfer, M., Carpenter, M., & Tomasello, M. (2009). Prelinguistic infants, but not chimpanzees, communicate about absent entities. Psychological Science, 20, 654-660.

    Abstract

    One of the defining features of human language is displacement, the ability to make reference to absent entities. Here we show that prelinguistic, 12-month-old infants already can use a nonverbal pointing gesture to make reference to absent entities. We also show that chimpanzees—who can point for things they want humans to give them—do not point to refer to absent entities in the same way. These results demonstrate that the ability to communicate about absent but mutually known entities depends not on language, but rather on deeper social-cognitive skills that make acts of linguistic reference possible in the first place. These nonlinguistic skills for displaced reference emerged apparently only after humans' divergence from great apes some 6 million years ago.
  • Lumaca, M., Bonetti, L., Brattico, E., Baggio, G., Ravignani, A., & Vuust, P. (2023). High-fidelity transmission of auditory symbolic material is associated with reduced right–left neuroanatomical asymmetry between primary auditory regions. Cerebral Cortex, 33(11), 6902-6919. doi:10.1093/cercor/bhad009.

    Abstract

    The intergenerational stability of auditory symbolic systems, such as music, is thought to rely on brain processes that allow the faithful transmission of complex sounds. Little is known about the functional and structural aspects of the human brain which support this ability, with a few studies pointing to the bilateral organization of auditory networks as a putative neural substrate. Here, we further tested this hypothesis by examining the role of left–right neuroanatomical asymmetries between auditory cortices. We collected neuroanatomical images from a large sample of participants (nonmusicians) and analyzed them with Freesurfer’s surface-based morphometry method. Weeks after scanning, the same individuals participated in a laboratory experiment that simulated music transmission: the signaling games. We found that high accuracy in the intergenerational transmission of an artificial tone system was associated with reduced rightward asymmetry of cortical thickness in Heschl’s sulcus. Our study suggests that the high-fidelity copying of melodic material may rely on the extent to which computational neuronal resources are distributed across hemispheres. Our data further support the role of interhemispheric brain organization in the cultural transmission and evolution of auditory symbolic systems.
  • Maess, B., Friederici, A. D., Damian, M., Meyer, A. S., & Levelt, W. J. M. (2002). Semantic category interference in overt picture naming: Sharpening current density localization by PCA. Journal of Cognitive Neuroscience, 14(3), 455-462. doi:10.1162/089892902317361967.

    Abstract

    The study investigated the neuronal basis of the retrieval of words from the mental lexicon. The semantic category interference effect was used to locate lexical retrieval processes in time and space. This effect reflects the finding that, for overt naming, volunteers are slower when naming pictures out of a sequence of items from the same semantic category than from different categories. Participants named pictures blockwise either in the context of same- or mixed-category items while the brain response was registered using magnetoencephalography (MEG). Fifteen out of 20 participants showed longer response latencies in the same-category compared to the mixed-category condition. Event-related MEG signals for the participants demonstrating the interference effect were submitted to a current source density (CSD) analysis. As a new approach, a principal component analysis was applied to decompose the grand average CSD distribution into spatial subcomponents (factors). The spatial factor indicating left temporal activity revealed significantly different activation for the same-category compared to the mixed-category condition in the time window between 150 and 225 msec post picture onset. These findings indicate a major involvement of the left temporal cortex in the semantic interference effect. As this effect has been shown to take place at the level of lexical selection, the data suggest that the left temporal cortex supports processes of lexical retrieval during production.
  • Majid, A. (2002). Frames of reference and language concepts. Trends in Cognitive Sciences, 6(12), 503-504. doi:10.1016/S1364-6613(02)02024-7.
  • Mak, W. M., Vonk, W., & Schriefers, H. (2002). The influence of animacy on relative clause processing. Journal of Memory and Language, 47(1), 50-68. doi:10.1006/jmla.2001.2837.

    Abstract

    In previous research it has been shown that subject relative clauses are easier to process than object relative clauses. Several theories have been proposed that explain the difference on the basis of different theoretical perspectives. However, previous research tested relative clauses only with animate protagonists. In a corpus study of Dutch and German newspaper texts, we show that animacy is an important determinant of the distribution of subject and object relative clauses. In two experiments in Dutch, in which the animacy of the object of the relative clause is varied, no difference in reading time is obtained between subject and object relative clauses when the object is inanimate. The experiments show that animacy influences the processing difficulty of relative clauses. These results can only be accounted for by current major theories of relative clause processing when additional assumptions are introduced, and at the same time they show that a semantically driven analysis can be considered a serious alternative.
  • Mak, M., Faber, M., & Willems, R. M. (2023). Different kinds of simulation during literary reading: Insights from a combined fMRI and eye-tracking study. Cortex, 162, 115-135. doi:10.1016/j.cortex.2023.01.014.

    Abstract

    Mental simulation is an important aspect of narrative reading. In a previous study, we found that gaze durations are differentially impacted by different kinds of mental simulation. Motor simulation, perceptual simulation, and mentalizing as elicited by literary short stories influenced eye movements in distinguishable ways (Mak & Willems, 2019). In the current study, we investigated the existence of a common neural locus for these different kinds of simulation. We additionally investigated whether individual differences during reading, as indexed by the eye movements, are reflected in domain-specific activations in the brain. We found a variety of brain areas activated by simulation-eliciting content, both modality-specific brain areas and a general simulation area. Individual variation in percent signal change in activated areas was related to measures of story appreciation as well as personal characteristics (i.e., transportability, perspective taking). Taken together, these findings suggest that mental simulation is supported by both domain-specific processes grounded in previous experiences, and by the neural mechanisms that underlie higher-order language processing (e.g., situation model building, event indexing, integration).

    Additional information

    figures localizer tasks appendix C1
  • Mamus, E., Speed, L. J., Rissman, L., Majid, A., & Özyürek, A. (2023). Lack of visual experience affects multimodal language production: Evidence from congenitally blind and sighted people. Cognitive Science, 47(1): e13228. doi:10.1111/cogs.13228.

    Abstract

    The human experience is shaped by information from different perceptual channels, but it is still debated whether and how differential experience influences language use. To address this, we compared congenitally blind, blindfolded, and sighted people's descriptions of the same motion events experienced auditorily by all participants (i.e., via sound alone) and conveyed in speech and gesture. Comparison of blind and sighted participants to blindfolded participants helped us disentangle the effects of a lifetime experience of being blind versus the task-specific effects of experiencing a motion event by sound alone. Compared to sighted people, blind people's speech focused more on path and less on manner of motion, and encoded paths in a more segmented fashion using more landmarks and path verbs. Gestures followed the speech, such that blind people pointed to landmarks more and depicted manner less than sighted people. This suggests that visual experience affects how people express spatial events in multimodal language and that blindness may enhance sensitivity to paths of motion due to changes in event construal. These findings have implications for the claims that language processes are deeply rooted in our sensory experiences.
  • Mamus, E., Speed, L., Özyürek, A., & Majid, A. (2023). The effect of input sensory modality on the multimodal encoding of motion events. Language, Cognition and Neuroscience, 38(5), 711-723. doi:10.1080/23273798.2022.2141282.

    Abstract

    Each sensory modality has different affordances: vision has higher spatial acuity than audition, whereas audition has better temporal acuity. This may have consequences for the encoding of events and its subsequent multimodal language production—an issue that has received relatively little attention to date. In this study, we compared motion events presented as audio-only, visual-only, or multimodal (visual + audio) input and measured speech and co-speech gesture depicting path and manner of motion in Turkish. Input modality affected speech production. Speakers with audio-only input produced more path descriptions and fewer manner descriptions in speech compared to speakers who received visual input. In contrast, the type and frequency of gestures did not change across conditions. Path-only gestures dominated throughout. Our results suggest that while speech is more susceptible to auditory vs. visual input in encoding aspects of motion events, gesture is less sensitive to such differences.

    Additional information

    Supplemental material
  • Manhardt, F., Brouwer, S., Van Wijk, E., & Özyürek, A. (2023). Word order preference in sign influences speech in hearing bimodal bilinguals but not vice versa: Evidence from behavior and eye-gaze. Bilingualism: Language and Cognition, 26(1), 48-61. doi:10.1017/S1366728922000311.

    Abstract

    We investigated cross-modal influences between speech and sign in hearing bimodal bilinguals, proficient in a spoken and a sign language, and its consequences on visual attention during message preparation using eye-tracking. We focused on spatial expressions in which sign languages, unlike spoken languages, have a modality-driven preference to mention grounds (big objects) prior to figures (smaller objects). We compared hearing bimodal bilinguals’ spatial expressions and visual attention in Dutch and Dutch Sign Language (N = 18) to those of their hearing non-signing (N = 20) and deaf signing peers (N = 18). In speech, hearing bimodal bilinguals expressed more ground-first descriptions and fixated grounds more than hearing non-signers, showing influence from sign. In sign, they used as many ground-first descriptions as deaf signers and fixated grounds equally often, demonstrating no influence from speech. Cross-linguistic influence of word order preference and visual attention in hearing bimodal bilinguals appears to be one-directional, modulated by modality-driven differences.
  • Marlow, A. J., Fisher, S. E., Richardson, A. J., Francks, C., Talcott, J. B., Monaco, A. P., Stein, J. F., & Cardon, L. R. (2002). Investigation of quantitative measures related to reading disability in a large sample of sib-pairs from the UK. Behavior Genetics, 31(2), 219-230. doi:10.1023/A:1010209629021.

    Abstract

    We describe a family-based sample of individuals with reading disability collected as part of a quantitative trait loci (QTL) mapping study. Eighty-nine nuclear families (135 independent sib-pairs) were identified through a single proband using a traditional discrepancy score of predicted/actual reading ability and a known family history. Eight correlated psychometric measures were administered to each sibling, including single word reading, spelling, similarities, matrices, spoonerisms, nonword and irregular word reading, and a pseudohomophone test. Summary statistics for each measure showed a reduced mean for the probands compared to the co-sibs, which in turn was lower than that of the population. This partial co-sib regression back to the mean indicates that the measures are influenced by familial factors and therefore, may be suitable for a mapping study. The variance of each of the measures remained largely unaffected, which is reassuring for the application of a QTL approach. Multivariate genetic analysis carried out to explore the relationship between the measures identified a common factor between the reading measures that accounted for 54% of the variance. Finally the familiality estimates (range 0.32–0.73) obtained for the reading measures including the common factor (0.68) supported their heritability. These findings demonstrate the viability of this sample for QTL mapping, and will assist in the interpretation of any subsequent linkage findings in an ongoing genome scan.
  • Martin, A. E., & McElree, B. (2009). Memory operations that support language comprehension: Evidence from verb-phrase ellipsis. Journal of Experimental Psychology: Learning, Memory, and Cognition, 35(5), 1231-1239. doi:10.1037/a0016271.

    Abstract

    Comprehension of verb-phrase ellipsis (VPE) requires reevaluation of recently processed constituents, which often necessitates retrieval of information about the elided constituent from memory. A. E. Martin and B. McElree (2008) argued that representations formed during comprehension are content addressable and that VPE antecedents are retrieved from memory via a cue-dependent direct-access pointer rather than via a search process. This hypothesis was further tested by manipulating the location of interfering material—either before the onset of the antecedent (proactive interference; PI) or intervening between antecedent and ellipsis site (retroactive interference; RI). The speed–accuracy tradeoff procedure was used to measure the time course of VPE processing. The location of the interfering material affected VPE comprehension accuracy: RI conditions engendered lower accuracy than PI conditions. Crucially, location did not affect the speed of processing VPE, which is inconsistent with both forward and backward search mechanisms. The observed time-course profiles are consistent with the hypothesis that VPE antecedents are retrieved via a cue-dependent direct-access operation. (PsycINFO Database Record (c) 2016 APA, all rights reserved)
  • Maskalenka, K., Alagöz, G., Krueger, F., Wright, J., Rostovskaya, M., Nakhuda, A., Bendall, A., Krueger, C., Walker, S., Scally, A., & Rugg-Gunn, P. J. (2023). NANOGP1, a tandem duplicate of NANOG, exhibits partial functional conservation in human naïve pluripotent stem cells. Development, 150(2): dev201155. doi:10.1242/dev.201155.

    Abstract

    Gene duplication events can drive evolution by providing genetic material for new gene functions, and they create opportunities for diverse developmental strategies to emerge between species. To study the contribution of duplicated genes to human early development, we examined the evolution and function of NANOGP1, a tandem duplicate of the transcription factor NANOG. We found that NANOGP1 and NANOG have overlapping but distinct expression profiles, with high NANOGP1 expression restricted to early epiblast cells and naïve-state pluripotent stem cells. Sequence analysis and epitope-tagging revealed that NANOGP1 is protein coding with an intact homeobox domain. The duplication that created NANOGP1 occurred earlier in primate evolution than previously thought and has been retained only in great apes, whereas Old World monkeys have disabled the gene in different ways, including homeodomain point mutations. NANOGP1 is a strong inducer of naïve pluripotency; however, unlike NANOG, it is not required to maintain the undifferentiated status of human naïve pluripotent cells. By retaining expression, sequence and partial functional conservation with its ancestral copy, NANOGP1 exemplifies how gene duplication and subfunctionalisation can contribute to transcription factor activity in human pluripotency and development.
  • Massaro, D. W., & Jesse, A. (2009). Read my lips: Speech distortions in musical lyrics can be overcome (slightly) by facial information. Speech Communication, 51(7), 604-621. doi:10.1016/j.specom.2008.05.013.

    Abstract

    Understanding the lyrics of many contemporary songs is difficult, and an earlier study [Hidalgo-Barnes, M., Massaro, D.W., 2007. Read my lips: an animated face helps communicate musical lyrics. Psychomusicology 19, 3–12] showed a benefit for lyrics recognition when seeing a computer-animated talking head (Baldi) mouthing the lyrics along with hearing the singer. However, the contribution of visual information was relatively small compared to what is usually found for speech. In the current experiments, our goal was to determine why the face appears to contribute less when aligned with sung lyrics than when aligned with normal speech presented in noise. The first experiment compared the contribution of the talking head with the originally sung lyrics versus the case when it was aligned with the Festival text-to-speech synthesis (TtS) spoken at the original duration of the song’s lyrics. A small and similar influence of the face was found in both conditions. In the three experiments, we compared the presence of the face when the durations of the TtS were equated with the duration of the original musical lyrics to the case when the lyrics were read with typical TtS durations and this speech was embedded in noise. The results indicated that the unusual temporally distorted durations of musical lyrics decrease the contribution of the visible speech from the face.
  • Mauner, G., Melinger, A., Koenig, J.-P., & Bienvenue, B. (2002). When is schematic participant information encoded: Evidence from eye-monitoring. Journal of Memory and Language, 47(3), 386-406. doi:10.1016/S0749-596X(02)00009-8.

    Abstract

    Two eye-monitoring studies examined when unexpressed schematic participant information specified by verbs is used during sentence processing. Experiment 1 compared the processing of sentences with passive and intransitive verbs hypothesized to introduce or not introduce, respectively, an agent when their main clauses were preceded by either agent-dependent rationale clauses or adverbial clause controls. While there were no differences in the processing of passive clauses following rationale and control clauses, intransitive verb clauses elicited anomaly effects following agent-dependent rationale clauses. To determine whether the source of this immediately available schematic participant information is lexically specified or instead derived solely from conceptual sources associated with verbs, Experiment 2 compared the processing of clauses with passive and middle verbs following rationale clauses (e.g., To raise money for the charity, the vase was/had sold quickly…). Although both passive and middle verb forms denote situations that logically require an agent, middle verbs, which by hypothesis do not lexically specify an agent, elicited longer processing times than passive verbs in measures of early processing. These results demonstrate that participants access and interpret lexically encoded schematic participant information in the process of recognizing a verb.
  • Mazzini, S., Holler, J., & Drijvers, L. (2023). Studying naturalistic human communication using dual-EEG and audio-visual recordings. STAR Protocols, 4(3): 102370. doi:10.1016/j.xpro.2023.102370.

    Abstract

    We present a protocol to study naturalistic human communication using dual-EEG and audio-visual recordings. We describe preparatory steps for data collection including setup preparation, experiment design, and piloting. We then describe the data collection process in detail which consists of participant recruitment, experiment room preparation, and data collection. We also outline the kinds of research questions that can be addressed with the current protocol, including several analysis possibilities, from conversational to advanced time-frequency analyses.
    For complete details on the use and execution of this protocol, please refer to Drijvers and Holler (2022).
  • McConnell, K. (2023). Individual Differences in Holistic and Compositional Language Processing. Journal of Cognition, 6. doi:10.5334/joc.283.

    Abstract

    Individual differences in cognitive abilities are ubiquitous across the spectrum of proficient language users. Although speakers differ with regard to their memory capacity, ability for inhibiting distraction, and ability to shift between different processing levels, comprehension is generally successful. However, this does not mean it is identical across individuals; listeners and readers may rely on different processing strategies to exploit distributional information in the service of efficient understanding. In the following psycholinguistic reading experiment, we investigate potential sources of individual differences in the processing of co-occurring words. Participants read modifier-noun bigrams like absolute silence in a self-paced reading task. Backward transition probability (BTP) between the two lexemes was used to quantify the prominence of the bigram as a whole in comparison to the frequency of its parts. Of five individual difference measures (processing speed, verbal working memory, cognitive inhibition, global-local scope shifting, and personality), two proved to be significantly associated with the effect of BTP on reading times. Participants who could inhibit a distracting global environment in order to more efficiently retrieve a single part and those that preferred the local level in the shifting task showed greater effects of the co-occurrence probability of the parts. We conclude that some participants are more likely to retrieve bigrams via their parts and their co-occurrence statistics whereas others more readily retrieve the two words together as a single chunked unit.
  • McLean, B., Dunn, M., & Dingemanse, M. (2023). Two measures are better than one: Combining iconicity ratings and guessing experiments for a more nuanced picture of iconicity in the lexicon. Language and Cognition, 15(4), 719-739. doi:10.1017/langcog.2023.9.

    Abstract

    Iconicity in language is receiving increased attention from many fields, but our understanding of iconicity is only as good as the measures we use to quantify it. We collected iconicity measures for 304 Japanese words from English-speaking participants, using rating and guessing tasks. The words included ideophones (structurally marked depictive words) along with regular lexical items from similar semantic domains (e.g., fuwafuwa ‘fluffy’, yawarakai ‘soft’). The two measures correlated, speaking to their validity. However, ideophones received consistently higher iconicity ratings than other items, even when guessed at the same accuracies, suggesting the rating task is more sensitive to cues like structural markedness that frame words as iconic. These cues did not always guide participants to the meanings of ideophones in the guessing task, but they did make them more confident in their guesses, even when they were wrong. Consistently poor guessing results reflect the role different experiences play in shaping construals of iconicity. Using multiple measures in tandem allows us to explore the interplay between iconicity and these external factors. To facilitate this, we introduce a reproducible workflow for creating rating and guessing tasks from standardised wordlists, while also making improvements to the robustness, sensitivity and discriminability of previous approaches.
  • McQueen, J. M., Jesse, A., & Norris, D. (2009). No lexical–prelexical feedback during speech perception or: Is it time to stop playing those Christmas tapes? Journal of Memory and Language, 61, 1-18. doi:10.1016/j.jml.2009.03.002.

    Abstract

    The strongest support for feedback in speech perception comes from evidence of apparent lexical influence on prelexical fricative-stop compensation for coarticulation. Lexical knowledge (e.g., that the ambiguous final fricative of Christma? should be [s]) apparently influences perception of following stops. We argue that all such previous demonstrations can be explained without invoking lexical feedback. In particular, we show that one demonstration [Magnuson, J. S., McMurray, B., Tanenhaus, M. K., & Aslin, R. N. (2003). Lexical effects on compensation for coarticulation: The ghost of Christmash past. Cognitive Science, 27, 285–298] involved experimentally-induced biases (from 16 practice trials) rather than feedback. We found that the direction of the compensation effect depended on whether practice stimuli were words or nonwords. When both were used, there was no lexically-mediated compensation. Across experiments, however, there were lexical effects on fricative identification. This dissociation (lexical involvement in the fricative decisions but not in the following stop decisions made on the same trials) challenges interactive models in which feedback should cause both effects. We conclude that the prelexical level is sensitive to experimentally-induced phoneme-sequence biases, but that there is no feedback during speech perception.
  • McQueen, J. M., Jesse, A., & Mitterer, H. (2023). Lexically mediated compensation for coarticulation still as elusive as a white christmash. Cognitive Science, 47(9): e13342. doi:10.1111/cogs.13342.

    Abstract

    Luthra, Peraza-Santiago, Beeson, Saltzman, Crinnion, and Magnuson (2021) present data from the lexically mediated compensation for coarticulation paradigm that they claim provides conclusive evidence in favor of top-down processing in speech perception. We argue here that this evidence does not support that conclusion. The findings are open to alternative explanations, and we give data in support of one of them (that there is an acoustic confound in the materials). Lexically mediated compensation for coarticulation thus remains elusive, while prior data from the paradigm instead challenge the idea that there is top-down processing in online speech recognition.

    Additional information

    supplementary materials
  • Mead, S., Poulter, M., Uphill, J., Beck, J., Whitfield, J., Webb, T. E., Campbell, T., Adamson, G., Deriziotis, P., Tabrizi, S. J., Hummerich, H., Verzilli, C., Alpers, M. P., Whittaker, J. C., & Collinge, J. (2009). Genetic risk factors for variant Creutzfeldt-Jakob disease: A genome-wide association study. Lancet Neurology, 8(1), 57-66. doi:10.1016/S1474-4422(08)70265-5.

    Abstract

    BACKGROUND: Human and animal prion diseases are under genetic control, but apart from PRNP (the gene that encodes the prion protein), we understand little about human susceptibility to bovine spongiform encephalopathy (BSE) prions, the causal agent of variant Creutzfeldt-Jakob disease (vCJD). METHODS: We did a genome-wide association study of the risk of vCJD and tested for replication of our findings in samples from many categories of human prion disease (929 samples) and control samples from the UK and Papua New Guinea (4254 samples), including controls in the UK who were genotyped by the Wellcome Trust Case Control Consortium. We also did follow-up analyses of the genetic control of the clinical phenotype of prion disease and analysed candidate gene expression in a mouse cellular model of prion infection. FINDINGS: The PRNP locus was strongly associated with risk across several markers and all categories of prion disease (best single SNP [single nucleotide polymorphism] association in vCJD p=2.5 x 10(-17); best haplotypic association in vCJD p=1 x 10(-24)). Although the main contribution to disease risk was conferred by PRNP polymorphic codon 129, another nearby SNP conferred increased risk of vCJD. In addition to PRNP, one technically validated SNP association upstream of RARB (the gene that encodes retinoic acid receptor beta) had nominal genome-wide significance (p=1.9 x 10(-7)). A similar association was found in a small sample of patients with iatrogenic CJD (p=0.030) but not in patients with sporadic CJD (sCJD) or kuru. In cultured cells, retinoic acid regulates the expression of the prion protein. We found an association with acquired prion disease, including vCJD (p=5.6 x 10(-5)), kuru incubation time (p=0.017), and resistance to kuru (p=2.5 x 10(-4)), in a region upstream of STMN2 (the gene that encodes SCG10). The risk genotype was not associated with sCJD but conferred an earlier age of onset. Furthermore, expression of Stmn2 was reduced 30-fold post-infection in a mouse cellular model of prion disease. INTERPRETATION: The polymorphic codon 129 of PRNP was the main genetic risk factor for vCJD; however, additional candidate loci have been identified, which justifies functional analyses of these biological pathways in prion disease.
  • Melinger, A. (2002). Foot structure and accent in Seneca. International Journal of American Linguistics, 68(3), 287-315.

    Abstract

    Argues that the Seneca accent system can be explained more simply and naturally if the foot structure is reanalyzed as trochaic. The position of the accent is determined by the position and structure of the accented syllable and by the position and structure of the post-tonic syllable; these pairs of syllables interact to predict where accent is assigned in different iambic feet.
  • Menenti, L., Petersson, K. M., Scheeringa, R., & Hagoort, P. (2009). When elephants fly: Differential sensitivity of right and left inferior frontal gyri to discourse and world knowledge. Journal of Cognitive Neuroscience, 21, 2358-2368. doi:10.1162/jocn.2008.21163.

    Abstract

    Both local discourse and world knowledge are known to influence sentence processing. We investigated how these two sources of information conspire in language comprehension. Two types of critical sentences, correct and world knowledge anomalies, were preceded by either a neutral or a local context. The latter made the world knowledge anomalies more acceptable or plausible. We predicted that the effect of world knowledge anomalies would be weaker for the local context. World knowledge effects have previously been observed in the left inferior frontal region (Brodmann's area 45/47). In the current study, an effect of world knowledge was present in this region in the neutral context. We also observed an effect in the right inferior frontal gyrus, which was more sensitive to the discourse manipulation than the left inferior frontal gyrus. In addition, the left angular gyrus reacted strongly to the degree of discourse coherence between the context and critical sentence. Overall, both world knowledge and the discourse context affect the process of meaning unification, but do so by recruiting partly different sets of brain areas.
  • Menon, S., Rosenberg, K., Graham, S. A., Ward, E. M., Taylor, M. E., Drickamer, K., & Leckband, D. E. (2009). Binding-site geometry and flexibility in DC-SIGN demonstrated with surface force measurements. Proceedings of the National Academy of Sciences of the United States of America, 106, 11524-11529. doi:10.1073/pnas.0901783106.

    Abstract

    The dendritic cell receptor DC-SIGN mediates pathogen recognition by binding to glycans characteristic of pathogen surfaces, including those found on HIV. Clustering of carbohydrate-binding sites in the receptor tetramer is believed to be critical for targeting of pathogen glycans, but the arrangement of these sites remains poorly understood. Surface force measurements between apposed lipid bilayers displaying the extracellular domain of DC-SIGN and a neoglycolipid bearing an oligosaccharide ligand provide evidence that the receptor is in an extended conformation and that glycan docking is associated with a conformational change that repositions the carbohydrate-recognition domains during ligand binding. The results further show that the lateral mobility of membrane-bound ligands enhances the engagement of multiple carbohydrate-recognition domains in the receptor oligomer with appropriately spaced ligands. These studies highlight differences between pathogen targeting by DC-SIGN and receptors in which binding sites at fixed spacing bind to simple molecular patterns.

    Additional information

    Menon_2009_Supporting_Information.pdf
  • Meyer, A. S. (1992). Investigation of phonological encoding through speech error analyses: Achievements, limitations, and alternatives. Cognition, 42, 181-211. doi:10.1016/0010-0277(92)90043-H.

    Abstract

    Phonological encoding in language production can be defined as a set of processes generating utterance forms on the basis of semantic and syntactic information. Most evidence about these processes stems from analyses of sound errors. In section 1 of this paper, certain important results of these analyses are reviewed. Two prominent models of phonological encoding, which are mainly based on speech error evidence, are discussed in section 2. In section 3, limitations of speech error analyses are discussed, and it is argued that detailed and comprehensive models of phonological encoding cannot be derived solely on the basis of error analyses. As is argued in section 4, a new research strategy is required. Instead of using the properties of errors to draw inferences about the generation of correct word forms, future research should directly investigate the normal process of phonological encoding.
  • Meyer, A. S., & Bock, K. (1992). The tip-of-the-tongue phenomenon: Blocking or partial activation? Memory and Cognition, 20, 715-726.

    Abstract

    Tip-of-the-tongue states may represent the momentary unavailability of an otherwise accessible word or the weak activation of an otherwise inaccessible word. In three experiments designed to address these alternative views, subjects attempted to retrieve rare target words from their definitions. The definitions were followed by cues that were related to the targets in sound, by cues that were related in meaning, and by cues that were not related to the targets. Experiment 1 found that compared with unrelated cues, related cue words that were presented immediately after target definitions helped rather than hindered lexical retrieval, and that sound cues were more effective retrieval aids than meaning cues. Experiment 2 replicated these results when cues were presented after an initial target-retrieval attempt. These findings reverse an earlier result (Jones, 1989) that was reproduced in Experiment 3 and shown to stem from a small group of unusually difficult target definitions.
  • Meyer, A. S., Sleiderink, A. M., & Levelt, W. J. M. (1998). Viewing and naming objects: Eye movements during noun phrase production. Cognition, 66(2), B25-B33. doi:10.1016/S0010-0277(98)00009-2.

    Abstract

    Eye movements have been shown to reflect word recognition and language comprehension processes occurring during reading and auditory language comprehension. The present study examines whether the eye movements speakers make during object naming similarly reflect speech planning processes. In Experiment 1, speakers named object pairs saying, for instance, 'scooter and hat'. The objects were presented as ordinary line drawings or with partly deleted contours and had high or low frequency names. Contour type and frequency both significantly affected the mean naming latencies and the mean time spent looking at the objects. The frequency effects disappeared in Experiment 2, in which the participants categorized the objects instead of naming them. This suggests that the frequency effects of Experiment 1 arose during lexical retrieval. We conclude that eye movements during object naming indeed reflect linguistic planning processes and that the speakers' decision to move their eyes from one object to the next is contingent upon the retrieval of the phonological form of the object names.
  • Meyer, A. S. (2023). Timing in conversation. Journal of Cognition, 6(1), 1-17. doi:10.5334/joc.268.

    Abstract

    Turn-taking in everyday conversation is fast, with median latencies in corpora of conversational speech often reported to be under 300 ms. This seems like magic, given that experimental research on speech planning has shown that speakers need much more time to plan and produce even the shortest of utterances. This paper reviews how language scientists have combined linguistic analyses of conversations and experimental work to understand the skill of swift turn-taking and proposes a tentative solution to the riddle of fast turn-taking.
  • Mickan, A., McQueen, J. M., Brehm, L., & Lemhöfer, K. (2023). Individual differences in foreign language attrition: A 6-month longitudinal investigation after a study abroad. Language, Cognition and Neuroscience, 38(1), 11-39. doi:10.1080/23273798.2022.2074479.

    Abstract

    While recent laboratory studies suggest that the use of competing languages is a driving force in foreign language (FL) attrition (i.e. forgetting), research on “real” attriters has failed to demonstrate such a relationship. We addressed this issue in a large-scale longitudinal study, following German students throughout a study abroad in Spain and their first six months back in Germany. Monthly, percentage-based frequency of use measures enabled a fine-grained description of language use. L3 Spanish forgetting rates were indeed predicted by the quantity and quality of Spanish use, and correlated negatively with L1 German and positively with L2 English letter fluency. Attrition rates were furthermore influenced by prior Spanish proficiency, but not by motivation to maintain Spanish or non-verbal long-term memory capacity. Overall, this study highlights the importance of language use for FL retention and sheds light on the complex interplay between language use and other determinants of attrition.
  • Mishra, C., Offrede, T., Fuchs, S., Mooshammer, C., & Skantze, G. (2023). Does a robot’s gaze aversion affect human gaze aversion? Frontiers in Robotics and AI, 10: 1127626. doi:10.3389/frobt.2023.1127626.

    Abstract

    Gaze cues serve an important role in facilitating human conversations and are generally considered to be one of the most important non-verbal cues. Gaze cues are used to manage turn-taking, coordinate joint attention, regulate intimacy, and signal cognitive effort. In particular, it is well established that gaze aversion is used in conversations to avoid prolonged periods of mutual gaze. Given the numerous functions of gaze cues, there has been extensive work on modelling these cues in social robots. Researchers have also tried to identify the impact of robot gaze on human participants. However, the influence of robot gaze behavior on human gaze behavior has been less explored. We conducted a within-subjects user study (N = 33) to verify if a robot’s gaze aversion influenced human gaze aversion behavior. Our results show that participants tend to avert their gaze more when the robot keeps staring at them as compared to when the robot exhibits well-timed gaze aversions. We interpret our findings in terms of intimacy regulation: humans try to compensate for the robot’s lack of gaze aversion.
  • Mishra, C., Verdonschot, R. G., Hagoort, P., & Skantze, G. (2023). Real-time emotion generation in human-robot dialogue using large language models. Frontiers in Robotics and AI, 10: 1271610. doi:10.3389/frobt.2023.1271610.

    Abstract

    Affective behaviors enable social robots to not only establish better connections with humans but also serve as a tool for the robots to express their internal states. It has been well established that emotions are important to signal understanding in Human-Robot Interaction (HRI). This work aims to harness the power of Large Language Models (LLM) and proposes an approach to control the affective behavior of robots. By interpreting emotion appraisal as an Emotion Recognition in Conversation (ERC) task, we used GPT-3.5 to predict the emotion of a robot’s turn in real-time, using the dialogue history of the ongoing conversation. The robot signaled the predicted emotion using facial expressions. The model was evaluated in a within-subjects user study (N = 47) where the model-driven emotion generation was compared against conditions where the robot did not display any emotions and where it displayed incongruent emotions. The participants interacted with the robot by playing a card sorting game that was specifically designed to evoke emotions. The results indicated that the emotions were reliably generated by the LLM and the participants were able to perceive the robot’s emotions. A robot expressing congruent, model-driven facial emotion expressions was perceived as significantly more human-like and emotionally appropriate, and elicited a more positive impression. Participants also scored significantly better in the card sorting game when the robot displayed congruent facial expressions. From a technical perspective, the study shows that LLMs can be used to control the affective behavior of robots reliably in real-time. Additionally, our results could be used in devising novel human-robot interactions, making robots more effective in roles where emotional interaction is important, such as therapy, companionship, or customer service.
  • Mitterer, H., & McQueen, J. M. (2009). Foreign subtitles help but native-language subtitles harm foreign speech perception. PLoS ONE, 4(11), e7785. doi:10.1371/journal.pone.0007785.

    Abstract

    Understanding foreign speech is difficult, in part because of unusual mappings between sounds and words. It is known that listeners in their native language can use lexical knowledge (about how words ought to sound) to learn how to interpret unusual speech-sounds. We therefore investigated whether subtitles, which provide lexical information, support perceptual learning about foreign speech. Dutch participants, unfamiliar with Scottish and Australian regional accents of English, watched Scottish or Australian English videos with Dutch, English or no subtitles, and then repeated audio fragments of both accents. Repetition of novel fragments was worse after Dutch-subtitle exposure but better after English-subtitle exposure. Native-language subtitles appear to create lexical interference, but foreign-language subtitles assist speech learning by indicating which words (and hence sounds) are being spoken.
  • Mitterer, H., & McQueen, J. M. (2009). Processing reduced word-forms in speech perception using probabilistic knowledge about speech production. Journal of Experimental Psychology: Human Perception and Performance, 35(1), 244-263. doi:10.1037/a0012730.

    Abstract

    Two experiments examined how Dutch listeners deal with the effects of connected-speech processes, specifically those arising from word-final /t/ reduction (e.g., whether Dutch [tas] is tas, bag, or a reduced-/t/ version of tast, touch). Eye movements of Dutch participants were tracked as they looked at arrays containing 4 printed words, each associated with a geometrical shape. Minimal pairs (e.g., tas/tast) were either both above (boven) or both next to (naast) different shapes. Spoken instructions (e.g., “Klik op het woordje tas boven de ster,” [Click on the word bag above the star]) thus became unambiguous only on their final words. Prior to disambiguation, listeners' fixations were drawn to /t/-final words more when boven than when naast followed the ambiguous sequences. This behavior reflects Dutch speech-production data: /t/ is reduced more before /b/ than before /n/. We thus argue that probabilistic knowledge about the effect of following context in speech production is used prelexically in perception to help resolve lexical ambiguities caused by continuous-speech processes.
  • Mitterer, H., Horschig, J. M., Müsseler, J., & Majid, A. (2009). The influence of memory on perception: It's not what things look like, it's what you call them. Journal of Experimental Psychology: Learning, Memory, and Cognition, 35(6), 1557-1562. doi:10.1037/a0017019.

    Abstract

    World knowledge influences how we perceive the world. This study shows that this influence is at least partly mediated by declarative memory. Dutch and German participants categorized hues from a yellow-to-orange continuum on stimuli that were prototypically orange or yellow and that were also associated with these color labels. Both groups gave more “yellow” responses if an ambiguous hue occurred on a prototypically yellow stimulus. The language groups were also tested on a stimulus (traffic light) that is associated with the label orange in Dutch and with the label yellow in German, even though the objective color is the same for both populations. Dutch observers categorized this stimulus as orange more often than German observers, in line with the assumption that declarative knowledge mediates the influence of world knowledge on color categorization.

  • Monaghan, P., Donnelly, S., Alcock, K., Bidgood, A., Cain, K., Durrant, S., Frost, R. L. A., Jago, L. S., Peter, M. S., Pine, J. M., Turnbull, H., & Rowland, C. F. (2023). Learning to generalise but not segment an artificial language at 17 months predicts children’s language skills 3 years later. Cognitive Psychology, 147: 101607. doi:10.1016/j.cogpsych.2023.101607.

    Abstract

    We investigated whether learning an artificial language at 17 months was predictive of children’s natural language vocabulary and grammar skills at 54 months. Children at 17 months listened to an artificial language containing non-adjacent dependencies, and were then tested on their learning to segment and to generalise the structure of the language. At 54 months, children were then tested on a range of standardised natural language tasks that assessed receptive and expressive vocabulary and grammar. A structural equation model demonstrated that generalisation of the artificial language at 17 months predicted language abilities – a composite of vocabulary and grammar skills – at 54 months, whereas segmentation of the artificial language at 17 months did not predict language abilities at this age. Artificial language learning tasks – especially those that probe grammar learning – provide a valuable tool for uncovering the mechanisms driving children’s early language development.

  • Mooijman, S., Schoonen, R., Ruiter, M. B., & Roelofs, A. (2023). Voluntary and cued language switching in late bilingual speakers. Bilingualism: Language and Cognition. Advance online publication. doi:10.1017/S1366728923000755.

    Abstract

    Previous research examining the factors that determine language choice and voluntary switching mainly involved early bilinguals. Here, using picture naming, we investigated language choice and switching in late Dutch–English bilinguals. We found that naming was overall slower in cued than in voluntary switching, but switch costs occurred in both types of switching. The magnitude of switch costs differed depending on the task and language, and was moderated by L2 proficiency. Self-rated rather than objectively assessed proficiency predicted voluntary switching, and ease of lexical access was associated with language choice. Between-language and within-language switch costs were not correlated. These results highlight self-rated proficiency as a reliable predictor of voluntary switching, with language modulating switch costs. As in early bilinguals, ease of lexical access was related to word-level language choice of late bilinguals.
  • Morison, L., Meffert, E., Stampfer, M., Steiner-Wilke, I., Vollmer, B., Schulze, K., Briggs, T., Braden, R., Vogel, A. P., Thompson-Lake, D., Patel, C., Blair, E., Goel, H., Turner, S., Moog, U., Riess, A., Liegeois, F., Koolen, D. A., Amor, D. J., Kleefstra, T., Fisher, S. E., Zweier, C., & Morgan, A. T. (2023). In-depth characterisation of a cohort of individuals with missense and loss-of-function variants disrupting FOXP2. Journal of Medical Genetics, 60(6), 597-607. doi:10.1136/jmg-2022-108734.

    Abstract

    Background
    Heterozygous disruptions of FOXP2 were the first identified molecular cause of a severe speech disorder, childhood apraxia of speech (CAS); yet few cases have been reported, limiting knowledge of the condition.

    Methods
    Here we phenotyped 29 individuals from 18 families with pathogenic FOXP2-only variants (13 loss-of-function, 5 missense variants; 14 males; aged 2 years to 62 years). Health and development (cognitive, motor, social domains) were examined, including speech and language outcomes, with the first cross-linguistic analysis of English and German.

    Results
    Speech disorders were prevalent (24/26, 92%) and CAS was most common (23/26, 89%), with similar speech presentations across English and German. Speech was still impaired in adulthood and some speech sounds (e.g. ‘th’, ‘r’, ‘ch’, ‘j’) were never acquired. Language impairments (22/26, 85%) ranged from mild to severe. Comorbidities included feeding difficulties in infancy (10/27, 37%), fine (14/27, 52%) and gross (14/27, 52%) motor impairment, anxiety (6/28, 21%), depression (7/28, 25%), and sleep disturbance (11/15, 44%). Physical features were common (23/28, 82%) but with no consistent pattern. Cognition ranged from average to mildly impaired, and was incongruent with language ability; for example, seven participants with severe language disorder had average non-verbal cognition.

    Conclusions
    Although we identify increased prevalence of conditions like anxiety, depression and sleep disturbance, we confirm that the consequences of FOXP2 dysfunction remain relatively specific to speech disorder, as compared to other recently identified monogenic conditions associated with CAS. Thus, our findings reinforce that FOXP2 provides a valuable entry point for examining the neurobiological bases of speech disorder.
  • Muhinyi, A., & Rowland, C. F. (2023). Contributions of abstract extratextual talk and interactive style to preschoolers’ vocabulary development. Journal of Child Language, 50(1), 198-213. doi:10.1017/S0305000921000696.

    Abstract

    Caregiver abstract talk during shared reading predicts preschool-age children’s vocabulary development. However, previous research has focused on level of abstraction with less consideration of the style of extratextual talk. Here, we investigated the relation between these two dimensions of extratextual talk, and their contributions to variance in children’s vocabulary skills. Caregiver level of abstraction was associated with an interactive reading style. Controlling for socioeconomic status and child age, high interactivity predicted children’s concurrent vocabulary skills whereas abstraction did not. Controlling for earlier vocabulary skills, neither dimension of the extratextual talk predicted later vocabulary. Theoretical and practical relevance are discussed.
  • Need, A. C., Ge, D., Weale, M. E., Maia, J., Feng, S., Heinzen, E. L., Shianna, K. V., Yoon, W., Kasperavičiūtė, D., Gennarelli, M., Strittmatter, W. J., Bonvicini, C., Rossi, G., Jayathilake, K., Cola, P. A., McEvoy, J. P., Keefe, R. S. E., Fisher, E. M. C., St. Jean, P. L., Giegling, I., Hartmann, A. M., Möller, H.-J., Ruppert, A., Fraser, G., Crombie, C., Middleton, L. T., St. Clair, D., Roses, A. D., Muglia, P., Francks, C., Rujescu, D., Meltzer, H. Y., & Goldstein, D. B. (2009). A genome-wide investigation of SNPs and CNVs in schizophrenia. PLoS Genetics, 5(2), e1000373. doi:10.1371/journal.pgen.1000373.

    Abstract

    We report a genome-wide assessment of single nucleotide polymorphisms (SNPs) and copy number variants (CNVs) in schizophrenia. We investigated SNPs using 871 patients and 863 controls, following up the top hits in four independent cohorts comprising 1,460 patients and 12,995 controls, all of European origin. We found no genome-wide significant associations, nor could we provide support for any previously reported candidate gene or genome-wide associations. We went on to examine CNVs using a subset of 1,013 cases and 1,084 controls of European ancestry, and a further set of 60 cases and 64 controls of African ancestry. We found that eight cases and zero controls carried deletions greater than 2 Mb, of which two, at 8p22 and 16p13.11-p12.4, are newly reported here. A further evaluation of 1,378 controls identified no deletions greater than 2 Mb, suggesting a high prior probability of disease involvement when such deletions are observed in cases. We also provide further evidence for some smaller, previously reported, schizophrenia-associated CNVs, such as those in NRXN1 and APBA2. We could not provide strong support for the hypothesis that schizophrenia patients have a significantly greater “load” of large (>100 kb), rare CNVs, nor could we find common CNVs that associate with schizophrenia. Finally, we did not provide support for the suggestion that schizophrenia-associated CNVs may preferentially disrupt genes in neurodevelopmental pathways. Collectively, these analyses provide the first integrated study of SNPs and CNVs in schizophrenia and support the emerging view that rare deleterious variants may be more important in schizophrenia predisposition than common polymorphisms. While our analyses do not suggest that implicated CNVs impinge on particular key pathways, we do support the contribution of specific genomic regions in schizophrenia, presumably due to recurrent mutation. On balance, these data suggest that very few schizophrenia patients share identical genomic causation, potentially complicating efforts to personalize treatment regimens.
  • Newbury, D. F., Cleak, J. D., Ishikawa-Brush, Y., Marlow, A. J., Fisher, S. E., Monaco, A. P., Stott, C. M., Merricks, M. J., Goodyer, I. M., Bolton, P. F., Jannoun, L., Slonims, V., Baird, G., Pickles, A., Bishop, D. V. M., Helms, P. J., & The SLI Consortium (2002). A genomewide scan identifies two novel loci involved in specific language impairment. American Journal of Human Genetics, 70(2), 384-398. doi:10.1086/338649.

    Abstract

    Approximately 4% of English-speaking children are affected by specific language impairment (SLI), a disorder in the development of language skills despite adequate opportunity and normal intelligence. Several studies have indicated the importance of genetic factors in SLI; a positive family history confers an increased risk of development, and concordance in monozygotic twins consistently exceeds that in dizygotic twins. However, like many behavioral traits, SLI is assumed to be genetically complex, with several loci contributing to the overall risk. We have compiled 98 families drawn from epidemiological and clinical populations, all with probands whose standard language scores fall ⩾1.5 SD below the mean for their age. Systematic genomewide quantitative-trait–locus analysis of three language-related measures (i.e., the Clinical Evaluation of Language Fundamentals–Revised [CELF-R] receptive and expressive scales and the nonword repetition [NWR] test) yielded two regions, one on chromosome 16 and one on 19, that both had maximum LOD scores of 3.55. Simulations suggest that, of these two multipoint results, the NWR linkage to chromosome 16q is the most significant, with empirical P values reaching 10−5, under both Haseman-Elston (HE) analysis (LOD score 3.55; P=.00003) and variance-components (VC) analysis (LOD score 2.57; P=.00008). Single-point analyses provided further support for involvement of this locus, with three markers, under the peak of linkage, yielding LOD scores >1.9. The 19q locus was linked to the CELF-R expressive-language score and exceeds the threshold for suggestive linkage under all types of analysis performed—multipoint HE analysis (LOD score 3.55; empirical P=.00004) and VC (LOD score 2.84; empirical P=.00027) and single-point HE analysis (LOD score 2.49) and VC (LOD score 2.22). Furthermore, both the clinical and epidemiological samples showed independent evidence of linkage on both chromosome 16q and chromosome 19q, indicating that these may represent universally important loci in SLI and, thus, general risk factors for language impairment.
  • Newbury, D. F., Winchester, L., Addis, L., Paracchini, S., Buckingham, L.-L., Clark, A., Cohen, W., Cowie, H., Dworzynski, K., Everitt, A., Goodyer, I. M., Hennessy, E., Kindley, A. D., Miller, L. L., Nasir, J., O'Hare, A., Shaw, D., Simkin, Z., Simonoff, E., Slonims, V., Watson, J., Ragoussis, J., Fisher, S. E., Seckl, J. R., Helms, P. J., Bolton, P. F., Pickles, A., Conti-Ramsden, G., Baird, G., Bishop, D. V., & Monaco, A. P. (2009). CMIP and ATP2C2 modulate phonological short-term memory in language impairment. American Journal of Human Genetics, 85(2), 264-272. doi:10.1016/j.ajhg.2009.07.004.

    Abstract

    Specific language impairment (SLI) is a common developmental disorder characterized by difficulties in language acquisition despite otherwise normal development and in the absence of any obvious explanatory factors. We performed a high-density screen of SLI1, a region of chromosome 16q that shows highly significant and consistent linkage to nonword repetition, a measure of phonological short-term memory that is commonly impaired in SLI. Using two independent language-impaired samples, one family-based (211 families) and another selected from a population cohort on the basis of extreme language measures (490 cases), we detected association to two genes in the SLI1 region: that encoding c-maf-inducing protein (CMIP, minP = 5.5 × 10−7 at rs6564903) and that encoding calcium-transporting ATPase, type2C, member2 (ATP2C2, minP = 2.0 × 10−5 at rs11860694). Regression modeling indicated that each of these loci exerts an independent effect upon nonword repetition ability. Despite the consistent findings in language-impaired samples, investigation in a large unselected cohort (n = 3612) did not detect association. We therefore propose that variants in CMIP and ATP2C2 act to modulate phonological short-term memory primarily in the context of language impairment. As such, this investigation supports the hypothesis that some causes of language impairment are distinct from factors that influence normal language variation. This work therefore implicates CMIP and ATP2C2 in the etiology of SLI and provides molecular evidence for the importance of phonological short-term memory in language acquisition.

  • Newbury, D. F., Bonora, E., Lamb, J. A., Fisher, S. E., Lai, C. S. L., Baird, G., Jannoun, L., Slonims, V., Stott, C. M., Merricks, M. J., Bolton, P. F., Bailey, A. J., Monaco, A. P., & International Molecular Genetic Study of Autism Consortium (2002). FOXP2 is not a major susceptibility gene for autism or specific language impairment. American Journal of Human Genetics, 70(5), 1318-1327. doi:10.1086/339931.

    Abstract

    The FOXP2 gene, located on human 7q31 (at the SPCH1 locus), encodes a transcription factor containing a polyglutamine tract and a forkhead domain. FOXP2 is mutated in a severe monogenic form of speech and language impairment, segregating within a single large pedigree, and is also disrupted by a translocation in an isolated case. Several studies of autistic disorder have demonstrated linkage to a similar region of 7q (the AUTS1 locus), leading to the proposal that a single genetic factor on 7q31 contributes to both autism and language disorders. In the present study, we directly evaluate the impact of the FOXP2 gene with regard to both complex language impairments and autism, through use of association and mutation screening analyses. We conclude that coding-region variants in FOXP2 do not underlie the AUTS1 linkage and that the gene is unlikely to play a role in autism or more common forms of language impairment.
  • Newman-Norlund, S. E., Noordzij, M. L., Newman-Norlund, R. D., Volman, I. A., De Ruiter, J. P., Hagoort, P., & Toni, I. (2009). Recipient design in tacit communication. Cognition, 111, 46-54. doi:10.1016/j.cognition.2008.12.004.

    Abstract

    The ability to design tailored messages for specific listeners is an important aspect of human communication. The present study investigates whether a mere belief about an addressee’s identity influences the generation and production of a communicative message in a novel, non-verbal communication task. Participants were made to believe they were playing a game with a child or an adult partner, while a confederate acted as both child and adult partners with matched performance and response times. The participants’ belief influenced their behavior: they spent longer when interacting with the presumed child addressee, but only during communicative portions of the game, i.e. using time as a tool to place emphasis on target information. This communicative adaptation attenuated with experience, and it was related to personality traits, namely Empathy and Need for Cognition measures. Overall, these findings indicate that novel nonverbal communicative interactions are selected according to a socio-centric perspective, and they are strongly influenced by participants’ traits.
  • Niemi, J., Laine, M., & Järvikivi, J. (2009). Paradigmatic and extraparadigmatic morphology in the mental lexicon: Experimental evidence for a dissociation. The mental lexicon, 4(1), 26-40. doi:10.1075/ml.4.1.02nie.

    Abstract

    The present study discusses psycholinguistic evidence for a difference between paradigmatic and extraparadigmatic morphology by investigating the processing of Finnish inflected and cliticized words. The data are derived from three sources of Finnish: from single-word reading performance in an agrammatic deep dyslexic speaker, as well as from visual lexical decision and wordness/learnability ratings of cliticized vs. inflected items by normal Finnish speakers. The agrammatic speaker showed awareness of the suffixes in multimorphemic words, including clitics, since he attempted to fill in this slot with morphological material. However, he never produced a clitic — either as the correct response or as an error — in any morphological configuration (simplex, derived, inflected, compound). Moreover, he produced more nominative singular errors for case-inflected nouns than he did for the cliticized words, a pattern that is expected if case-inflected forms were closely associated with their lexical heads, i.e., if they were paradigmatic and cliticized words were not. Furthermore, a visual lexical decision task with normal speakers of Finnish showed an additional processing cost (longer latencies and more errors) for cliticized than for case-inflected noun forms. Finally, a rating task indicated no difference in relative wordness between these two types of words. However, the same cliticized words were judged harder to learn as L2 items than the inflected words, most probably due to their conceptual/semantic properties, in other words due to their lack of word-level translation equivalents in SAE (Standard Average European) languages. Taken together, the present results suggest that the distinction between paradigmatic and extraparadigmatic morphology is psychologically real.
  • Nijland, L., & Janse, E. (Eds.). (2009). Auditory processing in speakers with acquired or developmental language disorders [Special Issue]. Clinical Linguistics and Phonetics, 23(3).
  • Noordman, L. G. M., & Vonk, W. (1998). Memory-based processing in understanding causal information. Discourse Processes, 191-212. doi:10.1080/01638539809545044.

    Abstract

    The reading process depends both on the text and on the reader. When we read a text, propositions in the current input are matched to propositions in the memory representation of the previous discourse but also to knowledge structures in long‐term memory. Therefore, memory‐based text processing refers both to the bottom‐up processing of the text and to the top‐down activation of the reader's knowledge. In this article, we focus on the role of cognitive structures in the reader's knowledge. We argue that causality is an important category in structuring human knowledge and that this property has consequences for text processing. Some research is discussed that illustrates that the more the information in the text reflects causal categories, the more easily the information is processed.
  • Noordzij, M., Newman-Norlund, S. E., De Ruiter, J. P., Hagoort, P., Levinson, S. C., & Toni, I. (2009). Brain mechanisms underlying human communication. Frontiers in Human Neuroscience, 3:14. doi:10.3389/neuro.09.014.2009.

    Abstract

    Human communication has been described as involving the coding-decoding of a conventional symbol system, which could be supported by parts of the human motor system (i.e. the “mirror neuron system”). However, this view does not explain how these conventions could develop in the first place. Here we target the neglected but crucial issue of how people organize their non-verbal behavior to communicate a given intention without pre-established conventions. We have measured behavioral and brain responses in pairs of subjects during communicative exchanges occurring in a real, interactive, on-line social context. In two fMRI studies, we found robust evidence that planning new communicative actions (by a sender) and recognizing the communicative intention of the same actions (by a receiver) relied on spatially overlapping portions of their brains (the right posterior superior temporal sulcus). The response of this region was lateralized to the right hemisphere, modulated by the ambiguity in meaning of the communicative acts, but not by their sensorimotor complexity. These results indicate that the sender of a communicative signal uses his own intention recognition system to make a prediction of the intention recognition performed by the receiver. This finding supports the notion that our communicative abilities are distinct from both sensorimotor processes and language abilities.
  • Norris, D., McQueen, J. M., & Cutler, A. (2002). Bias effects in facilitatory phonological priming. Memory & Cognition, 30(3), 399-411.

    Abstract

    In four experiments, we examined the facilitation that occurs when spoken-word targets rhyme with preceding spoken primes. In Experiment 1, listeners’ lexical decisions were faster to words following rhyming words (e.g., ramp–LAMP) than to words following unrelated primes (e.g., pink–LAMP). No facilitation was observed for nonword targets. Targets that almost rhymed with their primes (foils; e.g., bulk–SULSH) were included in Experiment 2; facilitation for rhyming targets was severely attenuated. Experiments 3 and 4 were single-word shadowing variants of the earlier experiments. There was facilitation for both rhyming words and nonwords; the presence of foils had no significant influence on the priming effect. A major component of the facilitation in lexical decision appears to be strategic: Listeners are biased to say “yes” to targets that rhyme with their primes, unless foils discourage this strategy. The nonstrategic component of phonological facilitation may reflect speech perception processes that operate prior to lexical access.
  • Nota, N., Trujillo, J. P., & Holler, J. (2023). Specific facial signals associate with categories of social actions conveyed through questions. PLoS One, 18(7): e0288104. doi:10.1371/journal.pone.0288104.

    Abstract

    The early recognition of fundamental social actions, like questions, is crucial for understanding the speaker’s intended message and planning a timely response in conversation. Questions themselves may express more than one social action category (e.g., an information request “What time is it?”, an invitation “Will you come to my party?” or a criticism “Are you crazy?”). Although human language use occurs predominantly in a multimodal context, prior research on social actions has mainly focused on the verbal modality. This study breaks new ground by investigating how conversational facial signals may map onto the expression of different types of social actions conveyed through questions. The distribution, timing, and temporal organization of facial signals across social actions were analysed in a rich corpus of naturalistic, dyadic face-to-face Dutch conversations. These social actions were: Information Requests, Understanding Checks, Self-Directed questions, Stance or Sentiment questions, Other-Initiated Repairs, Active Participation questions, questions for Structuring, Initiating or Maintaining Conversation, and Plans and Actions questions. This is the first study to reveal differences in distribution and timing of facial signals across different types of social actions. The findings raise the possibility that facial signals may facilitate social action recognition during language processing in multimodal face-to-face interaction.

  • Nota, N., Trujillo, J. P., Jacobs, V., & Holler, J. (2023). Facilitating question identification through natural intensity eyebrow movements in virtual avatars. Scientific Reports, 13: 21295. doi:10.1038/s41598-023-48586-4.

    Abstract

    In conversation, recognizing social actions (similar to ‘speech acts’) early is important to quickly understand the speaker’s intended message and to provide a fast response. Fast turns are typical for fundamental social actions like questions, since a long gap can indicate a dispreferred response. In multimodal face-to-face interaction, visual signals may contribute to this fast dynamic. The face is an important source of visual signalling, and previous research found that prevalent facial signals such as eyebrow movements facilitate the rapid recognition of questions. We aimed to investigate whether early eyebrow movements with natural movement intensities facilitate question identification, and whether specific intensities are more helpful in detecting questions. Participants were instructed to view videos of avatars where the presence of eyebrow movements (eyebrow frown or raise vs. no eyebrow movement) was manipulated, and to indicate whether the utterance in the video was a question or statement. Results showed higher accuracies for questions with eyebrow frowns, and faster response times for questions with eyebrow frowns and eyebrow raises. No additional effect was observed for the specific movement intensity. This suggests that eyebrow movements that are representative of naturalistic multimodal behaviour facilitate question recognition.
  • Nota, N., Trujillo, J. P., & Holler, J. (2023). Conversational eyebrow frowns facilitate question identification: An online study using virtual avatars. Cognitive Science, 47(12): e13392. doi:10.1111/cogs.13392.

    Abstract

    Conversation is a time-pressured environment. Recognizing a social action (the ‘‘speech act,’’ such as a question requesting information) early is crucial in conversation to quickly understand the intended message and plan a timely response. Fast turns between interlocutors are especially relevant for responses to questions since a long gap may be meaningful by itself. Human language is multimodal, involving speech as well as visual signals from the body, including the face. But little is known about how conversational facial signals contribute to the communication of social actions. Some of the most prominent facial signals in conversation are eyebrow movements. Previous studies found links between eyebrow movements and questions, suggesting that these facial signals could contribute to the rapid recognition of questions. Therefore, we aimed to investigate whether early eyebrow movements (eyebrow frown or raise vs. no eyebrow movement) facilitate question identification. Participants were instructed to view videos of avatars where the presence of eyebrow movements accompanying questions was manipulated. Their task was to indicate whether the utterance was a question or a statement as accurately and quickly as possible. Data were collected using the online testing platform Gorilla. Results showed higher accuracies and faster response times for questions with eyebrow frowns, suggesting a facilitative role of eyebrow frowns for question identification. This means that facial signals can critically contribute to the communication of social actions in conversation by signaling social action-specific visual information and providing visual cues to speakers’ intentions.

  • Nozais, V., Forkel, S. J., Petit, L., Talozzi, L., Corbetta, M., Thiebaut de Schotten, M., & Joliot, M. (2023). Atlasing white matter and grey matter joint contributions to resting-state networks in the human brain. Communications Biology, 6: 726. doi:10.1038/s42003-023-05107-3.

    Abstract

    Over the past two decades, the study of resting-state functional magnetic resonance imaging has revealed that functional connectivity within and between networks is linked to cognitive states and pathologies. However, the white matter connections supporting this connectivity remain only partially described. We developed a method to jointly map the white and grey matter contributing to each resting-state network (RSN). Using the Human Connectome Project, we generated an atlas of 30 RSNs. The method also highlighted the overlap between networks, which revealed that most of the brain’s white matter (89%) is shared between multiple RSNs, with 16% shared by at least 7 RSNs. These overlaps, especially the existence of regions shared by numerous networks, suggest that white matter lesions in these areas might strongly impact the communication within networks. We provide an atlas and an open-source software to explore the joint contribution of white and grey matter to RSNs and facilitate the study of the impact of white matter damage to these networks. In a first application of the software with clinical data, we were able to link stroke patients and impacted RSNs, showing that their symptoms aligned well with the estimated functions of the networks.
  • Numssen, O., van der Burght, C. L., & Hartwigsen, G. (2023). Revisiting the focality of non-invasive brain stimulation - implications for studies of human cognition. Neuroscience and Biobehavioral Reviews, 149: 105154. doi:10.1016/j.neubiorev.2023.105154.

    Abstract

    Non-invasive brain stimulation techniques are popular tools to investigate brain function in health and disease. Although transcranial magnetic stimulation (TMS) is widely used in cognitive neuroscience research to probe causal structure-function relationships, studies often yield inconclusive results. To improve the effectiveness of TMS studies, we argue that the cognitive neuroscience community needs to revise the stimulation focality principle – the spatial resolution with which TMS can differentially stimulate cortical regions. In the motor domain, TMS can differentiate between cortical muscle representations of adjacent fingers. However, this high degree of spatial specificity cannot be obtained in all cortical regions due to the influences of cortical folding patterns on the TMS-induced electric field. The region-dependent focality of TMS should be assessed a priori to estimate the experimental feasibility. Post-hoc simulations allow modeling of the relationship between cortical stimulation exposure and behavioral modulation by integrating data across stimulation sites or subjects.

  • Nyberg, L., Forkstam, C., Petersson, K. M., Cabeza, R., & Ingvar, M. (2002). Brain imaging of human memory systems: Between-systems similarities and within-system differences. Cognitive Brain Research, 13(2), 281-292. doi:10.1016/S0926-6410(02)00052-6.

    Abstract

    There is much evidence for the existence of multiple memory systems. However, it has been argued that tasks assumed to reflect different memory systems share basic processing components and are mediated by overlapping neural systems. Here we used multivariate analysis of PET-data to analyze similarities and differences in brain activity for multiple tests of working memory, semantic memory, and episodic memory. The results from two experiments revealed between-systems differences, but also between-systems similarities and within-system differences. Specifically, support was obtained for a task-general working-memory network that may underlie active maintenance. Premotor and parietal regions were salient components of this network. A common network was also identified for two episodic tasks, cued recall and recognition, but not for a test of autobiographical memory. This network involved regions in right inferior and polar frontal cortex, and lateral and medial parietal cortex. Several of these regions were also engaged during the working-memory tasks, indicating shared processing for episodic and working memory. Fact retrieval and synonym generation were associated with increased activity in left inferior frontal and middle temporal regions and right cerebellum. This network was also associated with the autobiographical task, but not with living/non-living classification, and may reflect elaborate retrieval of semantic information. Implications of the present results for the classification of memory tasks with respect to systems and/or processes are discussed.
  • Obleser, J., & Eisner, F. (2009). Pre-lexical abstraction of speech in the auditory cortex. Trends in Cognitive Sciences, 13, 14-19. doi:10.1016/j.tics.2008.09.005.

    Abstract

    Speech perception requires the decoding of complex acoustic patterns. According to most cognitive models of spoken word recognition, this complexity is dealt with before lexical access via a process of abstraction from the acoustic signal to pre-lexical categories. It is currently unclear how these categories are implemented in the auditory cortex. Recent advances in animal neurophysiology and human functional imaging have made it possible to investigate the processing of speech in terms of probabilistic cortical maps rather than simple cognitive subtraction, which will enable us to relate neurometric data more directly to behavioural studies. We suggest that integration of insights from cognitive science, neurophysiology and functional imaging is necessary for furthering our understanding of pre-lexical abstraction in the cortex.

  • O'Brien, D. P., & Bowerman, M. (1998). Martin D. S. Braine (1926–1996): Obituary. American Psychologist, 53, 563. doi:10.1037/0003-066X.53.5.563.

    Abstract

    Memorializes Martin D. S. Braine, whose research on child language acquisition and on both child and adult thinking and reasoning had a major influence on modern cognitive psychology. Addressing meaning as well as position, Braine argued that children start acquiring language by learning narrow-scope positional formulas that map components of meaning to positions in the utterance. These proposals were critical in starting discussions of the possible universality of the pivot-grammar stage and of the role of syntax, semantics, and pragmatics in children's early grammar and were pivotal to the rise of approaches in which cognitive development in language acquisition is stressed.
  • Ogasawara, N., & Warner, N. (2009). Processing missing vowels: Allophonic processing in Japanese. Language and Cognitive Processes, 24, 376-411. doi:10.1080/01690960802084028.

    Abstract

    The acoustic realisation of a speech sound varies, often showing allophonic variation triggered by surrounding sounds. Listeners recognise words and sounds well despite such variation, and even make use of allophonic variability in processing. This study reports five experiments on processing of the reduced/unreduced allophonic alternation of Japanese high vowels. The results show that listeners use phonological knowledge of their native language during phoneme processing and word recognition. However, interactions of the phonological and acoustic effects differ in these two processes. A facilitatory phonological effect and an inhibitory acoustic effect cancel one another out in phoneme processing; while in word recognition, the facilitatory phonological effect overrides the inhibitory acoustic effect. Four potential models of the processing of allophonic variation are discussed. The results can be accommodated in two of them, but require additional assumptions or modifications to the models, and primarily support lexical specification of allophonic variability.

  • Oliveira‑Stahl, G., Farboud, S., Sterling, M. L., Heckman, J. J., Van Raalte, B., Lenferink, D., Van der Stam, A., Smeets, C. J. L. M., Fisher, S. E., & Englitz, B. (2023). High-precision spatial analysis of mouse courtship vocalization behavior reveals sex and strain differences. Scientific Reports, 13: 5219. doi:10.1038/s41598-023-31554-3.

    Abstract

    Mice display a wide repertoire of vocalizations that varies with sex, strain, and context. Especially during social interaction, including sexually motivated dyadic interaction, mice emit sequences of ultrasonic vocalizations (USVs) of high complexity. As animals of both sexes vocalize, a reliable attribution of USVs to their emitter is essential. The state-of-the-art in sound localization for USVs in 2D allows spatial localization at a resolution of multiple centimeters. However, animals interact at closer ranges, e.g. snout-to-snout. Hence, improved algorithms are required to reliably assign USVs. We present a novel algorithm, SLIM (Sound Localization via Intersecting Manifolds), that achieves a 2–3-fold improvement in accuracy (13.1–14.3 mm) using only 4 microphones and extends to many microphones and localization in 3D. This accuracy allows reliable assignment of 84.3% of all USVs in our dataset. We apply SLIM to courtship interactions between adult C57Bl/6J wildtype mice and those carrying a heterozygous Foxp2 variant (R552H). The improved spatial accuracy reveals that vocalization behavior is dependent on the spatial relation between the interacting mice. Female mice vocalized more in close snout-to-snout interaction while male mice vocalized more when the male snout was in close proximity to the female's ano-genital region. Further, we find that the acoustic properties of the ultrasonic vocalizations (duration, Wiener Entropy, and sound level) are dependent on the spatial relation between the interacting mice as well as on the genotype. In conclusion, the improved attribution of vocalizations to their emitters provides a foundation for better understanding social vocal behaviors.

  • Orfanidou, E., Adam, R., McQueen, J. M., & Morgan, G. (2009). Making sense of nonsense in British Sign Language (BSL): The contribution of different phonological parameters to sign recognition. Memory & Cognition, 37(3), 302-315. doi:10.3758/MC.37.3.302.

    Abstract

    Do all components of a sign contribute equally to its recognition? In the present study, misperceptions in the sign-spotting task (based on the word-spotting task; Cutler & Norris, 1988) were analyzed to address this question. Three groups of deaf signers of British Sign Language (BSL) with different ages of acquisition (AoA) saw BSL signs combined with nonsense signs, along with combinations of two nonsense signs. They were asked to spot real signs and report what they had spotted. We will present an analysis of false alarms to the nonsense-sign combinations—that is, misperceptions of nonsense signs as real signs (cf. van Ooijen, 1996). Participants modified the movement and handshape parameters more than the location parameter. Within this pattern, however, there were differences as a function of AoA. These results show that the theoretical distinctions between form-based parameters in sign-language models have consequences for online processing. Vowels and consonants have different roles in speech recognition; similarly, it appears that movement, handshape, and location parameters contribute differentially to sign recognition.
  • Otten, M., & Van Berkum, J. J. A. (2009). Does working memory capacity affect the ability to predict upcoming words in discourse? Brain Research, 1291, 92-101. doi:10.1016/j.brainres.2009.07.042.

    Abstract

    Prior research has indicated that readers and listeners can use information in the prior discourse to rapidly predict specific upcoming words, as the text is unfolding. Here we used event-related potentials to explore whether the ability to make rapid online predictions depends on a reader's working memory capacity (WMC). Readers with low WMC were hypothesized to differ from high WMC readers either in their overall capability to make predictions (because of a lack of cognitive resources) or in the way they deal with information that disconfirms a prediction. High and low WMC participants read highly constraining stories that supported the prediction of a specific noun, mixed with coherent but essentially unpredictive ‘prime control’ stories that contained the same content words as the predictive stories. To test whether readers were anticipating upcoming words, critical nouns were preceded by a determiner whose gender agreed or disagreed with the gender of the expected noun. In predictive stories, both high and low WMC readers displayed an early negative deflection (300–600 ms) to unexpected determiners, which was not present in prime control stories. Only the low WMC participants displayed an additional later negativity (900–1500 ms) to unexpected determiners. This pattern of results suggests that WMC does not influence the ability to anticipate upcoming words per se, but does change the way in which readers deal with information that disconfirms the generated prediction.
  • Özer, D., Karadöller, D. Z., Özyürek, A., & Göksun, T. (2023). Gestures cued by demonstratives in speech guide listeners' visual attention during spatial language comprehension. Journal of Experimental Psychology: General, 152(9), 2623-2635. doi:10.1037/xge0001402.

    Abstract

    Gestures help speakers and listeners during communication and thinking, particularly for visual-spatial information. Speakers tend to use gestures to complement the accompanying spoken deictic constructions, such as demonstratives, when communicating spatial information (e.g., saying “The candle is here” and gesturing to the right side to express that the candle is on the speaker's right). Visual information conveyed by gestures enhances listeners’ comprehension. Whether and how listeners allocate overt visual attention to gestures in different speech contexts is mostly unknown. We asked if (a) listeners gazed at gestures more when they complement demonstratives in speech (“here”) compared to when they express redundant information to speech (e.g., “right”) and (b) gazing at gestures related to listeners’ information uptake from those gestures. We demonstrated that listeners fixated gestures more when they expressed complementary than redundant information in the accompanying speech. Moreover, overt visual attention to gestures did not predict listeners’ comprehension. These results suggest that the heightened communicative value of gestures as signaled by external cues, such as demonstratives, guides listeners’ visual attention to gestures. However, overt visual attention does not seem to be necessary to extract the cued information from the multimodal message.
  • Ozyurek, A. (2002). Do speakers design their co-speech gestures for their addressees? The effects of addressee location on representational gestures. Journal of Memory and Language, 46(4), 688-704. doi:10.1006/jmla.2001.2826.

    Abstract

    Do speakers use spontaneous gestures accompanying their speech for themselves or to communicate their message to their addressees? Two experiments show that speakers change the orientation of their gestures depending on the location of shared space, that is, the intersection of the gesture spaces of the speakers and addressees. Gesture orientations change more frequently when they accompany spatial prepositions such as into and out, which describe motion that has a beginning and end point, rather than across, which depicts an unbounded path across space. Speakers change their gestures so that they represent the beginning and end point of motion INTO or OUT by moving into or out of the shared space. Thus, speakers design their gestures for their addressees and therefore use them to communicate. This has implications for the view that gestures are a part of language use as well as for the role of gestures in speech production.
  • Parlatini, V., Itahashi, T., Lee, Y., Liu, S., Nguyen, T. T., Aoki, Y. Y., Forkel, S. J., Catani, M., Rubia, K., Zhou, J. H., Murphy, D. G., & Cortese, S. (2023). White matter alterations in Attention-Deficit/Hyperactivity Disorder (ADHD): a systematic review of 129 diffusion imaging studies with meta-analysis. Molecular Psychiatry, 28, 4098-4123. doi:10.1038/s41380-023-02173-1.

    Abstract

    Aberrant anatomical brain connections in attention-deficit/hyperactivity disorder (ADHD) are reported inconsistently across diffusion weighted imaging (DWI) studies. Based on a pre-registered protocol (Prospero: CRD42021259192), we searched PubMed, Ovid, and Web of Knowledge until 26/03/2022 to conduct a systematic review of DWI studies. We performed a quality assessment based on imaging acquisition, preprocessing, and analysis. Using signed differential mapping, we meta-analyzed a subset of the retrieved studies amenable to quantitative evidence synthesis, i.e., tract-based spatial statistics (TBSS) studies, in individuals of any age and, separately, in children, adults, and high-quality datasets. Finally, we conducted meta-regressions to test the effect of age, sex, and medication-naïvety. We included 129 studies (6739 ADHD participants and 6476 controls), of which 25 TBSS studies provided peak coordinates for case-control differences in fractional anisotropy (FA) (32 datasets) and 18 in mean diffusivity (MD) (23 datasets). The systematic review highlighted white matter alterations (especially reduced FA) in projection, commissural and association pathways of individuals with ADHD, which were associated with symptom severity and cognitive deficits. The meta-analysis showed a consistent reduced FA in the splenium and body of the corpus callosum, extending to the cingulum. Lower FA was related to older age, and case-control differences did not survive in the pediatric meta-analysis. About 68% of studies were of low quality, mainly due to acquisitions with non-isotropic voxels or lack of motion correction; and the sensitivity analysis in high-quality datasets yielded no significant results. Findings suggest prominent alterations in posterior interhemispheric connections subserving cognitive and motor functions affected in ADHD, although these might be influenced by non-optimal acquisition parameters/preprocessing. Absence of findings in children may be related to the late development of callosal fibers, which may enhance case-control differences in adulthood. Clinicodemographic and methodological differences were major barriers to consistency and comparability among studies, and should be addressed in future investigations.
  • Passmore, S., Barth, W., Greenhill, S. J., Quinn, K., Sheard, C., Argyriou, P., Birchall, J., Bowern, C., Calladine, J., Deb, A., Diederen, A., Metsäranta, N. P., Araujo, L. H., Schembri, R., Hickey-Hall, J., Honkola, T., Mitchell, A., Poole, L., Rácz, P. M., Roberts, S. G., Ross, R. M., Thomas-Colquhoun, E., Evans, N., & Jordan, F. M. (2023). Kinbank: A global database of kinship terminology. PLOS ONE, 18: e0283218. doi:10.1371/journal.pone.0283218.

    Abstract

    For a single species, human kinship organization is both remarkably diverse and strikingly organized. Kinship terminology is the structured vocabulary used to classify, refer to, and address relatives and family. Diversity in kinship terminology has been analyzed by anthropologists for over 150 years, although recurrent patterning across cultures remains incompletely explained. Despite the wealth of kinship data in the anthropological record, comparative studies of kinship terminology are hindered by data accessibility. Here we present Kinbank, a new database of 210,903 kinterms from a global sample of 1,229 spoken languages. Using open-access and transparent data provenance, Kinbank offers an extensible resource for kinship terminology, enabling researchers to explore the rich diversity of human family organization and to test longstanding hypotheses about the origins and drivers of recurrent patterns. We illustrate our contribution with two examples. We demonstrate strong gender bias in the phonological structure of parent terms across 1,022 languages, and we show that there is no evidence for a coevolutionary relationship between cross-cousin marriage and bifurcate-merging terminology in Bantu languages. Analysing kinship data is notoriously challenging; Kinbank aims to eliminate data accessibility issues from that challenge and provide a platform to build an interdisciplinary understanding of kinship.

  • Paulat, N. S., Storer, J. M., Moreno-Santillán, D. D., Osmanski, A. B., Sullivan, K. A. M., Grimshaw, J. R., Korstian, J., Halsey, M., Garcia, C. J., Crookshanks, C., Roberts, J., Smit, A. F. A., Hubley, R., Rosen, J., Teeling, E. C., Vernes, S. C., Myers, E., Pippel, M., Brown, T., Hiller, M., Zoonomia Consortium, Rojas, D., Dávalos, L. M., Lindblad-Toh, K., Karlsson, E. K., & Ray, D. A. (2023). Chiropterans are a hotspot for horizontal transfer of DNA transposons in mammalia. Molecular Biology and Evolution, 40(5): msad092. doi:10.1093/molbev/msad092.

    Abstract

    Horizontal transfer of transposable elements (TEs) is an important mechanism contributing to genetic diversity and innovation. Bats (order Chiroptera) have repeatedly been shown to experience horizontal transfer of TEs at what appears to be a high rate compared with other mammals. We investigated the occurrence of horizontally transferred (HT) DNA transposons involving bats. We found over 200 putative HT elements within bats; 16 transposons were shared across distantly related mammalian clades, and 2 other elements were shared with a fish and two lizard species. Our results indicate that bats are a hotspot for horizontal transfer of DNA transposons. These events broadly coincide with the diversification of several bat clades, supporting the hypothesis that DNA transposon invasions have contributed to genetic diversification of bats.

  • Pederson, E., Danziger, E., Wilkins, D. G., Levinson, S. C., Kita, S., & Senft, G. (1998). Semantic typology and spatial conceptualization. Language, 74(3), 557-589. doi:10.2307/417793.
  • Pender, R., Fearon, P., St Pourcain, B., Heron, J., & Mandy, W. (2023). Developmental trajectories of autistic social traits in the general population. Psychological Medicine, 53(3), 814-822. doi:10.1017/S0033291721002166.

    Abstract

    Background

    Autistic people show diverse trajectories of autistic traits over time, a phenomenon labelled ‘chronogeneity’. For example, some show a decrease in symptoms, whilst others experience an intensification of difficulties. Autism spectrum disorder (ASD) is a dimensional condition, representing one end of a trait continuum that extends throughout the population. To date, no studies have investigated chronogeneity across the full range of autistic traits. We investigated the nature and clinical significance of autism trait chronogeneity in a large, general population sample.
    Methods

    Autistic social/communication traits (ASTs) were measured in the Avon Longitudinal Study of Parents and Children using the Social and Communication Disorders Checklist (SCDC) at ages 7, 10, 13 and 16 (N = 9744). We used Growth Mixture Modelling (GMM) to identify groups defined by their AST trajectories. Measures of ASD diagnosis, sex, IQ and mental health (internalising and externalising) were used to investigate external validity of the derived trajectory groups.
    Results

    The selected GMM model identified four AST trajectory groups: (i) Persistent High (2.3% of sample), (ii) Persistent Low (83.5%), (iii) Increasing (7.3%) and (iv) Decreasing (6.9%) trajectories. The Increasing group, in which females were a slight majority (53.2%), showed dramatic increases in SCDC scores during adolescence, accompanied by escalating internalising and externalising difficulties. Two-thirds (63.6%) of the Decreasing group were male.
    Conclusions

    Clinicians should note that for some young people autism-trait-like social difficulties first emerge during adolescence accompanied by problems with mood, anxiety, conduct and attention. A converse, majority-male group shows decreasing social difficulties during adolescence.
  • Perdue, C., & Klein, W. (1992). Why does the production of some learners not grammaticalize? Studies in Second Language Acquisition, 14, 259-272. doi:10.1017/S0272263100011116.

    Abstract

    In this paper we follow two beginning learners of English, Andrea and Santo, over a period of 2 years as they develop means to structure the declarative utterances they produce in various production tasks, and then we look at the following problem: In the early stages of acquisition, both learners develop a common learner variety; during these stages, we see a picture of two learner varieties developing similar regularities determined by the minimal requirements of the tasks we examine. Andrea subsequently develops further morphosyntactic means to achieve greater cohesion in his discourse. But Santo does not. Although we can identify contexts where the grammaticalization of Andrea's production allows him to go beyond the initial constraints of his variety, it is much more difficult to ascertain why Santo, faced with the same constraints in the same contexts, does not follow this path. Some lines of investigation into this problem are then suggested.
  • Petersson, K. M. (1998). Comments on a Monte Carlo approach to the analysis of functional neuroimaging data. NeuroImage, 8, 108-112.
  • Petrovic, P., Kalso, E., Petersson, K. M., & Ingvar, M. (2002). Placebo and opioid analgesia - Imaging a shared neuronal network. Science, 295(5560), 1737-1740. doi:10.1126/science.1067176.

    Abstract

    It has been suggested that placebo analgesia involves both higher order cognitive networks and endogenous opioid systems. The rostral anterior cingulate cortex (rACC) and the brainstem are implicated in opioid analgesia, suggesting a similar role for these structures in placebo analgesia. Using positron emission tomography, we confirmed that both opioid and placebo analgesia are associated with increased activity in the rACC. We also observed a covariation between the activity in the rACC and the brainstem during both opioid and placebo analgesia, but not during the pain-only condition. These findings indicate a related neural mechanism in placebo and opioid analgesia.
  • Petrovic, P., Petersson, K. M., Hansson, P., & Ingvar, M. (2002). A regression analysis study of the primary somatosensory cortex during pain. NeuroImage, 16(4), 1142-1150. doi:10.1006/nimg.2002.1069.

    Abstract

    Several functional imaging studies of pain, using a number of different experimental paradigms and a variety of reference states, have failed to detect activations in the somatosensory cortices, while other imaging studies of pain have reported significant activations in these regions. The role of the somatosensory areas in pain processing has therefore been debated. In the present study the left hand was immersed in painfully cold water (standard cold pressor test) and in nonpainfully cold water during 2 min, and PET-scans were obtained either during the first or the second minute of stimulation. We observed no significant increase of activity in the somatosensory regions when the painful conditions were directly compared with the control conditions. In order to better understand the role of the primary somatosensory cortex (S1) in pain processing we used a regression analysis to study the relation between a ROI (region of interest) in the somatotopic S1-area for the stimulated hand and other regions known to be involved in pain processing. We hypothesized that although no increased activity was observed in the S1 during pain, this region would change its covariation pattern during noxious input as compared to the control stimulation if it is involved in or affected by the processing of pain. In the nonpainful cold conditions widespread regions of the ipsilateral and contralateral somatosensory cortex showed a positive covariation with the activity in the S1-ROI. However, during the first and second minute of pain this regression was significantly attenuated. During the second minute of painful stimulation there was a significant positive covariation between the activity in the S1-ROI and the other regions that are known to be involved in pain processing. Importantly, this relation was significantly stronger for the insula and the orbitofrontal cortex bilaterally when compared to the nonpainful state. The results indicate that the S1-cortex may be engaged in or affected by the processing of pain although no differential activity is observed when pain is compared with the reference condition.
  • Piai, V., & Eikelboom, D. (2023). Brain areas critical for picture naming: A systematic review and meta-analysis of lesion-symptom mapping studies. Neurobiology of Language, 4(2), 280-296. doi:10.1162/nol_a_00097.

    Abstract

    Lesion-symptom mapping (LSM) studies have revealed brain areas critical for naming, typically finding significant associations between damage to left temporal, inferior parietal, and inferior frontal regions and impoverished naming performance. However, the specific subregions reported in the available literature vary. Hence, the aim of this study was to perform a systematic review and meta-analysis of published lesion-based findings, obtained from studies with unique cohorts investigating brain areas critical for accuracy in naming in stroke patients at least 1 month post-onset. An anatomic likelihood estimation (ALE) meta-analysis of these LSM studies was performed. Ten papers entered the ALE meta-analysis, with similar lesion coverage over left temporal and left inferior frontal areas. This small number is a major limitation of the present study. Clusters were found in the left anterior temporal lobe, in the posterior temporal lobe extending into inferior parietal areas, in line with the arcuate fasciculus, and in the pre- and postcentral gyri and middle frontal gyrus. No clusters were found in the left inferior frontal gyrus. These results were further substantiated by examining five naming studies that investigated performance beyond global accuracy, corroborating the ALE meta-analysis results. The present review and meta-analysis highlight the involvement of left temporal and inferior parietal cortices in naming, and of mid to posterior portions of the temporal lobe in particular in conceptual-lexical retrieval for speaking.