Publications

  • Lammertink, I., Casillas, M., Benders, T., Post, B., & Fikkert, P. (2015). Dutch and English toddlers' use of linguistic cues in predicting upcoming turn transitions. Frontiers in Psychology, 6: 495. doi:10.3389/fpsyg.2015.00495.
  • Lartseva, A., Dijkstra, T., & Buitelaar, J. (2015). Emotional language processing in Autism Spectrum Disorders: A systematic review. Frontiers in Human Neuroscience, 8: 991. doi:10.3389/fnhum.2014.00991.

    Abstract

    In his first description of Autism Spectrum Disorders (ASD), Kanner emphasized emotional impairments by characterizing children with ASD as indifferent to other people, self-absorbed, emotionally cold, distanced, and retracted. Thereafter, emotional impairments became regarded as part of the social impairments of ASD, and research mostly focused on understanding how individuals with ASD recognize visual expressions of emotions from faces and body postures. However, it still remains unclear how emotions are processed outside of the visual domain. This systematic review aims to fill this gap by focusing on impairments of emotional language processing in ASD.
    We systematically searched PubMed for papers published between 1990 and 2013 using standardized search terms. Studies show that people with ASD are able to correctly classify emotional language stimuli as emotionally positive or negative. However, processing of emotional language stimuli in ASD is associated with atypical patterns of attention and memory performance, as well as abnormal physiological and neural activity. Particularly, younger children with ASD have difficulties in acquiring and developing emotional concepts, and avoid using these in discourse. These emotional language impairments were not consistently associated with age, IQ, or level of development of language skills.
    We discuss how emotional language impairments fit with existing cognitive theories of ASD, such as central coherence, executive dysfunction, and weak Theory of Mind. We conclude that emotional impairments in ASD may be broader than just a mere consequence of social impairments, and should receive more attention in future research.
  • Lausberg, H., & Kita, S. (2002). Dissociation of right and left hand gesture spaces in split-brain patients. Cortex, 38(5), 883-886. doi:10.1016/S0010-9452(08)70062-5.

    Abstract

    The present study investigates hemispheric specialisation in the use of space in communicative gestures. For this purpose, we investigate split-brain patients in whom spontaneous and distinct right hand gestures can only be controlled by the left hemisphere and vice versa, the left hand only by the right hemisphere. On this anatomical basis, we can infer hemispheric specialisation from the performances of the right and left hands. In contrast to left hand dyspraxia in tasks that require language processing, split-brain patients utilise their left hands in a meaningful way in visuo-constructive tasks such as copying drawings or block-design. Therefore, we conjecture that split-brain patients are capable of using their left hands for the communication of the content of visuo-spatial animations via gestural demonstration. On this basis, we further examine the use of space in communicative gestures by the right and left hands. McNeill and Pedelty (1995) noted for the split-brain patient N.G. that her iconic right hand gestures were exclusively displayed in the right personal space. The present study investigates systematically if there is indication for neglect of the left personal space in right hand gestures in split-brain patients.
  • Lee, S. A., Ferrari, A., Vallortigara, G., & Sovrano, V. A. (2015). Boundary primacy in spatial mapping: Evidence from zebrafish (Danio rerio). Behavioural Processes, 119, 116-122. doi:10.1016/j.beproc.2015.07.012.

    Abstract

    The ability to map locations in the surrounding environment is crucial for any navigating animal. Decades of research on mammalian spatial representations suggest that environmental boundaries play a major role in both navigation behavior and hippocampal place coding. Although the capacity for spatial mapping is shared among vertebrates, including birds and fish, it is not yet clear whether such similarities in competence reflect common underlying mechanisms. The present study tests cue specificity in spatial mapping in zebrafish, by probing their use of various visual cues to encode the location of a nearby conspecific. The results suggest that untrained zebrafish, like other vertebrates tested so far, rely primarily on environmental boundaries to compute spatial relationships and, at the same time, use other visible features such as surface markings and freestanding objects as local cues to goal locations. We propose that the pattern of specificity in spontaneous spatial mapping behavior across vertebrates reveals cross-species commonalities in its underlying neural representations.
  • Leitner, C., D’Este, G., Verga, L., Rahayel, S., Mombelli, S., Sforza, M., Casoni, F., Zucconi, M., Ferini-Strambi, L., & Galbiati, A. (2024). Neuropsychological changes in isolated REM sleep behavior disorder: A systematic review and meta-analysis of cross-sectional and longitudinal studies. Neuropsychology Review, 34(1), 41-66. doi:10.1007/s11065-022-09572-1.

    Abstract

    The aim of this meta-analysis is twofold: (a) to assess cognitive impairments in isolated rapid eye movement (REM) sleep behavior disorder (iRBD) patients compared to healthy controls (HC); (b) to quantitatively estimate the risk of developing a neurodegenerative disease in iRBD patients according to baseline cognitive assessment. To address the first aim, cross-sectional studies including polysomnography-confirmed iRBD patients, HC, and reporting neuropsychological testing were included. To address the second aim, longitudinal studies including polysomnography-confirmed iRBD patients, reporting baseline neuropsychological testing for converted and still isolated patients separately were included. The literature search was conducted based on PRISMA guidelines and the protocol was registered at PROSPERO (CRD42021253427). Cross-sectional and longitudinal studies were searched from PubMed, Web of Science, Scopus, and Embase databases. Publication bias and statistical heterogeneity were assessed respectively by funnel plot asymmetry and using I2. Finally, a random-effect model was performed to pool the included studies. 75 cross-sectional (2,398 HC and 2,460 iRBD patients) and 11 longitudinal (495 iRBD patients) studies were selected. Cross-sectional studies showed that iRBD patients performed significantly worse in cognitive screening scores (random-effects (RE) model = –0.69), memory (RE model = –0.64), and executive function (RE model = –0.50) domains compared to HC. The survival analyses conducted for longitudinal studies revealed that lower executive function and language performance, as well as the presence of mild cognitive impairment (MCI), at baseline were associated with an increased risk of conversion at follow-up. Our study underlines the importance of a comprehensive neuropsychological assessment in the context of iRBD.

    Additional information

    Figure 1; tables
  • De León, L., & Levinson, S. C. (Eds.). (1992). Space in Mesoamerican languages [Special Issue]. Zeitschrift für Phonetik, Sprachwissenschaft und Kommunikationsforschung, 45(6).
  • Leonetti, S., Cimarelli, G., Hersh, T. A., & Ravignani, A. (2024). Why do dogs wag their tails? Biology Letters, 20(1): 20230407. doi:10.1098/rsbl.2023.0407.

    Abstract

    Tail wagging is a conspicuous behaviour in domestic dogs (Canis familiaris). Despite how much meaning humans attribute to this display, its quantitative description and evolutionary history are rarely studied. We summarize what is known about the mechanism, ontogeny, function and evolution of this behaviour. We suggest two hypotheses to explain its increased occurrence and frequency in dogs compared to other canids. During the domestication process, enhanced rhythmic tail wagging behaviour could have (i) arisen as a by-product of selection for other traits, such as docility and tameness, or (ii) been directly selected by humans, due to our proclivity for rhythmic stimuli. We invite testing of these hypotheses through neurobiological and ethological experiments, which will shed light on one of the most readily observed yet understudied animal behaviours. Targeted tail wagging research can be a window into both canine ethology and the evolutionary history of characteristic human traits, such as our ability to perceive and produce rhythmic behaviours.
  • Lev-Ari, S. (2015). Comprehending non-native speakers: Theory and evidence for adjustment in manner of processing. Frontiers in Psychology, 5: 1546. doi:10.3389/fpsyg.2014.01546.

    Abstract

    Non-native speakers have lower linguistic competence than native speakers, which renders their language less reliable in conveying their intentions. We suggest that expectations of lower competence lead listeners to adapt their manner of processing when they listen to non-native speakers. We propose that listeners use cognitive resources to adjust by increasing their reliance on top-down processes and extracting less information from the language of the non-native speaker. An eye-tracking study supports our proposal by showing that when following instructions by a non-native speaker, listeners make more contextually-induced interpretations. Those with relatively high working memory also increase their reliance on context to anticipate the speaker’s upcoming reference, and are less likely to notice lexical errors in the non-native speech, indicating that they take less information from the speaker’s language. These results contribute to our understanding of the flexibility in language processing and have implications for interactions between native and non-native speakers.

    Additional information

    Data Sheet 1.docx
  • Levelt, W. J. M. (2002). Picture naming and word frequency: Comments on Alario, Costa and Caramazza, Language and Cognitive Processes, 17(3), 299-319. Language and Cognitive Processes, 17(6), 663-671. doi:10.1080/01690960143000443.

    Abstract

    This commentary on Alario et al. (2002) addresses two issues: (1) Different from what the authors suggest, there are no theories of production claiming the phonological word to be the upper bound of advance planning before the onset of articulation; (2) Their picture naming study of word frequency effects on speech onset is inconclusive by lack of a crucial control, viz., of object recognition latency. This is a perennial problem in picture naming studies of word frequency and age of acquisition effects.
  • Levelt, W. J. M. (1992). Accessing words in speech production: Stages, processes and representations. Cognition, 42, 1-22. doi:10.1016/0010-0277(92)90038-J.

    Abstract

    This paper introduces a special issue of Cognition on lexical access in speech production. Over the last quarter century, the psycholinguistic study of speaking, and in particular of accessing words in speech, received a major new impetus from the analysis of speech errors, dysfluencies and hesitations, from aphasiology, and from new paradigms in reaction time research. The emerging theoretical picture partitions the accessing process into two subprocesses, the selection of an appropriate lexical item (a “lemma”) from the mental lexicon, and the phonological encoding of that item, that is, the computation of a phonetic program for the item in the context of utterance. These two theoretical domains are successively introduced by outlining some core issues that have been or still have to be addressed. The final section discusses the controversial question whether phonological encoding can affect lexical selection. This partitioning is also followed in this special issue as a whole. There are, first, four papers on lexical selection, then three papers on phonological encoding, and finally one on the interaction between selection and phonological encoding.
  • Levelt, W. J. M., Praamstra, P., Meyer, A. S., Helenius, P., & Salmelin, R. (1998). An MEG study of picture naming. Journal of Cognitive Neuroscience, 10(5), 553-567. doi:10.1162/089892998562960.

    Abstract

    The purpose of this study was to relate a psycholinguistic processing model of picture naming to the dynamics of cortical activation during picture naming. The activation was recorded from eight Dutch subjects with a whole-head neuromagnetometer. The processing model, based on extensive naming latency studies, is a stage model. In preparing a picture's name, the speaker performs a chain of specific operations. They are, in this order, computing the visual percept, activating an appropriate lexical concept, selecting the target word from the mental lexicon, phonological encoding, phonetic encoding, and initiation of articulation. The time windows for each of these operations are reasonably well known and could be related to the peak activity of dipole sources in the individual magnetic response patterns. The analyses showed a clear progression over these time windows from early occipital activation, via parietal and temporal to frontal activation. The major specific findings were that (1) a region in the left posterior temporal lobe, agreeing with the location of Wernicke's area, showed prominent activation starting about 200 msec after picture onset and peaking at about 350 msec, (i.e., within the stage of phonological encoding), and (2) a consistent activation was found in the right parietal cortex, peaking at about 230 msec after picture onset, thus preceding and partly overlapping with the left temporal response. An interpretation in terms of the management of visual attention is proposed.
  • Levelt, W. J. M. (1992). Fairness in reviewing: A reply to O'Connell. Journal of Psycholinguistic Research, 21, 401-403.
  • Levelt, W. J. M., & Schiller, N. O. (1998). Is the syllable frame stored? [Commentary on the BBS target article 'The frame/content theory of evolution of speech production' by Peter F. MacNeilage]. Behavioral and Brain Sciences, 21, 520.

    Abstract

    This commentary discusses whether abstract metrical frames are stored. For stress-assigning languages (e.g., Dutch and English), which have a dominant stress pattern, metrical frames are stored only for words that deviate from the default stress pattern. The majority of the words in these languages are produced without retrieving any independent syllabic or metrical frame.
  • Levelt, W. J. M. (1967). Note on the distribution of dominance times in binocular rivalry. British Journal of Psychology, 58, 143-145.
  • Levelt, W. J. M. (1992). Sprachliche Musterbildung und Mustererkennung. Nova Acta Leopoldina NF, 67(281), 357-370.
  • Levelt, W. J. M. (1992). The perceptual loop theory not disconfirmed: A reply to MacKay. Consciousness and Cognition, 1, 226-230. doi:10.1016/1053-8100(92)90062-F.

    Abstract

    In his paper, MacKay reviews his Node Structure theory of error detection, but precedes it with a critical discussion of the Perceptual Loop theory of self-monitoring proposed in Levelt (1983, 1989). The present commentary is concerned with this latter critique and shows that there are more than casual problems with MacKay’s argumentation.
  • Levelt, W. J. M. (1998). The genetic perspective in psycholinguistics, or: Where do spoken words come from? Journal of Psycholinguistic Research, 27(2), 167-180. doi:10.1023/A:1023245931630.

    Abstract

    The core issue in the 19th-century sources of psycholinguistics was the question, "Where does language come from?" This genetic perspective unified the study of the ontogenesis, the phylogenesis, the microgenesis, and to some extent the neurogenesis of language. This paper makes the point that this original perspective is still a valid and attractive one. It is exemplified by a discussion of the genesis of spoken words.
  • Levinson, S. C., Kita, S., Haun, D. B. M., & Rasch, B. H. (2002). Returning the tables: Language affects spatial reasoning. Cognition, 84(2), 155-188. doi:10.1016/S0010-0277(02)00045-8.

    Abstract

    Li and Gleitman (Turning the tables: language and spatial reasoning. Cognition, in press) seek to undermine a large-scale cross-cultural comparison of spatial language and cognition which claims to have demonstrated that language and conceptual coding in the spatial domain covary (see, for example, Space in language and cognition: explorations in linguistic diversity. Cambridge: Cambridge University Press, in press; Language 74 (1998) 557): the most plausible interpretation is that different languages induce distinct conceptual codings. Arguing against this, Li and Gleitman attempt to show that in an American student population they can obtain any of the relevant conceptual codings just by varying spatial cues, holding language constant. They then argue that our findings are better interpreted in terms of ecologically-induced distinct cognitive styles reflected in language. Linguistic coding, they argue, has no causal effects on non-linguistic thinking – it simply reflects antecedently existing conceptual distinctions. We here show that Li and Gleitman did not make a crucial distinction between frames of spatial reference relevant to our line of research. We report a series of experiments designed to show that they have, as a consequence, misinterpreted the results of their own experiments, which are in fact in line with our hypothesis. Their attempts to reinterpret the large cross-cultural study, and to enlist support from animal and infant studies, fail for the same reasons. We further try to discern exactly what theory drives their presumption that language can have no cognitive efficacy, and conclude that their position is undermined by a wide range of considerations.
  • Levinson, S. C. (2002). Time for a linguistic anthropology of time. Current Anthropology, 43(4), S122-S123. doi:10.1086/342214.
  • Levinson, S. C. (2015). John Joseph Gumperz (1922–2013) [Obituary]. American Anthropologist, 117(1), 212-224. doi:10.1111/aman.12185.
  • Levinson, S. C. (2015). Other-initiated repair in Yélî Dnye: Seeing eye-to-eye in the language of Rossel Island. Open Linguistics, 1(1), 386-410. doi:10.1515/opli-2015-0009.

    Abstract

    Other-initiated repair (OIR) is the fundamental back-up system that ensures the effectiveness of human communication in its primordial niche, conversation. This article describes the interactional and linguistic patterns involved in other-initiated repair in Yélî Dnye, the Papuan language of Rossel Island, Papua New Guinea. The structure of the article is based on the conceptual set of distinctions described in Chapters 1 and 2 of the special issue, and describes the major properties of the Rossel Island system, and the ways in which OIR in this language both conforms to familiar European patterns and deviates from those patterns. Rossel Island specialities include lack of a Wh-word open class repair initiator, and a heavy reliance on visual signals that makes it possible both to initiate repair and confirm it non-verbally. But the overall system conforms to universal expectations.
  • Levinson, S. C. (1992). Primer for the field investigation of spatial description and conception. Pragmatics, 2(1), 5-47.
  • Levinson, S. C. (1998). Studying spatial conceptualization across cultures: Anthropology and cognitive science. Ethos, 26(1), 7-24. doi:10.1525/eth.1998.26.1.7.

    Abstract

    Philosophers, psychologists, and linguists have argued that spatial conception is pivotal to cognition in general, providing a general, egocentric, and universal framework for cognition as well as metaphors for conceptualizing many other domains. But in an aboriginal community in Northern Queensland, a system of cardinal directions informs not only language, but also memory for arbitrary spatial arrays and directions. This work suggests that fundamental cognitive parameters, like the system of coding spatial locations, can vary cross-culturally, in line with the language spoken by a community. This opens up the prospect of a fruitful dialogue between anthropology and the cognitive sciences on the complex interaction between cultural and universal factors in the constitution of mind.
  • Levinson, S. C., & Torreira, F. (2015). Timing in turn-taking and its implications for processing models of language. Frontiers in Psychology, 6: 731. doi:10.3389/fpsyg.2015.00731.

    Abstract

    The core niche for language use is in verbal interaction, involving the rapid exchange of turns at talking. This paper reviews the extensive literature about this system, adding new statistical analyses of behavioural data where they have been missing, demonstrating that turn-taking has the systematic properties originally noted by Sacks, Schegloff and Jefferson (1974; hereafter SSJ). This system poses some significant puzzles for current theories of language processing: the gaps between turns are short (of the order of 200 ms), but the latencies involved in language production are much longer (over 600 ms). This seems to imply that participants in conversation must predict (or ‘project’ as SSJ have it) the end of the current speaker’s turn in order to prepare their response in advance. This in turn implies some overlap between production and comprehension despite their use of common processing resources. Collecting together what is known behaviourally and experimentally about the system, the space for systematic explanations of language processing for conversation can be significantly narrowed, and we sketch a first model of the mental processes involved for the participant preparing to speak next.
  • Levshina, N., Koptjevskaja-Tamm, M., & Östling, R. (2024). Revered and reviled: A sentiment analysis of female and male referents in three languages. Frontiers in Communication, 9: 1266407. doi:10.3389/fcomm.2024.1266407.

    Abstract

    Our study contributes to the less explored domain of lexical typology, focusing on semantic prosody and connotation. Semantic derogation, or pejoration of nouns referring to women, whereby such words acquire connotations and further denotations of social pejoration, immorality and/or loose sexuality, has been a very prominent question in studies on gender and language (change). It has been argued that pejoration emerges due to the general derogatory attitudes toward female referents. However, the evidence for systematic differences in connotations of female- vs. male-related words is fragmentary and often fairly impressionistic; moreover, many researchers argue that expressed sentiments toward women (as well as men) often are ambivalent. One should also expect gender differences in connotations to have decreased in the recent years, thanks to the advances of feminism and social progress. We test these ideas in a study of positive and negative connotations of feminine and masculine term pairs such as woman - man, girl - boy, wife - husband, etc. Sentences containing these words were sampled from diachronic corpora of English, Chinese and Russian, and sentiment scores for every word were obtained using two systems for Aspect-Based Sentiment Analysis: PyABSA, and OpenAI’s large language model GPT-3.5. The Generalized Linear Mixed Models of our data provide no indications of significantly more negative sentiment toward female referents in comparison with their male counterparts. However, some of the models suggest that female referents are more infrequently associated with neutral sentiment than male ones. Neither do our data support the hypothesis of the diachronic convergence between the genders. In sum, results suggest that pejoration is unlikely to be explained simply by negative attitudes to female referents in general.

    Additional information

    supplementary materials
  • Lewis, A. G., & Bastiaansen, M. C. M. (2015). A predictive coding framework for rapid neural dynamics during sentence-level language comprehension. Cortex, 68, 155-168. doi:10.1016/j.cortex.2015.02.014.

    Abstract

    There is a growing literature investigating the relationship between oscillatory neural dynamics measured using EEG and/or MEG, and sentence-level language comprehension. Recent proposals have suggested a strong link between predictive coding accounts of the hierarchical flow of information in the brain, and oscillatory neural dynamics in the beta and gamma frequency ranges. We propose that findings relating beta and gamma oscillations to sentence-level language comprehension might be unified under such a predictive coding account. Our suggestion is that oscillatory activity in the beta frequency range may reflect both the active maintenance of the current network configuration responsible for representing the sentence-level meaning under construction, and the top-down propagation of predictions to hierarchically lower processing levels based on that representation. In addition, we suggest that oscillatory activity in the low and middle gamma range reflect the matching of top-down predictions with bottom-up linguistic input, while evoked high gamma might reflect the propagation of bottom-up prediction errors to higher levels of the processing hierarchy. We also discuss some of the implications of this predictive coding framework, and we outline ideas for how these might be tested experimentally.
  • Lewis, A. G., Wang, L., & Bastiaansen, M. C. M. (2015). Fast oscillatory dynamics during language comprehension: Unification versus maintenance and prediction? Brain and Language, 148, 51-63. doi:10.1016/j.bandl.2015.01.003.

    Abstract

    The role of neuronal oscillations during language comprehension is not yet well understood. In this paper we review and reinterpret the functional roles of beta- and gamma-band oscillatory activity during language comprehension at the sentence and discourse level. We discuss the evidence in favor of a role for beta and gamma in unification (the unification hypothesis), and in light of mounting evidence that cannot be accounted for under this hypothesis, we explore an alternative proposal linking beta and gamma oscillations to maintenance and prediction (respectively) during language comprehension. Our maintenance/prediction hypothesis is able to account for most of the findings that are currently available relating beta and gamma oscillations to language comprehension, and is in good agreement with other proposals about the roles of beta and gamma in domain-general cognitive processing. In conclusion we discuss proposals for further testing and comparing the prediction and unification hypotheses.
  • Lima, C. F., Lavan, N., Evans, S., Agnew, Z., Halpern, A. R., Shanmugalingam, P., Meekings, S., Boebinger, D., Ostarek, M., McGettigan, C., Warren, J. E., & Scott, S. K. (2015). Feel the Noise: Relating individual differences in auditory imagery to the structure and function of sensorimotor systems. Cerebral Cortex, 25, 4638-4650. doi:10.1093/cercor/bhv134.

    Abstract

    Humans can generate mental auditory images of voices or songs, sometimes perceiving them almost as vividly as perceptual experiences. The functional networks supporting auditory imagery have been described, but less is known about the systems associated with interindividual differences in auditory imagery. Combining voxel-based morphometry and fMRI, we examined the structural basis of interindividual differences in how auditory images are subjectively perceived, and explored associations between auditory imagery, sensory-based processing, and visual imagery. Vividness of auditory imagery correlated with gray matter volume in the supplementary motor area (SMA), parietal cortex, medial superior frontal gyrus, and middle frontal gyrus. An analysis of functional responses to different types of human vocalizations revealed that the SMA and parietal sites that predict imagery are also modulated by sound type. Using representational similarity analysis, we found that higher representational specificity of heard sounds in SMA predicts vividness of imagery, indicating a mechanistic link between sensory- and imagery-based processing in sensorimotor cortex. Vividness of imagery in the visual domain also correlated with SMA structure, and with auditory imagery scores. Altogether, these findings provide evidence for a signature of imagery in brain structure, and highlight a common role of perceptual–motor interactions for processing heard and internally generated auditory information.
  • Liszkowski, U., & Ramenzoni, V. C. (2015). Pointing to nothing? Empty places prime infants' attention to absent objects. Infancy, 20, 433-444. doi:10.1111/infa.12080.

    Abstract

    People routinely point to empty space when referring to absent entities. These points to "nothing" are meaningful because they direct attention to places that stand in for specific entities. Typically, the meaning of places in terms of absent referents is established through preceding discourse and accompanying language. However, it is unknown whether nonlinguistic actions can establish locations as meaningful places, and whether infants have the capacity to represent a place as standing in for an object. In a novel eye-tracking paradigm, 18-month-olds watched objects being placed in specific locations. Then, the objects disappeared and a point directed infants' attention to an emptied place. The point to the empty place primed infants in a subsequent scene (in which the objects appeared at novel locations) to look more to the object belonging to the indicated place than to a distracter referent. The place-object expectations were strong enough to interfere when reversing the place-object associations. Findings show that infants comprehend nonlinguistic reference to absent entities, which reveals an ontogenetic early, nonverbal understanding of places as representations of absent objects.
  • Lockwood, G., & Dingemanse, M. (2015). Iconicity in the lab: A review of behavioural, developmental, and neuroimaging research into sound-symbolism. Frontiers in Psychology, 6: 1246. doi:10.3389/fpsyg.2015.01246.

    Abstract

    This review covers experimental approaches to sound-symbolism—from infants to adults, and from Sapir’s foundational studies to twenty-first century product naming. It synthesizes recent behavioral, developmental, and neuroimaging work into a systematic overview of the cross-modal correspondences that underpin iconic links between form and meaning. It also identifies open questions and opportunities, showing how the future course of experimental iconicity research can benefit from an integrated interdisciplinary perspective. Combining insights from psychology and neuroscience with evidence from natural languages provides us with opportunities for the experimental investigation of the role of sound-symbolism in language learning, language processing, and communication. The review finishes by describing how hypothesis-testing and model-building will help contribute to a cumulative science of sound-symbolism in human language.
  • Lockwood, G., & Tuomainen, J. (2015). Ideophones in Japanese modulate the P2 and late positive complex responses. Frontiers in Psychology, 6: 933. doi:10.3389/fpsyg.2015.00933.

    Abstract

    Sound-symbolism, or the direct link between sound and meaning, is typologically and behaviorally attested across languages. However, neuroimaging research has mostly focused on artificial non-words or individual segments, which do not represent sound-symbolism in natural language. We used EEG to compare Japanese ideophones, which are phonologically distinctive sound-symbolic lexical words, and arbitrary adverbs during a sentence reading task. Ideophones elicited a larger visual P2 response and a sustained late positive complex in comparison to arbitrary adverbs. These results and previous literature suggest that the larger P2 may indicate the integration of sound and sensory information by association in response to the distinctive phonology of ideophones. The late positive complex may reflect the facilitated lexical retrieval of ideophones in comparison to arbitrary words. This account provides new evidence that ideophones exhibit similar cross-modal correspondences to those which have been proposed for non-words and individual sounds, and that these effects are detectable in natural language.
  • Loke*, J., Seijdel*, N., Snoek, L., Sorensen, L., Van de Klundert, R., Van der Meer, M., Quispel, E., Cappaert, N., & Scholte, H. S. (2024). Human visual cortex and deep convolutional neural network care deeply about object background. Journal of Cognitive Neuroscience, 36(3), 551-566. doi:10.1162/jocn_a_02098.

    Abstract

    * These authors contributed equally/shared first author
    Deep convolutional neural networks (DCNNs) are able to partially predict brain activity during object categorization tasks, but factors contributing to this predictive power are not fully understood. Our study aimed to investigate the factors contributing to the predictive power of DCNNs in object categorization tasks. We compared the activity of four DCNN architectures with EEG recordings obtained from 62 human participants during an object categorization task. Previous physiological studies on object categorization have highlighted the importance of figure-ground segregation—the ability to distinguish objects from their backgrounds. Therefore, we investigated whether figure-ground segregation could explain the predictive power of DCNNs. Using a stimulus set consisting of identical target objects embedded in different backgrounds, we examined the influence of object background versus object category within both EEG and DCNN activity. Crucially, the recombination of naturalistic objects and experimentally controlled backgrounds creates a challenging and naturalistic task, while retaining experimental control. Our results showed that early EEG activity (< 100 msec) and early DCNN layers represent object background rather than object category. We also found that the ability of DCNNs to predict EEG activity is primarily influenced by how both systems process object backgrounds, rather than object categories. We demonstrated the role of figure-ground segregation as a potential prerequisite for recognition of object features, by contrasting the activations of trained and untrained (i.e., random weights) DCNNs. These findings suggest that both human visual cortex and DCNNs prioritize the segregation of object backgrounds and target objects to perform object categorization. Altogether, our study provides new insights into the mechanisms underlying object categorization as we demonstrated that both human visual cortex and DCNNs care deeply about object background.

    Additional information

    link to preprint
  • Long, M., Rohde, H., Oraa Ali, M., & Rubio-Fernandez, P. (2024). The role of cognitive control and referential complexity on adults’ choice of referring expressions: Testing and expanding the referential complexity scale. Journal of Experimental Psychology: Learning, Memory, and Cognition, 50(1), 109-136. doi:10.1037/xlm0001273.

    Abstract

    This study aims to advance our understanding of the nature and source(s) of individual differences in pragmatic language behavior over the adult lifespan. Across four story continuation experiments, we probed adults’ (N = 496 participants, ages 18–82) choice of referential forms (i.e., names vs. pronouns to refer to the main character). Our manipulations were based on Fossard et al.’s (2018) scale of referential complexity which varies according to the visual properties of the scene: low complexity (one character), intermediate complexity (two characters of different genders), and high complexity (two characters of the same gender). Since pronouns signal topic continuity (i.e., that the discourse will continue to be about the same referent), the use of pronouns is expected to decrease as referential complexity increases. The choice of names versus pronouns, therefore, provides insight into participants’ perception of the topicality of a referent, and whether that varies by age and cognitive capacity. In Experiment 1, we used the scale to test the association between referential choice, aging, and cognition, identifying a link between older adults’ switching skills and optimal referential choice. In Experiments 2–4, we tested novel manipulations that could impact the scale and found both the timing of a competitor referent’s presence and emphasis placed on competitors modulated referential choice, leading us to refine the scale for future use. Collectively, Experiments 1–4 highlight what type of contextual information is prioritized at different ages, revealing older adults’ preserved sensitivity to (visual) scene complexity but reduced sensitivity to linguistic prominence cues, compared to younger adults.
  • Long, M., MacPherson, S. E., & Rubio-Fernandez, P. (2024). Prosocial speech acts: Links to pragmatics and aging. Developmental Psychology, 60(3), 491-504. doi:10.1037/dev0001725.

    Abstract

    This study investigated how adults over the lifespan flexibly adapt their use of prosocial speech acts when conveying bad news to communicative partners. Experiment 1a (N = 100 Scottish adults aged 18–72 years) assessed whether participants’ use of prosocial speech acts varied according to audience design considerations (i.e., whether or not the recipient of the news was directly affected). Experiment 1b (N = 100 Scottish adults aged 19–70 years) assessed whether participants adjusted for whether the bad news was more or less severe (an index of general knowledge). Younger adults displayed more flexible adaptation to the recipient manipulation, while no age differences were found for severity. These findings are consistent with prior work showing age-related decline in audience design but not in the use of general knowledge during language production. Experiment 2 further probed younger adults (N = 40, Scottish, aged 18–37 years) and older adults’ (N = 40, Scottish, aged 70–89 years) prosocial linguistic behavior by investigating whether health (vs. nonhealth-related) matters would affect responses. While older adults used prosocial speech acts to a greater extent than younger adults, they did not distinguish between conditions. Our results suggest that prosocial linguistic behavior is likely influenced by a combination of differences in audience design and communicative styles at different ages. Collectively, these findings highlight the importance of situating prosocial speech acts within the pragmatics and aging literature, allowing us to uncover the factors modulating prosocial linguistic behavior at different developmental stages.

    Additional information

    figures
  • Love, B. C., Kopeć, Ł., & Guest, O. (2015). Optimism bias in fans and sports reporters. PLoS One, 10(9): e0137685. doi:10.1371/journal.pone.0137685.

    Abstract

    People are optimistic about their prospects relative to others. However, existing studies can be difficult to interpret because outcomes are not zero-sum. For example, one person avoiding cancer does not necessitate that another person develops cancer. Ideally, optimism bias would be evaluated within a closed formal system to establish with certainty the extent of the bias and the associated environmental factors, such that optimism bias is demonstrated when a population is internally inconsistent. Accordingly, we asked NFL fans to predict how many games teams they liked and disliked would win in the 2015 season. Fans, like ESPN reporters assigned to cover a team, were overly optimistic about their team’s prospects. The opposite pattern was found for teams that fans disliked. Optimism may flourish because year-to-year team results are marked by auto-correlation and regression to the group mean (i.e., good teams stay good, but bad teams improve).

    Additional information

    raw data
  • Lozano, R., Vino, A., Lozano, C., Fisher, S. E., & Deriziotis, P. (2015). A de novo FOXP1 variant in a patient with autism, intellectual disability and severe speech and language impairment. European Journal of Human Genetics, 23, 1702-1707. doi:10.1038/ejhg.2015.66.

    Abstract

    FOXP1 (forkhead box protein P1) is a transcription factor involved in the development of several tissues, including the brain. An emerging phenotype of patients with protein-disrupting FOXP1 variants includes global developmental delay, intellectual disability and mild to severe speech/language deficits. We report on a female child with a history of severe hypotonia, autism spectrum disorder and mild intellectual disability with severe speech/language impairment. Clinical exome sequencing identified a heterozygous de novo FOXP1 variant c.1267_1268delGT (p.V423Hfs*37). Functional analyses using cellular models show that the variant disrupts multiple aspects of FOXP1 activity, including subcellular localization and transcriptional repression properties. Our findings highlight the importance of performing functional characterization to help uncover the biological significance of variants identified by genomics approaches, thereby providing insight into pathways underlying complex neurodevelopmental disorders. Moreover, our data support the hypothesis that de novo variants represent significant causal factors in severe sporadic disorders and extend the phenotype seen in individuals with FOXP1 haploinsufficiency.
  • Lutzenberger, H., Casillas, M., Fikkert, P., Crasborn, O., & De Vos, C. (2024). More than looks: Exploring methods to test phonological discrimination in the sign language Kata Kolok. Language Learning and Development. Advance online publication. doi:10.1080/15475441.2023.2277472.

    Abstract

    The lack of diversity in the language sciences has increasingly been criticized as it holds the potential for producing flawed theories. Research on (i) geographically diverse language communities and (ii) on sign languages is necessary to corroborate, sharpen, and extend existing theories. This study contributes a case study of adapting a well-established paradigm to study the acquisition of sign phonology in Kata Kolok, a sign language of rural Bali, Indonesia. We conducted an experiment modeled after the familiarization paradigm with child signers of Kata Kolok. Traditional analyses of looking time did not yield significant differences between signing and non-signing children. Yet, additional behavioral analyses (attention, eye contact, hand behavior) suggest that children who are signers and those who are non-signers, as well as those who are hearing and those who are deaf, interact differently with the task. This study suggests limitations of the paradigm due to the ecology of sign languages and the sociocultural characteristics of the sample, calling for a mixed-methods approach. Ultimately, this paper aims to elucidate the diversity of adaptations necessary for experimental design, procedure, and analysis, and to offer a critical reflection on the contribution of similar efforts and the diversification of the field.
  • Lutzenberger, H., De Wael, L., Omardeen, R., & Dingemanse, M. (2024). Interactional infrastructure across modalities: A comparison of repair initiators and continuers in British Sign Language and British English. Sign Language Studies, 24(3), 548-581. doi:10.1353/sls.2024.a928056.

    Abstract

    Minimal expressions are at the heart of interaction: Interjections like "Huh?" and "Mhm" keep conversations flowing by establishing and reinforcing intersubjectivity among interlocutors. Crosslinguistic research has identified that similar interactional pressures can yield structurally similar words (e.g., to initiate repair across languages). While crosslinguistic comparisons that include signed languages remain uncommon, recent work has revealed similarities in discourse management strategies among signers and speakers that share much of their cultural background. This study contributes a crossmodal comparison of repair initiators and continuers in speakers of English and signers of British Sign Language (BSL). We combine qualitative and quantitative analyses of data from sixteen English speakers and sixteen BSL signers, resulting in the following: First, the interactional infrastructure drawn upon by speakers and signers overwhelmingly relies on behaviors of the head, face, and body; these are used alone or sometimes in combination with verbal elements (i.e., spoken words or manual signs), while verbal strategies alone are rare. Second, discourse management strategies are remarkably similar in form across the two languages: A held eye gaze or freeze-look is the predominant repair initiator and head nodding the main continuer. These results suggest a modality-agnostic preference for visual strategies that do not occupy the primary articulators, one that we propose is founded in recipiency; people maintain the flow of communication following principles of minimal effort and minimal interruption.
  • Maess, B., Friederici, A. D., Damian, M., Meyer, A. S., & Levelt, W. J. M. (2002). Semantic category interference in overt picture naming: Sharpening current density localization by PCA. Journal of Cognitive Neuroscience, 14(3), 455-462. doi:10.1162/089892902317361967.

    Abstract

    The study investigated the neuronal basis of the retrieval of words from the mental lexicon. The semantic category interference effect was used to locate lexical retrieval processes in time and space. This effect reflects the finding that, for overt naming, volunteers are slower when naming pictures out of a sequence of items from the same semantic category than from different categories. Participants named pictures blockwise either in the context of same- or mixed-category items while the brain response was registered using magnetoencephalography (MEG). Fifteen out of 20 participants showed longer response latencies in the same-category compared to the mixed-category condition. Event-related MEG signals for the participants demonstrating the interference effect were submitted to a current source density (CSD) analysis. As a new approach, a principal component analysis was applied to decompose the grand average CSD distribution into spatial subcomponents (factors). The spatial factor indicating left temporal activity revealed significantly different activation for the same-category compared to the mixed-category condition in the time window between 150 and 225 msec post picture onset. These findings indicate a major involvement of the left temporal cortex in the semantic interference effect. As this effect has been shown to take place at the level of lexical selection, the data suggest that the left temporal cortex supports processes of lexical retrieval during production.
  • Mai, A., Riès, S., Ben-Haim, S., Shih, J. J., & Gentner, T. Q. (2024). Acoustic and language-specific sources for phonemic abstraction from speech. Nature Communications, 15: 677. doi:10.1038/s41467-024-44844-9.

    Abstract

    Spoken language comprehension requires abstraction of linguistic information from speech, but the interaction between auditory and linguistic processing of speech remains poorly understood. Here, we investigate the nature of this abstraction using neural responses recorded intracranially while participants listened to conversational English speech. Capitalizing on multiple, language-specific patterns where phonological and acoustic information diverge, we demonstrate the causal efficacy of the phoneme as a unit of analysis and dissociate the unique contributions of phonemic and spectrographic information to neural responses. Quantitative higher-order response models also reveal that unique contributions of phonological information are carried in the covariance structure of the stimulus-response relationship. This suggests that linguistic abstraction is shaped by neurobiological mechanisms that involve integration across multiple spectro-temporal features and prior phonological information. These results link speech acoustics to phonology and morphosyntax, substantiating predictions about abstractness in linguistic theory and providing evidence for the acoustic features that support that abstraction.

    Additional information

    supplementary information
  • Majid, A., & Van Staden, M. (2015). Can nomenclature for the body be explained by embodiment theories? Topics in Cognitive Science, 7(4), 570-594. doi:10.1111/tops.12159.

    Abstract

    According to widespread opinion, the meaning of body part terms is determined by salient discontinuities in the visual image; such that hands, feet, arms, and legs, are natural parts. If so, one would expect these parts to have distinct names which correspond in meaning across languages. To test this proposal, we compared three unrelated languages—Dutch, Japanese, and Indonesian—and found both naming systems and boundaries of even basic body part terms display variation across languages. Bottom-up cues alone cannot explain natural language semantic systems; there simply is not a one-to-one mapping of the body semantic system to the body structural description. Although body parts are flexibly construed across languages, body parts semantics are, nevertheless, constrained by non-linguistic representations in the body structural description, suggesting these are necessary, although not sufficient, in accounting for aspects of the body lexicon.
  • Majid, A. (2015). Cultural factors shape olfactory language. Trends in Cognitive Sciences, 19(11), 629-630. doi:10.1016/j.tics.2015.06.009.
  • Majid, A. (2002). Frames of reference and language concepts. Trends in Cognitive Sciences, 6(12), 503-504. doi:10.1016/S1364-6613(02)02024-7.
  • Majid, A., Jordan, F., & Dunn, M. (Eds.). (2015). Semantic systems in closely related languages [Special Issue]. Language Sciences, 49.
  • Majid, A., Jordan, F., & Dunn, M. (2015). Semantic systems in closely related languages. Language Sciences, 49, 1-18. doi:10.1016/j.langsci.2014.11.002.

    Abstract

    In each semantic domain studied to date, there is considerable variation in how meanings are expressed across languages. But are some semantic domains more likely to show variation than others? Is the domain of space more or less variable in its expression than other semantic domains, such as containers, body parts, or colours? According to many linguists, the meanings expressed in grammaticised expressions, such as (spatial) adpositions, are more likely to be similar across languages than meanings expressed in open class lexical items. On the other hand, some psychologists predict there ought to be more variation across languages in the meanings of adpositions, than in the meanings of nouns. This is because relational categories, such as those expressed as adpositions, are said to be constructed by language; whereas object categories expressed as nouns are predicted to be “given by the world”. We tested these hypotheses by comparing the semantic systems of closely related languages. Previous cross-linguistic studies emphasise the importance of studying diverse languages, but we argue that a focus on closely related languages is advantageous because domains can be compared in a culturally- and historically-informed manner. Thus we collected data from 12 Germanic languages. Naming data were collected from at least 20 speakers of each language for containers, body-parts, colours, and spatial relations. We found the semantic domains of colour and body-parts were the most similar across languages. Containers showed some variation, but spatial relations expressed in adpositions showed the most variation. The results are inconsistent with the view expressed by most linguists. Instead, we find meanings expressed in grammaticised expressions are more variable than meanings in open class lexical items.
  • Mak, W. M., Vonk, W., & Schriefers, H. (2002). The influence of animacy on relative clause processing. Journal of Memory and Language, 47(1), 50-68. doi:10.1006/jmla.2001.2837.

    Abstract

    In previous research it has been shown that subject relative clauses are easier to process than object relative clauses. Several theories have been proposed that explain the difference on the basis of different theoretical perspectives. However, previous research tested relative clauses only with animate protagonists. In a corpus study of Dutch and German newspaper texts, we show that animacy is an important determinant of the distribution of subject and object relative clauses. In two experiments in Dutch, in which the animacy of the object of the relative clause is varied, no difference in reading time is obtained between subject and object relative clauses when the object is inanimate. The experiments show that animacy influences the processing difficulty of relative clauses. These results can only be accounted for by current major theories of relative clause processing when additional assumptions are introduced, and at the same time show that the possibility of semantically driven analysis can be considered as a serious alternative.
  • Manrique, E., & Enfield, N. J. (2015). Suspending the next turn as a form of repair initiation: Evidence from Argentine Sign Language. Frontiers in Psychology, 6: 1326. doi:10.3389/fpsyg.2015.01326.

    Abstract

    Practices of other-initiated repair deal with problems of hearing or understanding what another person has said in the fast-moving turn-by-turn flow of conversation. As such, other-initiated repair plays a fundamental role in the maintenance of intersubjectivity in social interaction. This study finds and analyses a special type of other-initiated repair that is used in turn-by-turn conversation in a sign language: Argentine Sign Language (Lengua de Señas Argentina or LSA). We describe a type of response termed a "freeze-look," which occurs when a person has just been asked a direct question: instead of answering the question in the next turn position, the person holds still while looking directly at the questioner. In these cases it is clear that the person is aware of having just been addressed and is not otherwise accounting for their delay in responding (e.g., by displaying a "thinking" face or hesitation, etc.). We find that this behavior functions as a way for an addressee to initiate repair by the person who asked the question. The "freeze-look" results in the questioner "re-doing" their action of asking a question, for example by repeating or rephrasing it. Thus, we argue that the "freeze-look" is a practice for other-initiation of repair. In addition, we argue that it is an "off-record" practice, thus contrasting with known on-record practices such as saying "Huh?" or equivalents. The findings aim to contribute to research on human understanding in everyday turn-by-turn conversation by looking at an understudied sign language, with possible implications for our understanding of visual bodily communication in spoken languages as well.

    Additional information

    Manrique_Enfield_2015_supp.pdf
  • Marlow, A. J., Fisher, S. E., Richardson, A. J., Francks, C., Talcott, J. B., Monaco, A. P., Stein, J. F., & Cardon, L. R. (2002). Investigation of quantitative measures related to reading disability in a large sample of sib-pairs from the UK. Behavior Genetics, 31(2), 219-230. doi:10.1023/A:1010209629021.

    Abstract

    We describe a family-based sample of individuals with reading disability collected as part of a quantitative trait loci (QTL) mapping study. Eighty-nine nuclear families (135 independent sib-pairs) were identified through a single proband using a traditional discrepancy score of predicted/actual reading ability and a known family history. Eight correlated psychometric measures were administered to each sibling, including single word reading, spelling, similarities, matrices, spoonerisms, nonword and irregular word reading, and a pseudohomophone test. Summary statistics for each measure showed a reduced mean for the probands compared to the co-sibs, which in turn was lower than that of the population. This partial co-sib regression back to the mean indicates that the measures are influenced by familial factors and therefore, may be suitable for a mapping study. The variance of each of the measures remained largely unaffected, which is reassuring for the application of a QTL approach. Multivariate genetic analysis carried out to explore the relationship between the measures identified a common factor between the reading measures that accounted for 54% of the variance. Finally the familiality estimates (range 0.32–0.73) obtained for the reading measures including the common factor (0.68) supported their heritability. These findings demonstrate the viability of this sample for QTL mapping, and will assist in the interpretation of any subsequent linkage findings in an ongoing genome scan.
  • Martin, J.-R., Kösem, A., & van Wassenhove, V. (2015). Hysteresis in Audiovisual Synchrony Perception. PLoS One, 10(3): e0119365. doi:10.1371/journal.pone.0119365.

    Abstract

    The effect of stimulation history on the perception of a current event can yield two opposite effects, namely: adaptation or hysteresis. The perception of the current event thus goes in the opposite or in the same direction as prior stimulation, respectively. In audiovisual (AV) synchrony perception, adaptation effects have primarily been reported. Here, we tested if perceptual hysteresis could also be observed over adaptation in AV timing perception by varying different experimental conditions. Participants were asked to judge the synchrony of the last (test) stimulus of an AV sequence with either constant or gradually changing AV intervals (constant and dynamic condition, respectively). The onset timing of the test stimulus could be cued or not (prospective vs. retrospective condition, respectively). We observed hysteretic effects for AV synchrony judgments in the retrospective condition that were independent of the constant or dynamic nature of the adapted stimuli; these effects disappeared in the prospective condition. The present findings suggest that knowing when to estimate a stimulus property has a crucial impact on perceptual simultaneity judgments. Our results extend beyond AV timing perception, and have strong implications regarding the comparative study of hysteresis and adaptation phenomena.
  • Matić, D., & Odé, C. (2015). On prosodic signalling of focus in Tundra Yukaghir. Acta Linguistica Petropolitana, 11(2), 627-644.
  • Mauner, G., Melinger, A., Koenig, J.-P., & Bienvenue, B. (2002). When is schematic participant information encoded: Evidence from eye-monitoring. Journal of Memory and Language, 47(3), 386-406. doi:10.1016/S0749-596X(02)00009-8.

    Abstract

    Two eye-monitoring studies examined when unexpressed schematic participant information specified by verbs is used during sentence processing. Experiment 1 compared the processing of sentences with passive and intransitive verbs hypothesized to introduce or not introduce, respectively, an agent when their main clauses were preceded by either agent-dependent rationale clauses or adverbial clause controls. While there were no differences in the processing of passive clauses following rationale and control clauses, intransitive verb clauses elicited anomaly effects following agent-dependent rationale clauses. To determine whether the source of this immediately available schematic participant information is lexically specified or instead derived solely from conceptual sources associated with verbs, Experiment 2 compared the processing of clauses with passive and middle verbs following rationale clauses (e.g., To raise money for the charity, the vase was/had sold quickly…). Although both passive and middle verb forms denote situations that logically require an agent, middle verbs, which by hypothesis do not lexically specify an agent, elicited longer processing times than passive verbs in measures of early processing. These results demonstrate that participants access and interpret lexically encoded schematic participant information in the process of recognizing a verb.
  • Mazzini, S., Yadnik, S., Timmers, I., Rubio-Gozalbo, E., & Jansma, B. M. (2024). Altered neural oscillations in classical galactosaemia during sentence production. Journal of Inherited Metabolic Disease. Advance online publication. doi:10.1002/jimd.12740.

    Abstract

    Classical galactosaemia (CG) is a hereditary disease in galactose metabolism that despite dietary treatment is characterized by a wide range of cognitive deficits, among which is language production. CG brain functioning has been studied with several neuroimaging techniques, which revealed both structural and functional atypicalities. In the present study, for the first time, we compared the oscillatory dynamics, especially the power spectrum and time–frequency representations (TFR), in the electroencephalography (EEG) of CG patients and healthy controls while they were performing a language production task. Twenty-one CG patients and 19 healthy controls described animated scenes, either in full sentences or in words, indicating two levels of complexity in syntactic planning. Based on previous work on the P300 event related potential (ERP) and its relation with theta frequency, we hypothesized that the oscillatory activity of patients and controls would differ in theta power and TFR. With regard to behavior, reaction times showed that patients are slower, reflecting the language deficit. In the power spectrum, we observed significant higher power in patients in delta (1–3 Hz), theta (4–7 Hz), beta (15–30 Hz) and gamma (30–70 Hz) frequencies, but not in alpha (8–12 Hz), suggesting an atypical oscillatory profile. The time-frequency analysis revealed significantly weaker event-related theta synchronization (ERS) and alpha desynchronization (ERD) in patients in the sentence condition. The data support the hypothesis that CG language difficulties relate to theta–alpha brain oscillations.

    Additional information

    table S1 and S2
  • Meekings, S., Boebinger, D., Evans, S., Lima, C. F., Chen, S., Ostarek, M., & Scott, S. K. (2015). Do we know what we’re saying? The roles of attention and sensory information during speech production. Psychological Science, 26(12), 1975-1977. doi:10.1177/0956797614563766.
  • Meinhardt, E., Mai, A., Baković, E., & McCollum, A. (2024). Weak determinism and the computational consequences of interaction. Natural Language & Linguistic Theory. Advance online publication. doi:10.1007/s11049-023-09578-1.

    Abstract

    Recent work has claimed that (non-tonal) phonological patterns are subregular (Heinz 2011a,b, 2018; Heinz and Idsardi 2013), occupying a delimited proper subregion of the regular functions—the weakly deterministic (WD) functions (Heinz and Lai 2013; Jardine 2016). Whether or not it is correct (McCollum et al. 2020a), this claim can only be properly assessed given a complete and accurate definition of WD functions. We propose such a definition in this article, patching unintended holes in Heinz and Lai’s (2013) original definition that we argue have led to the incorrect classification of some phonological patterns as WD. We start from the observation that WD patterns share a property that we call unbounded semiambience, modeled after the analogous observation by Jardine (2016) about non-deterministic (ND) patterns and their unbounded circumambience. Both ND and WD functions can be broken down into compositions of deterministic (subsequential) functions (Elgot and Mezei 1965; Heinz and Lai 2013) that read an input string from opposite directions; we show that WD functions are those for which these deterministic composands do not interact in a way that is familiar from the theoretical phonology literature. To underscore how this concept of interaction neatly separates the WD class of functions from the strictly more expressive ND class, we provide analyses of the vowel harmony patterns of two Eastern Nilotic languages, Maasai and Turkana, using bimachines, an automaton type that represents unbounded bidirectional dependencies explicitly. These analyses make clear that there is interaction between deterministic composands when (and only when) the output of a given input element of a string is simultaneously dependent on information from both the left and the right: ND functions are those that involve interaction, while WD functions are those that do not.
  • Meira, S., & Drude, S. (2015). A summary reconstruction of Proto-Maweti-Guarani segmental phonology. Boletim do Museu Paraense Emílio Goeldi: Ciências Humanas, 10, 275-296. doi:10.1590/1981-81222015000200005.

    Abstract

    This paper presents a succinct reconstruction of the segmental phonology of Proto-Maweti-Guarani, the hypothetical protolanguage from which modern Mawe, Aweti and the Tupi-Guarani branches of the Tupi linguistic family have evolved. Based on about 300 cognate sets from the authors' field data (for Mawe and Aweti) and from Mello's reconstruction (2000) for Proto-Tupi-Guarani (with additional information from other works; and with a few changes concerning certain doubtful features, such as the status of stem-final lenis consonants ∗r and ∗β, and the distinction of ∗c and ∗č), the consonants and vowels of Proto-Maweti-Guarani were reconstructed with the help of the traditional historical-comparative method. The development of the reconstructed segments is then traced from the protolanguage to each of the modern branches. A comparison with other claims made about Proto-Maweti-Guarani is given in the conclusion.
  • Melinger, A. (2002). Foot structure and accent in Seneca. International Journal of American Linguistics, 68(3), 287-315.

    Abstract

    Argues that the Seneca accent system can be explained more simply and naturally if the foot structure is reanalyzed as trochaic. The position of the accent is determined by the position and structure of the accented syllable and by the position and structure of the post-tonic syllable; pairs of interacting syllables predict where accent is assigned in different iambic feet.
  • Menks, W. M., Ekerdt, C., Lemhöfer, K., Kidd, E., Fernández, G., McQueen, J. M., & Janzen, G. (2024). Developmental changes in brain activation during novel grammar learning in 8-25-year-olds. Developmental Cognitive Neuroscience, 66: 101347. doi:10.1016/j.dcn.2024.101347.

    Abstract

    While it is well established that grammar learning success varies with age, the cause of this developmental change is largely unknown. This study examined functional MRI activation across a broad developmental sample of 165 Dutch-speaking individuals (8-25 years) as they were implicitly learning a new grammatical system. This approach allowed us to assess the direct effects of age on grammar learning ability while exploring its neural correlates. In contrast to the alleged advantage of children language learners over adults, we found that adults outperformed children. Moreover, our behavioral data showed a sharp discontinuity in the relationship between age and grammar learning performance: there was a strong positive linear correlation between 8 and 15.4 years of age, after which age had no further effect. Neurally, our data indicate two important findings: (i) during grammar learning, adults and children activate similar brain regions, suggesting continuity in the neural networks that support initial grammar learning; and (ii) activation level is age-dependent, with children showing less activation than older participants. We suggest that these age-dependent processes may constrain developmental effects in grammar learning. The present study provides new insights into the neural basis of age-related differences in grammar learning in second language acquisition.

    Additional information

    supplement
  • Meyer, A. S. (1992). Investigation of phonological encoding through speech error analyses: Achievements, limitations, and alternatives. Cognition, 42, 181-211. doi:10.1016/0010-0277(92)90043-H.

    Abstract

    Phonological encoding in language production can be defined as a set of processes generating utterance forms on the basis of semantic and syntactic information. Most evidence about these processes stems from analyses of sound errors. In section 1 of this paper, certain important results of these analyses are reviewed. Two prominent models of phonological encoding, which are mainly based on speech error evidence, are discussed in section 2. In section 3, limitations of speech error analyses are discussed, and it is argued that detailed and comprehensive models of phonological encoding cannot be derived solely on the basis of error analyses. As is argued in section 4, a new research strategy is required. Instead of using the properties of errors to draw inferences about the generation of correct word forms, future research should directly investigate the normal process of phonological encoding.
  • Meyer, A. S., & Bock, K. (1992). The tip-of-the-tongue phenomenon: Blocking or partial activation? Memory & Cognition, 20, 715-726.

    Abstract

    Tip-of-the-tongue states may represent the momentary unavailability of an otherwise accessible word or the weak activation of an otherwise inaccessible word. In three experiments designed to address these alternative views, subjects attempted to retrieve rare target words from their definitions. The definitions were followed by cues that were related to the targets in sound, by cues that were related in meaning, and by cues that were not related to the targets. Experiment 1 found that compared with unrelated cues, related cue words that were presented immediately after target definitions helped rather than hindered lexical retrieval, and that sound cues were more effective retrieval aids than meaning cues. Experiment 2 replicated these results when cues were presented after an initial target-retrieval attempt. These findings reverse a previous result (Jones, 1989) that was reproduced in Experiment 3 and shown to stem from a small group of unusually difficult target definitions.
  • Meyer, A. S., Sleiderink, A. M., & Levelt, W. J. M. (1998). Viewing and naming objects: Eye movements during noun phrase production. Cognition, 66(2), B25-B33. doi:10.1016/S0010-0277(98)00009-2.

    Abstract

    Eye movements have been shown to reflect word recognition and language comprehension processes occurring during reading and auditory language comprehension. The present study examines whether the eye movements speakers make during object naming similarly reflect speech planning processes. In Experiment 1, speakers named object pairs saying, for instance, 'scooter and hat'. The objects were presented as ordinary line drawings or with partly deleted contours and had high or low frequency names. Contour type and frequency both significantly affected the mean naming latencies and the mean time spent looking at the objects. The frequency effects disappeared in Experiment 2, in which the participants categorized the objects instead of naming them. This suggests that the frequency effects of Experiment 1 arose during lexical retrieval. We conclude that eye movements during object naming indeed reflect linguistic planning processes and that the speakers' decision to move their eyes from one object to the next is contingent upon the retrieval of the phonological form of the object names.
  • Mickan, A., Slesareva, E., McQueen, J. M., & Lemhöfer, K. (2024). New in, old out: Does learning a new language make you forget previously learned foreign languages? Quarterly Journal of Experimental Psychology, 77(3), 530-550. doi:10.1177/17470218231181380.

    Abstract

    Anecdotal evidence suggests that learning a new foreign language (FL) makes you forget previously learned FLs. To seek empirical evidence for this claim, we tested whether learning words in a previously unknown L3 hampers subsequent retrieval of their L2 translation equivalents. In two experiments, Dutch native speakers with knowledge of English (L2), but not Spanish (L3), first completed an English vocabulary test, based on which 46 participant-specific, known English words were chosen. Half of those were then learned in Spanish. Finally, participants’ memory for all 46 English words was probed again in a picture naming task. In Experiment 1, all tests took place within one session. In Experiment 2, we separated the English pre-test from Spanish learning by a day and manipulated the timing of the English post-test (immediately after learning vs. 1 day later). By separating the post-test from Spanish learning, we asked whether consolidation of the new Spanish words would increase their interference strength. We found significant main effects of interference in naming latencies and accuracy: Participants speeded up less and were less accurate to recall words in English for which they had learned Spanish translations, compared with words for which they had not. Consolidation time did not significantly affect these interference effects. Thus, learning a new language indeed comes at the cost of subsequent retrieval ability in other FLs. Such interference effects set in immediately after learning and do not need time to emerge, even when the other FL has been known for a long time.

    Additional information

    supplementary material
  • Mielcarek, M., Toczek, M., Smeets, C. J. L. M., Franklin, S. A., Bondulich, M. K., Jolinon, N., Muller, T., Ahmed, M., Dick, J. R. T., Piotrowska, I., Greensmith, L., Smolenski, R. T., & Bates, G. P. (2015). HDAC4-Myogenin Axis As an Important Marker of HD-Related Skeletal Muscle Atrophy. PLoS Genetics, 11(3): e1005021. doi:10.1371/journal.pgen.1005021.

    Abstract

    Skeletal muscle remodelling and contractile dysfunction occur through both acute and chronic disease processes. These include the accumulation of insoluble aggregates of misfolded amyloid proteins that is a pathological feature of Huntington's disease (HD). While HD has been described primarily as a neurological disease, HD patients exhibit pronounced skeletal muscle atrophy. Given that huntingtin is a ubiquitously expressed protein, skeletal muscle fibres may be at risk of a cell-autonomous HD-related dysfunction. However, the mechanism leading to skeletal muscle abnormalities in the clinical and pre-clinical HD settings remains unknown. To unravel this mechanism, we employed the R6/2 transgenic and HdhQ150 knock-in mouse models of HD. We found that symptomatic animals developed a progressive impairment of the contractile characteristics of the hind limb muscles tibialis anterior (TA) and extensor digitorum longus (EDL), accompanied by a significant loss of motor units in the EDL. In symptomatic animals, these pronounced functional changes were accompanied by an aberrant deregulation of contractile protein transcripts and their upstream transcriptional regulators. In addition, HD mouse models develop a significant reduction in muscle force, possibly as a result of a deterioration in energy metabolism and decreased oxidation that is accompanied by the re-expression of the HDAC4-DACH2-myogenin axis. These results show that muscle dysfunction is a key pathological feature of HD.
  • Monaghan, P., Mattock, K., Davies, R., & Smith, A. C. (2015). Gavagai is as gavagai does: Learning nouns and verbs from cross-situational statistics. Cognitive Science, 39, 1099-1112. doi:10.1111/cogs.12186.

    Abstract

    Learning to map words onto their referents is difficult, because there are multiple possibilities for forming these mappings. Cross-situational learning studies have shown that word-object mappings can be learned across multiple situations, as can verbs when presented in a syntactic context. However, these previous studies have presented either nouns or verbs in ambiguous contexts and thus bypass much of the complexity of multiple grammatical categories in speech. We show that noun word-learning in adults is robust when objects are moving, and that verbs can also be learned from similar scenes without additional syntactic information. Furthermore, we show that both nouns and verbs can be acquired simultaneously, thus resolving category-level as well as individual word level ambiguity. However, nouns were learned more accurately than verbs, and we discuss this in light of previous studies investigating the noun advantage in word learning.
  • Moreno, I., De Vega, M., León, I., Bastiaansen, M. C. M., Lewis, A. G., & Magyari, L. (2015). Brain dynamics in the comprehension of action-related language. A time-frequency analysis of mu rhythms. Neuroimage, 109, 50-62. doi:10.1016/j.neuroimage.2015.01.018.

    Abstract

    EEG mu rhythms (8-13Hz) recorded at fronto-central electrodes are generally considered as markers of motor cortical activity in humans, because they are modulated when participants perform an action, when they observe another’s action or even when they imagine performing an action. In this study, we analyzed the time-frequency (TF) modulation of mu rhythms while participants read action language (“You will cut the strawberry cake”), abstract language (“You will doubt the patient’s argument”), and perceptive language (“You will notice the bright day”). The results indicated that mu suppression at fronto-central sites is associated with action language rather than with abstract or perceptive language. Also, the largest difference between conditions occurred quite late in the sentence, while reading the first noun (contrast Action vs. Abstract), or the second noun following the action verb (contrast Action vs. Perceptive). This suggests that motor activation is associated with the integration of words across the sentence beyond the lexical processing of the action verb. Source reconstruction localized mu suppression associated with action sentences in premotor cortex (BA 6). The present study suggests (1) that the understanding of action language activates motor networks in the human brain, and (2) that this activation occurs online based on semantic integration across multiple words in the sentence.
  • Mulder, K., Dijkstra, T., & Baayen, R. H. (2015). Cross-language activation of morphological relatives in cognates: The role of orthographic overlap and task-related processing. Frontiers in Human Neuroscience, 9: 16. doi:10.3389/fnhum.2015.00016.

    Abstract

    We considered the role of orthography and task-related processing mechanisms in the activation of morphologically related complex words during bilingual word processing. So far, it has only been shown that such morphologically related words (i.e., morphological family members) are activated through the semantic and morphological overlap they share with the target word. In this study, we investigated family size effects in Dutch-English identical cognates (e.g., tent in both languages), non-identical cognates (e.g., pil and pill, in English and Dutch, respectively), and non-cognates (e.g., chicken in English). Because of their cross-linguistic overlap in orthography, reading a cognate can result in activation of family members in both languages. Cognates are therefore well-suited for studying mechanisms underlying bilingual activation of morphologically complex words. We investigated family size effects in an English lexical decision task and a Dutch-English language decision task, both performed by Dutch-English bilinguals. English lexical decision showed a facilitatory effect of English and Dutch family size on the processing of English-Dutch cognates relative to English non-cognates. These family size effects were not dependent on cognate type. In contrast, for language decision, in which a bilingual context is created, Dutch and English family size effects were inhibitory. Here, the combined family size of both languages turned out to better predict reaction time than the separate family size in Dutch or English. Moreover, the combined family size interacted with cognate type: the response to identical cognates was slowed by morphological family members in both languages. We conclude that (1) family size effects are sensitive to the task performed on the lexical items, and (2) depend on both semantic and formal aspects of bilingual word processing. We discuss various mechanisms that can explain the observed family size effects in a spreading activation framework.
  • Neger, T. M., Janse, E., & Rietveld, T. (2015). Correlates of older adults' discrimination of acoustic properties in speech. Speech, Language and Hearing, 18(2), 102-115. doi:10.1179/2050572814Y.0000000055.

    Abstract

    Auditory discrimination of speech stimuli is an essential tool in speech and language therapy, e.g., in dysarthria rehabilitation. It is unclear, however, which listener characteristics are associated with the ability to perceive differences between one's own utterance and target speech. Knowledge about such associations may help to support patients participating in speech and language therapy programs that involve auditory discrimination tasks.
    Discrimination performance was evaluated in 96 healthy participants over 60 years of age as individuals with dysarthria are typically in this age group. Participants compared meaningful words and sentences on the dimensions of loudness, pitch and speech rate. Auditory abilities were assessed using pure-tone audiometry, speech audiometry and speech understanding in noise. Cognitive measures included auditory short-term memory, working memory and processing speed. Linguistic functioning was assessed by means of vocabulary knowledge and language proficiency.
    Exploratory factor analyses showed that discrimination performance was primarily associated with cognitive and linguistic skills, rather than auditory abilities. Accordingly, older adults’ discrimination performance was mainly predicted by cognitive and linguistic skills. Discrimination accuracy was higher in older adults with better speech understanding in noise, faster processing speed, and better language proficiency, but accuracy decreased with age. This raises the question whether these associations generalize to clinical populations and, if so, whether patients with better cognitive or linguistic skills may benefit more from discrimination-based therapeutic approaches than patients with poorer cognitive or linguistic abilities.
  • Newbury, D. F., Cleak, J. D., Ishikawa-Brush, Y., Marlow, A. J., Fisher, S. E., Monaco, A. P., Stott, C. M., Merricks, M. J., Goodyer, I. M., Bolton, P. F., Jannoun, L., Slonims, V., Baird, G., Pickles, A., Bishop, D. V. M., Helms., P. J., & The SLI Consortium (2002). A genomewide scan identifies two novel loci involved in specific language impairment. American Journal of Human Genetics, 70(2), 384-398. doi:10.1086/338649.

    Abstract

    Approximately 4% of English-speaking children are affected by specific language impairment (SLI), a disorder in the development of language skills despite adequate opportunity and normal intelligence. Several studies have indicated the importance of genetic factors in SLI; a positive family history confers an increased risk of development, and concordance in monozygotic twins consistently exceeds that in dizygotic twins. However, like many behavioral traits, SLI is assumed to be genetically complex, with several loci contributing to the overall risk. We have compiled 98 families drawn from epidemiological and clinical populations, all with probands whose standard language scores fall ⩾1.5 SD below the mean for their age. Systematic genomewide quantitative-trait–locus analysis of three language-related measures (i.e., the Clinical Evaluation of Language Fundamentals–Revised [CELF-R] receptive and expressive scales and the nonword repetition [NWR] test) yielded two regions, one on chromosome 16 and one on 19, that both had maximum LOD scores of 3.55. Simulations suggest that, of these two multipoint results, the NWR linkage to chromosome 16q is the most significant, with empirical P values reaching 10^-5, under both Haseman-Elston (HE) analysis (LOD score 3.55; P=.00003) and variance-components (VC) analysis (LOD score 2.57; P=.00008). Single-point analyses provided further support for involvement of this locus, with three markers, under the peak of linkage, yielding LOD scores >1.9. The 19q locus was linked to the CELF-R expressive-language score and exceeds the threshold for suggestive linkage under all types of analysis performed—multipoint HE analysis (LOD score 3.55; empirical P=.00004) and VC (LOD score 2.84; empirical P=.00027) and single-point HE analysis (LOD score 2.49) and VC (LOD score 2.22). Furthermore, both the clinical and epidemiological samples showed independent evidence of linkage on both chromosome 16q and chromosome 19q, indicating that these may represent universally important loci in SLI and, thus, general risk factors for language impairment.
  • Newbury, D. F., Bonora, E., Lamb, J. A., Fisher, S. E., Lai, C. S. L., Baird, G., Jannoun, L., Slonims, V., Stott, C. M., Merricks, M. J., Bolton, P. F., Bailey, A. J., Monaco, A. P., & International Molecular Genetic Study of Autism Consortium (2002). FOXP2 is not a major susceptibility gene for autism or specific language impairment. American Journal of Human Genetics, 70(5), 1318-1327. doi:10.1086/339931.

    Abstract

    The FOXP2 gene, located on human 7q31 (at the SPCH1 locus), encodes a transcription factor containing a polyglutamine tract and a forkhead domain. FOXP2 is mutated in a severe monogenic form of speech and language impairment, segregating within a single large pedigree, and is also disrupted by a translocation in an isolated case. Several studies of autistic disorder have demonstrated linkage to a similar region of 7q (the AUTS1 locus), leading to the proposal that a single genetic factor on 7q31 contributes to both autism and language disorders. In the present study, we directly evaluate the impact of the FOXP2 gene with regard to both complex language impairments and autism, through use of association and mutation screening analyses. We conclude that coding-region variants in FOXP2 do not underlie the AUTS1 linkage and that the gene is unlikely to play a role in autism or more common forms of language impairment.
  • Niemelä, P. T., Lattenkamp, E. Z., & Dingemanse, N. J. (2015). Personality-related survival and sampling bias in wild cricket nymphs. Behavioral Ecology, 26(3), 936-946. doi:10.1093/beheco/arv036.

    Abstract

    The study of adaptive individual behavior (“animal personality”) focuses on whether individuals differ consistently in (suites of correlated) behavior(s) and whether individual-level behavior is under selection. Evidence for selection acting on personality is biased toward species where behavioral and life-history information can readily be collected in the wild, such as ungulates and passerine birds. Here, we report estimates of repeatability and syndrome structure for behaviors that an insect (field cricket; Gryllus campestris) expresses in the wild. We used mark-recapture models to estimate personality-related survival and encounter probability and focused on a life-history phase where all individuals could readily be sampled (the nymphal stage). As proxies for risky behaviors, we assayed maximum distance from burrow, flight initiation distance, and emergence time after disturbance; all behaviors were repeatable, but there was no evidence for strong syndrome structure. Flight initiation distance alone predicted both daily survival and encounter probability: bolder individuals were more easily observed but had a shorter life span. Individuals were also somewhat repeatable in the habitat temperature under which they were assayed. Such environment repeatability can lead to upward biases in estimates of repeatability in behavior; this was not the case. Behavioral assays were, however, conducted around the subject’s personal burrow, which could induce pseudorepeatability if burrow characteristics affected behavior. Follow-up translocation experiments allowed us to distinguish individual and burrow identity effects and provided conclusive evidence for individual repeatability of flight initiation distance. Our findings, therefore, forcefully demonstrate that personality variation exists in wild insects and that it is associated with components of fitness.
  • Nieuwland, M. S. (2015). The truth before and after: Brain potentials reveal automatic activation of event knowledge during sentence comprehension. Journal of Cognitive Neuroscience, 27(11), 2215-2228. doi:10.1162/jocn_a_00856.

    Abstract

    How does knowledge of real-world events shape our understanding of incoming language? Do temporal terms like “before” and “after” impact the online recruitment of real-world event knowledge? These questions were addressed in two ERP experiments, wherein participants read sentences that started with “before” or “after” and contained a critical word that rendered each sentence true or false (e.g., “Before/After the global economic crisis, securing a mortgage was easy/harder”). The critical words were matched on predictability, rated truth value, and semantic relatedness to the words in the sentence. Regardless of whether participants explicitly verified the sentences or not, false-after-sentences elicited larger N400s than true-after-sentences, consistent with the well-established finding that semantic retrieval of concepts is facilitated when they are consistent with real-world knowledge. However, although the truth judgments did not differ between before- and after-sentences, no such N400 truth value effect occurred in before-sentences, whereas false-before-sentences elicited an enhanced subsequent positive ERP. The temporal term “before” itself elicited more negative ERPs at central electrode channels than “after.” These patterns of results show that, irrespective of ultimate sentence truth value judgments, semantic retrieval of concepts is momentarily facilitated when they are consistent with the known event outcome compared to when they are not. However, this inappropriate facilitation incurs later processing costs as reflected in the subsequent positive ERP deflections. The results suggest that automatic activation of event knowledge can impede the incremental semantic processes required to establish sentence truth value.
  • Nijhoff, A. D., & Willems, R. M. (2015). Simulating fiction: Individual differences in literature comprehension revealed with fMRI. PLoS One, 10(2): e0116492. doi:10.1371/journal.pone.0116492.

    Abstract

    When we read literary fiction, we are transported to fictional places, and we feel and think along with the characters. Despite the importance of narrative in adult life and during development, the neurocognitive mechanisms underlying fiction comprehension are unclear. We used functional magnetic resonance imaging (fMRI) to investigate how individuals differently employ neural networks important for understanding others’ beliefs and intentions (mentalizing), and for sensori-motor simulation while listening to excerpts from literary novels. Localizer tasks were used to localize both the cortical motor network and the mentalizing network in participants after they listened to excerpts from literary novels. Results show that participants who had high activation in anterior medial prefrontal cortex (aMPFC; part of the mentalizing network) when listening to mentalizing content of literary fiction, had lower motor cortex activity when they listened to action-related content of the story, and vice versa. This qualifies how people differ in their engagement with fiction: some people are mostly drawn into a story by mentalizing about the thoughts and beliefs of others, whereas others engage in literature by simulating more concrete events such as actions. This study provides on-line neural evidence for the existence of qualitatively different styles of moving into literary worlds, and adds to a growing body of literature showing the potential to study narrative comprehension with neuroimaging methods.
  • Noordman, L. G. M., & Vonk, W. (1998). Memory-based processing in understanding causal information. Discourse Processes, 191-212. doi:10.1080/01638539809545044.

    Abstract

    The reading process depends both on the text and on the reader. When we read a text, propositions in the current input are matched to propositions in the memory representation of the previous discourse but also to knowledge structures in long‐term memory. Therefore, memory‐based text processing refers both to the bottom‐up processing of the text and to the top‐down activation of the reader's knowledge. In this article, we focus on the role of cognitive structures in the reader's knowledge. We argue that causality is an important category in structuring human knowledge and that this property has consequences for text processing. Some research is discussed that illustrates that the more the information in the text reflects causal categories, the more easily the information is processed.
  • Norcliffe, E., Harris, A., & Jaeger, T. F. (2015). Cross-linguistic psycholinguistics and its critical role in theory development: early beginnings and recent advances. Language, Cognition and Neuroscience, 30(9), 1009-1032. doi:10.1080/23273798.2015.1080373.

    Abstract

    Recent years have seen a small but growing body of psycholinguistic research focused on typologically diverse languages. This represents an important development for the field, where theorising is still largely guided by the often implicit assumption of universality. This paper introduces a special issue of Language, Cognition and Neuroscience devoted to the topic of cross-linguistic and field-based approaches to the study of psycholinguistics. The papers in this issue draw on data from a variety of genetically and areally divergent languages, to address questions in the production and comprehension of phonology, morphology, words, and sentences. To contextualise these studies, we provide an overview of the field of cross-linguistic psycholinguistics, from its early beginnings to the present day, highlighting instances where cross-linguistic data have significantly contributed to psycholinguistic theorising.
  • Norcliffe, E., Konopka, A. E., Brown, P., & Levinson, S. C. (2015). Word order affects the time course of sentence formulation in Tzeltal. Language, Cognition and Neuroscience, 30(9), 1187-1208. doi:10.1080/23273798.2015.1006238.

    Abstract

    The scope of planning during sentence formulation is known to be flexible, as it can be influenced by speakers' communicative goals and language production pressures (among other factors). Two eye-tracked picture description experiments tested whether the time course of formulation is also modulated by grammatical structure and thus whether differences in linear word order across languages affect the breadth and order of conceptual and linguistic encoding operations. Native speakers of Tzeltal [a primarily verb–object–subject (VOS) language] and Dutch [a subject–verb–object (SVO) language] described pictures of transitive events. Analyses compared speakers' choice of sentence structure across events with more accessible and less accessible characters as well as the time course of formulation for sentences with different word orders. Character accessibility influenced subject selection in both languages in subject-initial and subject-final sentences, ruling against a radically incremental formulation process. In Tzeltal, subject-initial word orders were preferred over verb-initial orders when event characters had matching animacy features, suggesting a possible role for similarity-based interference in influencing word order choice. Time course analyses revealed a strong effect of sentence structure on formulation: In subject-initial sentences, in both Tzeltal and Dutch, event characters were largely fixated sequentially, while in verb-initial sentences in Tzeltal, relational information received priority over encoding of either character during the earliest stages of formulation. The results show a tight parallelism between grammatical structure and the order of encoding operations carried out during sentence formulation.
  • Norris, D., McQueen, J. M., & Cutler, A. (2002). Bias effects in facilitatory phonological priming. Memory & Cognition, 30(3), 399-411.

    Abstract

    In four experiments, we examined the facilitation that occurs when spoken-word targets rhyme with preceding spoken primes. In Experiment 1, listeners’ lexical decisions were faster to words following rhyming words (e.g., ramp–LAMP) than to words following unrelated primes (e.g., pink–LAMP). No facilitation was observed for nonword targets. Targets that almost rhymed with their primes (foils; e.g., bulk–SULSH) were included in Experiment 2; facilitation for rhyming targets was severely attenuated. Experiments 3 and 4 were single-word shadowing variants of the earlier experiments. There was facilitation for both rhyming words and nonwords; the presence of foils had no significant influence on the priming effect. A major component of the facilitation in lexical decision appears to be strategic: Listeners are biased to say “yes” to targets that rhyme with their primes, unless foils discourage this strategy. The nonstrategic component of phonological facilitation may reflect speech perception processes that operate prior to lexical access.
  • Nyberg, L., Forkstam, C., Petersson, K. M., Cabeza, R., & Ingvar, M. (2002). Brain imaging of human memory systems: Between-systems similarities and within-system differences. Cognitive Brain Research, 13(2), 281-292. doi:10.1016/S0926-6410(02)00052-6.

    Abstract

    There is much evidence for the existence of multiple memory systems. However, it has been argued that tasks assumed to reflect different memory systems share basic processing components and are mediated by overlapping neural systems. Here we used multivariate analysis of PET-data to analyze similarities and differences in brain activity for multiple tests of working memory, semantic memory, and episodic memory. The results from two experiments revealed between-systems differences, but also between-systems similarities and within-system differences. Specifically, support was obtained for a task-general working-memory network that may underlie active maintenance. Premotor and parietal regions were salient components of this network. A common network was also identified for two episodic tasks, cued recall and recognition, but not for a test of autobiographical memory. This network involved regions in right inferior and polar frontal cortex, and lateral and medial parietal cortex. Several of these regions were also engaged during the working-memory tasks, indicating shared processing for episodic and working memory. Fact retrieval and synonym generation were associated with increased activity in left inferior frontal and middle temporal regions and right cerebellum. This network was also associated with the autobiographical task, but not with living/non-living classification, and may reflect elaborate retrieval of semantic information. Implications of the present results for the classification of memory tasks with respect to systems and/or processes are discussed.
  • Oblong, L. M., Soheili-Nezhad, S., Trevisan, N., Shi, Y., Beckmann, C. F., & Sprooten, E. (2024). Principal and independent genomic components of brain structure and function. Genes, Brain and Behavior, 23(1): e12876. doi:10.1111/gbb.12876.

    Abstract

    The highly polygenic and pleiotropic nature of behavioural traits, psychiatric disorders and structural and functional brain phenotypes complicates mechanistic interpretation of related genome-wide association study (GWAS) signals, thereby obscuring underlying causal biological processes. We propose genomic principal and independent component analysis (PCA, ICA) to decompose a large set of univariate GWAS statistics of multimodal brain traits into more interpretable latent genomic components. Here we introduce and evaluate this novel method's various analytic parameters and reproducibility across independent samples. Two UK Biobank GWAS summary statistic releases of 2240 imaging-derived phenotypes (IDPs) were retrieved. Genome-wide beta-values and their corresponding standard-error scaled z-values were decomposed using genomic PCA/ICA. We evaluated variance explained at multiple dimensions up to 200. We tested the inter-sample reproducibility of output of dimensions 5, 10, 25 and 50. Reproducibility statistics of the respective univariate GWAS served as benchmarks. Reproducibility of 10-dimensional PCs and ICs showed the best trade-off between model complexity, robustness, and variance explained (PCs: |rz − max| = 0.33, |rraw − max| = 0.30; ICs: |rz − max| = 0.23, |rraw − max| = 0.19). Genomic PC and IC reproducibility improved substantially relative to mean univariate GWAS reproducibility up to dimension 10. Genomic components clustered along neuroimaging modalities. Our results indicate that genomic PCA and ICA decompose genetic effects on IDPs from GWAS statistics with high reproducibility by taking advantage of the inherent pleiotropic patterns. These findings encourage further applications of genomic PCA and ICA as fully data-driven methods to effectively reduce the dimensionality, enhance the signal-to-noise ratio and improve interpretability of high-dimensional multitrait genome-wide analyses.
  • O'Brien, D. P., & Bowerman, M. (1998). Martin D. S. Braine (1926–1996): Obituary. American Psychologist, 53, 563. doi:10.1037/0003-066X.53.5.563.

    Abstract

    Memorializes Martin D. S. Braine, whose research on child language acquisition and on both child and adult thinking and reasoning had a major influence on modern cognitive psychology. Addressing meaning as well as position, Braine argued that children start acquiring language by learning narrow-scope positional formulas that map components of meaning to positions in the utterance. These proposals were critical in starting discussions of the possible universality of the pivot-grammar stage and of the role of syntax, semantics, and pragmatics in children's early grammar and were pivotal to the rise of approaches in which cognitive development in language acquisition is stressed.
  • Orfanidou, E., McQueen, J. M., Adam, R., & Morgan, G. (2015). Segmentation of British Sign Language (BSL): Mind the gap! Quarterly Journal of Experimental Psychology, 68, 641-663. doi:10.1080/17470218.2014.945467.

    Abstract

    This study asks how users of British Sign Language (BSL) recognize individual signs in connected sign sequences. We examined whether this is achieved through modality-specific or modality-general segmentation procedures. A modality-specific feature of signed languages is that, during continuous signing, there are salient transitions between sign locations. We used the sign-spotting task to ask if and how BSL signers use these transitions in segmentation. A total of 96 real BSL signs were preceded by nonsense signs which were produced in either the target location or another location (with a small or large transition). Half of the transitions were within the same major body area (e.g., head) and half were across body areas (e.g., chest to hand). Deaf adult BSL users (a group of natives and early learners, and a group of late learners) spotted target signs best when there was a minimal transition and worst when there was a large transition. When location changes were present, both groups performed better when transitions were to a different body area than when they were within the same area. These findings suggest that transitions do not provide explicit sign-boundary cues in a modality-specific fashion. Instead, we argue that smaller transitions help recognition in a modality-general way by limiting lexical search to signs within location neighbourhoods, and that transitions across body areas also aid segmentation in a modality-general way, by providing a phonotactic cue to a sign boundary. We propose that sign segmentation is based on modality-general procedures which are core language-processing mechanisms.
  • Ortega, G., & Morgan, G. (2015). Input processing at first exposure to a sign language. Second Language Research, 31(4), 443-463. doi:10.1177/0267658315576822.

    Abstract

    There is growing interest in learners’ cognitive capacities to process a second language (L2) at first exposure to the target language. Evidence suggests that L2 learners are capable of processing novel words by exploiting phonological information from their first language (L1). Hearing adult learners of a sign language, however, cannot fall back on their L1 to process novel signs because the modality differences between speech (aural–oral) and sign (visual–manual) do not allow for direct cross-linguistic influence. Sign language learners might use alternative strategies to process input expressed in the manual channel. Learners may rely on iconicity, the direct relationship between a sign and its referent. Evidence up to now has shown that iconicity facilitates learning in non-signers, but it is unclear whether it also facilitates sign production. In order to fill this gap, the present study investigated how iconicity influenced articulation of the phonological components of signs. In Study 1, hearing non-signers viewed a set of iconic and arbitrary signs along with their English translations and repeated the signs as accurately as possible immediately after. The results show that participants imitated iconic signs significantly less accurately than arbitrary signs. In Study 2, a second group of hearing non-signers imitated the same set of signs but without the accompanying English translations. The same lower accuracy for iconic signs was observed. We argue that learners rely on iconicity to process manual input because it brings familiarity to the target (sign) language. However, this reliance comes at a cost as it leads to a more superficial processing of the signs’ full phonetic form. The present findings add to our understanding of learners’ cognitive capacities at first exposure to a signed L2, and raise new theoretical questions in the field of second language acquisition.
  • Ortega, G., & Morgan, G. (2015). Phonological development in hearing learners of a sign language: The role of sign complexity and iconicity. Language Learning, 65(3), 660-668. doi:10.1111/lang.12123.

    Abstract

    The present study administered a sign-repetition task at two points in time to hearing adult learners of British Sign Language and explored how each phonological parameter, sign complexity, and iconicity affected sign production over an 11-week (22-hour) instructional period. The results show that training improves articulation accuracy and that some sign components are produced more accurately than others: Handshape was the most difficult, followed by movement, then orientation, and finally location. Iconic signs were articulated less accurately than arbitrary signs because the direct sign-referent mappings and perhaps their similarity with iconic co-speech gestures prevented learners from focusing on the exact phonological structure of the sign. This study shows that multiple phonological features pose greater demand on the production of the parameters of signs and that iconicity interferes in the exact articulation of their constituents.
  • Ortega, G., & Morgan, G. (2015). The effect of sign iconicity in the mental lexicon of hearing non-signers and proficient signers: Evidence of cross-modal priming. Language, Cognition and Neuroscience, 30(5), 574-585. doi:10.1080/23273798.2014.959533.

    Abstract

    The present study investigated the priming effect of iconic signs in the mental lexicon of hearing adults. Non-signers and proficient British Sign Language (BSL) users took part in a cross-modal lexical decision task. The results indicate that iconic signs activated semantically related words in non-signers' lexicon. Activation occurred regardless of the type of referent because signs depicting actions and perceptual features of an object yielded the same response times. The pattern of activation was different in proficient signers because only action signs led to cross-modal activation. We suggest that non-signers process iconicity in signs in the same way as they do gestures, but after acquiring a sign language, there is a shift in the mechanisms used to process iconic manual structures.
  • Osiecka, A. N., Fearey, J., Ravignani, A., & Burchardt, L. (2024). Isochrony in barks of Cape fur seal (Arctocephalus pusillus pusillus) pups and adults. Ecology and Evolution, 14(3): e11085. doi:10.1002/ece3.11085.

    Abstract

    Animal vocal communication often relies on call sequences. The temporal patterns of such sequences can be adjusted to other callers, follow complex rhythmic structures or exhibit a metronome-like pattern (i.e., isochronous). How regular are the temporal patterns in animal signals, and what influences their precision? If present, are rhythms already there early in ontogeny? Here, we describe an exploratory study of Cape fur seal (Arctocephalus pusillus pusillus) barks—a vocalisation type produced across many pinniped species in rhythmic, percussive bouts. This study is the first quantitative description of barking in Cape fur seal pups. We analysed the rhythmic structures of spontaneous barking bouts of pups and adult females from the breeding colony in Cape Cross, Namibia. Barks of adult females exhibited isochrony, that is, they were produced at fairly regular points in time. By contrast, intervals between pup barks were more variable, occasionally skipping a bark in the isochronous series. In both age classes, beat precision, that is, how well the barks followed a perfect template, was worse when barking at higher rates. Differences could be explained by physiological factors, such as respiration or arousal. Whether, and how, isochrony develops in this species remains an open question. This study provides evidence towards a rhythmic production of barks in Cape fur seal pups and lays the groundwork for future studies to investigate the development of rhythm using multidimensional metrics.
  • Ozaki, Y., Tierney, A., Pfordresher, P. Q., McBride, J., Benetos, E., Proutskova, P., Chiba, G., Liu, F., Jacoby, N., Purdy, S. C., Opondo, P., Fitch, W. T., Hegde, S., Rocamora, M., Thorne, R., Nweke, F., Sadaphal, D. P., Sadaphal, P. M., Hadavi, S., Fujii, S., Choo, S., Naruse, M., Ehara, U., Sy, L., Parselelo, M. L., Anglada-Tort, M., Hansen, N. C., Haiduk, F., Færøvik, U., Magalhães, V., Krzyżanowski, W., Shcherbakova, O., Hereld, D., Barbosa, B. S., Correa Varella, M. A., Van Tongeren, M., Dessiatnitchenko, P., Zar Zar, S., El Kahla, I., Muslu, O., Troy, J., Lomsadze, T., Kurdova, D., Tsope, C., Fredriksson, D., Arabadjiev, A., Sarbah, J. P., Arhine, A., Meachair, T. Ó., Silva-Zurita, J., Soto-Silva, I., Millalonco, N. E. M., Ambrazevičius, R., Loui, P., Ravignani, A., Jadoul, Y., Larrouy-Maestri, P., Bruder, C., Teyxokawa, T. P., Kuikuro, U., Natsitsabui, R., Sagarzazu, N. B., Raviv, L., Zeng, M., Varnosfaderani, S. D., Gómez-Cañón, J. S., Kolff, K., Vanden Bos der Nederlanden, C., Chhatwal, M., David, R. M., I Putu Gede Setiawan, Lekakul, G., Borsan, V. N., Nguqu, N., & Savage, P. E. (2024). Globally, songs and instrumental melodies are slower, higher, and use more stable pitches than speech: A Registered Report. Science Advances, 10(20): eadm9797. doi:10.1126/sciadv.adm9797.

    Abstract

    Both music and language are found in all known human societies, yet no studies have compared similarities and differences between song, speech, and instrumental music on a global scale. In this Registered Report, we analyzed two global datasets: (i) 300 annotated audio recordings representing matched sets of traditional songs, recited lyrics, conversational speech, and instrumental melodies from our 75 coauthors speaking 55 languages; and (ii) 418 previously published adult-directed song and speech recordings from 209 individuals speaking 16 languages. Of our six preregistered predictions, five were strongly supported: Relative to speech, songs use (i) higher pitch, (ii) slower temporal rate, and (iii) more stable pitches, while both songs and speech used similar (iv) pitch interval size and (v) timbral brightness. Exploratory analyses suggest that features vary along a “musi-linguistic” continuum when including instrumental melodies and recited lyrics. Our study provides strong empirical evidence of cross-cultural regularities in music and speech.
  • Ozker, M., Yu, L., Dugan, P., Doyle, W., Friedman, D., Devinsky, O., & Flinker, A. (2024). Speech-induced suppression and vocal feedback sensitivity in human cortex. eLife, 13: RP94198. doi:10.7554/eLife.94198.1.

    Abstract

    Across the animal kingdom, neural responses in the auditory cortex are suppressed during vocalization, and humans are no exception. A common hypothesis is that suppression increases sensitivity to auditory feedback, enabling the detection of vocalization errors. This hypothesis has been previously confirmed in non-human primates, however a direct link between auditory suppression and sensitivity in human speech monitoring remains elusive. To address this issue, we obtained intracranial electroencephalography (iEEG) recordings from 35 neurosurgical participants during speech production. We first characterized the detailed topography of auditory suppression, which varied across superior temporal gyrus (STG). Next, we performed a delayed auditory feedback (DAF) task to determine whether the suppressed sites were also sensitive to auditory feedback alterations. Indeed, overlapping sites showed enhanced responses to feedback, indicating sensitivity. Importantly, there was a strong correlation between the degree of auditory suppression and feedback sensitivity, suggesting suppression might be a key mechanism that underlies speech monitoring. Further, we found that when participants produced speech with simultaneous auditory feedback, posterior STG was selectively activated if participants were engaged in a DAF paradigm, suggesting that increased attentional load can modulate auditory feedback sensitivity.
  • Ozyurek, A. (2002). Do speakers design their co-speech gestures for their addressees? The effects of addressee location on representational gestures. Journal of Memory and Language, 46(4), 688-704. doi:10.1006/jmla.2001.2826.

    Abstract

    Do speakers use spontaneous gestures accompanying their speech for themselves or to communicate their message to their addressees? Two experiments show that speakers change the orientation of their gestures depending on the location of shared space, that is, the intersection of the gesture spaces of the speakers and addressees. Gesture orientations change more frequently when they accompany spatial prepositions such as into and out, which describe motion that has a beginning and end point, rather than across, which depicts an unbounded path across space. Speakers change their gestures so that they represent the beginning and end point of motion INTO or OUT by moving into or out of the shared space. Thus, speakers design their gestures for their addressees and therefore use them to communicate. This has implications for the view that gestures are a part of language use as well as for the role of gestures in speech production.
  • Ozyurek, A., Furman, R., & Goldin-Meadow, S. (2015). On the way to language: Event segmentation in homesign and gesture. Journal of Child Language, 42, 64-94. doi:10.1017/S0305000913000512.

    Abstract

    Languages typically express semantic components of motion events such as manner (roll) and path (down) in separate lexical items. We explore how these combinatorial possibilities of language arise by focusing on (i) gestures produced by deaf children who lack access to input from a conventional language (homesign); (ii) gestures produced by hearing adults and children while speaking; and (iii) gestures used by hearing adults without speech when asked to do so in elicited descriptions of motion events with simultaneous manner and path. Homesigners tended to conflate manner and path in one gesture, but also used a mixed form, adding a manner and/or path gesture to the conflated form sequentially. Hearing speakers, with or without speech, used the conflated form, gestured manner, or path, but rarely used the mixed form. The mixed form may serve as an intermediate structure on the way to the discrete and sequenced forms found in natural languages.
  • Pederson, E., Danziger, E., Wilkins, D. G., Levinson, S. C., Kita, S., & Senft, G. (1998). Semantic typology and spatial conceptualization. Language, 74(3), 557-589. doi:10.2307/417793.
  • Peeters, D., Chu, M., Holler, J., Hagoort, P., & Ozyurek, A. (2015). Electrophysiological and kinematic correlates of communicative intent in the planning and production of pointing gestures and speech. Journal of Cognitive Neuroscience, 27(12), 2352-2368. doi:10.1162/jocn_a_00865.

    Abstract

    In everyday human communication, we often express our communicative intentions by manually pointing out referents in the material world around us to an addressee, often in tight synchronization with referential speech. This study investigated whether and how the kinematic form of index finger pointing gestures is shaped by the gesturer's communicative intentions and how this is modulated by the presence of concurrently produced speech. Furthermore, we explored the neural mechanisms underpinning the planning of communicative pointing gestures and speech. Two experiments were carried out in which participants pointed at referents for an addressee while the informativeness of their gestures and speech was varied. Kinematic and electrophysiological data were recorded online. It was found that participants prolonged the duration of the stroke and poststroke hold phase of their gesture to be more communicative, in particular when the gesture was carrying the main informational burden in their multimodal utterance. Frontal and P300 effects in the ERPs suggested the importance of intentional and modality-independent attentional mechanisms during the planning phase of informative pointing gestures. These findings contribute to a better understanding of the complex interplay between action, attention, intention, and language in the production of pointing gestures, a communicative act core to human interaction.
  • Peeters, D., Hagoort, P., & Ozyurek, A. (2015). Electrophysiological evidence for the role of shared space in online comprehension of spatial demonstratives. Cognition, 136, 64-84. doi:10.1016/j.cognition.2014.10.010.

    Abstract

    A fundamental property of language is that it can be used to refer to entities in the extra-linguistic physical context of a conversation in order to establish a joint focus of attention on a referent. Typological and psycholinguistic work across a wide range of languages has put forward at least two different theoretical views on demonstrative reference. Here we contrasted and tested these two accounts by investigating the electrophysiological brain activity underlying the construction of indexical meaning in comprehension. In two EEG experiments, participants watched pictures of a speaker who referred to one of two objects using speech and an index-finger pointing gesture. In contrast with separately collected native speakers’ linguistic intuitions, N400 effects showed a preference for a proximal demonstrative when speaker and addressee were in a face-to-face orientation and all possible referents were located in the shared space between them, irrespective of the physical proximity of the referent to the speaker. These findings reject egocentric proximity-based accounts of demonstrative reference, support a sociocentric approach to deixis, suggest that interlocutors construe a shared space during conversation, and imply that the psychological proximity of a referent may be more important than its physical proximity.
  • Perdue, C., & Klein, W. (1992). Why does the production of some learners not grammaticalize? Studies in Second Language Acquisition, 14, 259-272. doi:10.1017/S0272263100011116.

    Abstract

    In this paper we follow two beginning learners of English, Andrea and Santo, over a period of 2 years as they develop means to structure the declarative utterances they produce in various production tasks, and then we look at the following problem: In the early stages of acquisition, both learners develop a common learner variety; during these stages, we see a picture of two learner varieties developing similar regularities determined by the minimal requirements of the tasks we examine. Andrea subsequently develops further morphosyntactic means to achieve greater cohesion in his discourse. But Santo does not. Although we can identify contexts where the grammaticalization of Andrea's production allows him to go beyond the initial constraints of his variety, it is much more difficult to ascertain why Santo, faced with the same constraints in the same contexts, does not follow this path. Some lines of investigation into this problem are then suggested.
  • Perlman, M., Clark, N., & Falck, M. J. (2015). Iconic prosody in story reading. Cognitive Science, 39(6), 1348-1368. doi:10.1111/cogs.12190.

    Abstract

    Recent experiments have shown that people iconically modulate their prosody corresponding with the meaning of their utterance (e.g., Shintel et al., 2006). This article reports findings from a story reading task that expands the investigation of iconic prosody to abstract meanings in addition to concrete ones. Participants read stories that contrasted along concrete and abstract semantic dimensions of speed (e.g., a fast drive, slow career progress) and size (e.g., a small grasshopper, an important contract). Participants read fast stories at a faster rate than slow stories, and big stories with a lower pitch than small stories. The effect of speed was distributed across the stories, including portions that were identical across stories, whereas the size effect was localized to size-related words. Overall, these findings enrich the documentation of iconicity in spoken language and bear on our understanding of the relationship between gesture and speech.
  • Perlman, M., Dale, R., & Lupyan, G. (2015). Iconicity can ground the creation of vocal symbols. Royal Society Open Science, 2: 150152. doi:10.1098/rsos.150152.

    Abstract

    Studies of gestural communication systems find that they originate from spontaneously created iconic gestures. Yet, we know little about how people create vocal communication systems, and many have suggested that vocalizations do not afford iconicity beyond trivial instances of onomatopoeia. It is unknown whether people can generate vocal communication systems through a process of iconic creation similar to gestural systems. Here, we examine the creation and development of a rudimentary vocal symbol system in a laboratory setting. Pairs of participants generated novel vocalizations for 18 different meanings in an iterative ‘vocal’ charades communication game. The communicators quickly converged on stable vocalizations, and naive listeners could correctly infer their meanings in subsequent playback experiments. People's ability to guess the meanings of these novel vocalizations was predicted by how close the vocalization was to an iconic ‘meaning template’ we derived from the production data. These results strongly suggest that the meaningfulness of these vocalizations derived from iconicity. Our findings illuminate a mechanism by which iconicity can ground the creation of vocal symbols, analogous to the function of iconicity in gestural communication systems.
  • Perlman, M., & Clark, N. (2015). Learned vocal and breathing behavior in an enculturated gorilla. Animal Cognition, 18(5), 1165-1179. doi:10.1007/s10071-015-0889-6.

    Abstract

    We describe the repertoire of learned vocal and breathing-related behaviors (VBBs) performed by the enculturated gorilla Koko. We examined a large video corpus of Koko and observed 439 VBBs spread across 161 bouts. Our analysis shows that Koko exercises voluntary control over the performance of nine distinctive VBBs, which involve variable coordination of her breathing, larynx, and supralaryngeal articulators like the tongue and lips. Each of these behaviors is performed in the context of particular manual action routines and gestures. Based on these and other findings, we suggest that vocal learning and the ability to exercise volitional control over vocalization, particularly in a multimodal context, might have figured relatively early into the evolution of language, with some rudimentary capacity in place at the time of our last common ancestor with great apes.
  • Perniss, P. M., Zwitserlood, I., & Ozyurek, A. (2015). Does space structure spatial language? A comparison of spatial expression across sign languages. Language, 91(3), 611-641.

    Abstract

    The spatial affordances of the visual modality give rise to a high degree of similarity between sign languages in the spatial domain. This stands in contrast to the vast structural and semantic diversity in linguistic encoding of space found in spoken languages. However, the possibility and nature of linguistic diversity in spatial encoding in sign languages has not been rigorously investigated by systematic crosslinguistic comparison. Here, we compare locative expression in two unrelated sign languages, Turkish Sign Language (Türk İşaret Dili, TİD) and German Sign Language (Deutsche Gebärdensprache, DGS), focusing on the expression of figure-ground (e.g. cup on table) and figure-figure (e.g. cup next to cup) relationships in a discourse context. In addition to similarities, we report qualitative and quantitative differences between the sign languages in the formal devices used (i.e. unimanual vs. bimanual; simultaneous vs. sequential) and in the degree of iconicity of the spatial devices. Our results suggest that sign languages may display more diversity in the spatial domain than has been previously assumed, and in a way more comparable with the diversity found in spoken languages. The study contributes to a more comprehensive understanding of how space gets encoded in language.
  • Perniss, P. M., Ozyurek, A., & Morgan, G. (2015). The Influence of the visual modality on language structure and conventionalization: Insights from sign language and gesture. Topics in Cognitive Science, 7(1), 2-11. doi:10.1111/tops.12127.

    Abstract

    For humans, the ability to communicate and use language is instantiated not only in the vocal modality but also in the visual modality. The main examples of this are sign languages and (co-speech) gestures. Sign languages, the natural languages of Deaf communities, use systematic and conventionalized movements of the hands, face, and body for linguistic expression. Co-speech gestures, though non-linguistic, are produced in tight semantic and temporal integration with speech and constitute an integral part of language together with speech. The articles in this issue explore and document how gestures and sign languages are similar or different and how communicative expression in the visual modality can change from being gestural to grammatical in nature through processes of conventionalization. As such, this issue contributes to our understanding of how the visual modality shapes language and the emergence of linguistic structure in newly developing systems. Studying the relationship between signs and gestures provides a new window onto the human ability to recruit multiple levels of representation (e.g., categorical, gradient, iconic, abstract) in the service of using or creating conventionalized communicative systems.
  • Perniss, P. M., Ozyurek, A., & Morgan, G. (Eds.). (2015). The influence of the visual modality on language structure and conventionalization: Insights from sign language and gesture [Special Issue]. Topics in Cognitive Science, 7(1). doi:10.1111/tops.12113.
  • Perniss, P. M., & Ozyurek, A. (2015). Visible cohesion: A comparison of reference tracking in sign, speech, and co-speech gesture. Topics in Cognitive Science, 7(1), 36-60. doi:10.1111/tops.12122.

    Abstract

    Establishing and maintaining reference is a crucial part of discourse. In spoken languages, differential linguistic devices mark referents occurring in different referential contexts, that is, introduction, maintenance, and re-introduction contexts. Speakers using gestures as well as users of sign languages have also been shown to mark referents differentially depending on the referential context. This article investigates the modality-specific contribution of the visual modality in marking referential context by providing a direct comparison between sign language (German Sign Language; DGS) and co-speech gesture with speech (German) in elicited narratives. Across all forms of expression, we find that referents in subject position are referred to with more marking material in re-introduction contexts compared to maintenance contexts. Furthermore, we find that spatial modification is used as a modality-specific strategy in both DGS and German co-speech gesture, and that the configuration of referent locations in sign space and gesture space corresponds in an iconic and consistent way to the locations of referents in the narrated event. However, we find that spatial modification is used in different ways for marking re-introduction and maintenance contexts in DGS and German co-speech gesture. The findings are discussed in relation to the unique contribution of the visual modality to reference tracking in discourse when it is used in a unimodal system with full linguistic structure (i.e., as in sign) versus in a bimodal system that is a composite of speech and gesture.
  • Perry, L. K., Perlman, M., & Lupyan, G. (2015). Iconicity in English and Spanish and its relation to lexical category and age of acquisition. PLoS One, 10(9): e0137147. doi:10.1371/journal.pone.0137147.

    Abstract

    Signed languages exhibit iconicity (resemblance between form and meaning) across their vocabulary, and many non-Indo-European spoken languages feature sizable classes of iconic words known as ideophones. In comparison, Indo-European languages like English and Spanish are believed to be arbitrary outside of a small number of onomatopoeic words. In three experiments with English and two with Spanish, we asked native speakers to rate the iconicity of ~600 words from the English and Spanish MacArthur-Bates Communicative Developmental Inventories. We found that iconicity in the words of both languages varied in a theoretically meaningful way with lexical category. In both languages, adjectives were rated as more iconic than nouns and function words, and corresponding to typological differences between English and Spanish in verb semantics, English verbs were rated as relatively iconic compared to Spanish verbs. We also found that both languages exhibited a negative relationship between iconicity ratings and age of acquisition. Words learned earlier tended to be more iconic, suggesting that iconicity in early vocabulary may aid word learning. Altogether these findings show that iconicity is a graded quality that pervades vocabularies of even the most “arbitrary” spoken languages. The findings provide compelling evidence that iconicity is an important property of all languages, signed and spoken, including Indo-European languages.