Publications

  • Lai, V. T., Garrido Rodriguez, G., & Narasimhan, B. (2014). Thinking-for-speaking in early and late bilinguals. Bilingualism: Language and Cognition, 17, 139-152. doi:10.1017/S1366728913000151.

    Abstract

    When speakers describe motion events using different languages, they subsequently classify those events in language-specific ways (Gennari, Sloman, Malt & Fitch, 2002). Here we ask if bilingual speakers flexibly shift their event classification preferences based on the language in which they verbally encode those events. English–Spanish bilinguals and monolingual controls described motion events in either Spanish or English. Subsequently they judged the similarity of the motion events in a triad task. Bilinguals tested in Spanish and Spanish monolinguals were more likely to make similarity judgments based on the path of motion than bilinguals tested in English and English monolinguals. The effect was modulated in bilinguals by the age of acquisition of the second language. Late bilinguals based their judgments on path more often when Spanish rather than English was used to describe the motion events. Early bilinguals had a path preference independent of the language in use. These findings support “thinking-for-speaking” (Slobin, 1996) in late bilinguals.
  • Lartseva, A., Dijkstra, T., Kan, C. C., & Buitelaar, J. K. (2014). Processing of emotion words by patients with Autism Spectrum Disorders: Evidence from reaction times and EEG. Journal of Autism and Developmental Disorders, 44, 2882-2894. doi:10.1007/s10803-014-2149-z.

    Abstract

    This study investigated processing of emotion words in autism spectrum disorders (ASD) using reaction times and event-related potentials (ERP). Adults with (n = 21) and without (n = 20) ASD performed a lexical decision task on emotion and neutral words while their brain activity was recorded. Both groups showed faster responses to emotion words compared to neutral, suggesting intact early processing of emotion in ASD. In the ERPs, the control group showed a typical late positive component (LPC) at 400-600 ms for emotion words compared to neutral, while the ASD group showed no LPC. The between-group difference in LPC amplitude was significant, suggesting that emotion words were processed differently by individuals with ASD, although their behavioral performance was similar to that of typical individuals.
  • Lehtonen, M., Hulten, A., Rodríguez-Fornells, A., Cunillera, T., Tuomainen, J., & Laine, M. (2012). Differences in word recognition between early bilinguals and monolinguals: Behavioral and ERP evidence. Neuropsychologia, 50, 1362-1371. doi:10.1016/j.neuropsychologia.2012.02.021.

    Abstract

    We investigated the behavioral and brain responses (ERPs) of bilingual word recognition to three fundamental psycholinguistic factors, frequency, morphology, and lexicality, in early bilinguals vs. monolinguals. Earlier behavioral studies have reported larger frequency effects in bilinguals' nondominant vs. dominant language and in some studies also when compared to corresponding monolinguals. In ERPs, language processing differences between bilinguals and monolinguals have typically been found in the N400 component. In the present study, highly proficient Finnish-Swedish bilinguals who had acquired both languages during childhood were compared to Finnish monolinguals during a visual lexical decision task and simultaneous ERP recordings. Behaviorally, we found that the response latencies were overall longer in bilinguals than monolinguals, and that the effects for all three factors, frequency, morphology, and lexicality, were also larger in bilinguals even though they had acquired both languages early and were highly proficient in them. In line with this, the N400 effects induced by frequency, morphology, and lexicality were larger for bilinguals than monolinguals. Furthermore, the ERP results also suggest that while most inflected Finnish words are decomposed into stem and suffix, only monolinguals have encountered high frequency inflected word forms often enough to develop full-form representations for them. Larger behavioral and neural effects in bilinguals in these factors likely reflect a lower amount of exposure to words compared to monolinguals, as the language input of bilinguals is divided between two languages.
  • Lemhoefer, K., Schriefers, H., & Indefrey, P. (2014). Idiosyncratic Grammars: Syntactic Processing in Second Language Comprehension Uses Subjective Feature Representations. Journal of Cognitive Neuroscience, 26(7), 1428-1444. doi:10.1162/jocn_a_00609.

    Abstract

    Learning the syntax of a second language (L2) often represents a big challenge to L2 learners. Previous research on syntactic processing in L2 has mainly focused on how L2 speakers respond to "objective" syntactic violations, that is, phrases that are incorrect by native standards. In this study, we investigate how L2 learners, in particular those of less than near-native proficiency, process phrases that deviate from their own, "subjective," and often incorrect syntactic representations, that is, whether they use these subjective and idiosyncratic representations during sentence comprehension. We study this within the domain of grammatical gender in a population of German learners of Dutch, for which systematic errors of grammatical gender are well documented. These L2 learners as well as a control group of Dutch native speakers read Dutch sentences containing gender-marked determiner-noun phrases in which gender agreement was either (objectively) correct or incorrect. Furthermore, the noun targets were selected such that, in a high proportion of nouns, objective and subjective correctness would differ for German learners. The ERP results show a syntactic violation effect (P600) for objective gender agreement violations for native, but not for nonnative speakers. However, when the items were re-sorted for the L2 speakers according to subjective correctness (as assessed offline), the P600 effect emerged as well. Thus, rather than being insensitive to violations of gender agreement, L2 speakers are as sensitive as native speakers but base their sensitivity on their subjective, sometimes incorrect, representations.

  • Lemhöfer, K., & Broersma, M. (2012). Introducing LexTALE: A quick and valid Lexical Test for Advanced Learners of English. Behavior Research Methods, 44, 325-343. doi:10.3758/s13428-011-0146-0.

    Abstract

    The increasing number of experimental studies on second language (L2) processing, frequently with English as the L2, calls for a practical and valid measure of English vocabulary knowledge and proficiency. In a large-scale study with Dutch and Korean speakers of L2 English, we tested whether LexTALE, a 5-min vocabulary test, is a valid predictor of English vocabulary knowledge and, possibly, even of general English proficiency. Furthermore, the validity of LexTALE was compared with that of self-ratings of proficiency, a measure frequently used by L2 researchers. The results showed the following in both speaker groups: (1) LexTALE was a good predictor of English vocabulary knowledge; (2) it also correlated substantially with a measure of general English proficiency; and (3) LexTALE was generally superior to self-ratings in its predictions. LexTALE, but not self-ratings, also correlated highly with previous experimental data on two word recognition paradigms. The test can be carried out on or downloaded from www.lextale.com.
  • Lensink, S. E., Verdonschot, R. G., & Schiller, N. O. (2014). Morphological priming during language switching: An ERP study. Frontiers in Human Neuroscience, 8: 995. doi:10.3389/fnhum.2014.00995.

    Abstract

    Bilingual language control (BLC) is a much-debated issue in recent literature. Some models assume BLC is achieved by various types of inhibition of the non-target language, whereas other models do not assume any inhibitory mechanisms. In an event-related potential (ERP) study involving a long-lag morphological priming paradigm, participants were required to name pictures and read aloud words in both their L1 (Dutch) and L2 (English). Switch blocks contained intervening L1 items between L2 primes and targets, whereas non-switch blocks contained only L2 stimuli. In non-switch blocks, target picture names that were morphologically related to the primes were named faster than unrelated control items. In switch blocks, faster response latencies were recorded for morphologically related targets as well, demonstrating the existence of morphological priming in the L2. However, only in non-switch blocks did the ERP data show a reduced N400 trend, possibly suggesting that participants made use of a post-lexical checking mechanism during the switch block.
  • Lesage, E., Morgan, B. E., Olson, A. C., Meyer, A. S., & Miall, R. C. (2012). Cerebellar rTMS disrupts predictive language processing. Current Biology, 22, R794-R795. doi:10.1016/j.cub.2012.07.006.

    Abstract

    The human cerebellum plays an important role in language, amongst other cognitive and motor functions [1], but a unifying theoretical framework about cerebellar language function is lacking. In an established model of motor control, the cerebellum is seen as a predictive machine, making short-term estimations about the outcome of motor commands. This allows for flexible control, on-line correction, and coordination of movements [2]. The homogeneous cytoarchitecture of the cerebellar cortex suggests that similar computations occur throughout the structure, operating on different input signals and with different output targets [3]. Several authors have therefore argued that this ‘motor’ model may extend to cerebellar nonmotor functions [3], [4] and [5], and that the cerebellum may support prediction in language processing [6]. However, this hypothesis has never been directly tested. Here, we used the ‘Visual World’ paradigm [7], where on-line processing of spoken sentence content can be assessed by recording the latencies of listeners' eye movements towards objects mentioned. Repetitive transcranial magnetic stimulation (rTMS) was used to disrupt function in the right cerebellum, a region implicated in language [8]. After cerebellar rTMS, listeners showed delayed eye fixations to target objects predicted by sentence content, while there was no effect on eye fixations in sentences without predictable content. The prediction deficit was absent in two control groups. Our findings support the hypothesis that computational operations performed by the cerebellum may support prediction during both motor control and language processing.

    Additional information

    Lesage_Suppl_Information.pdf
  • Lev-Ari, S., & Peperkamp, S. (2014). An experimental study of the role of social factors in sound change. Laboratory Phonology, 5(3), 379-401. doi:10.1515/lp-2014-0013.

    Abstract

    There is great variation in whether foreign sounds in loanwords are adapted or retained. Importantly, the retention of foreign sounds can lead to a sound change in the language. We propose that social factors influence the likelihood of loanword sound adaptation, and use this case to introduce a novel experimental paradigm for studying language change that captures the role of social factors. Specifically, we show that the relative prestige of the donor language in the loanword's semantic domain influences the rate of sound adaptation. We further show that speakers adapt to the performance of their ‘community’, and that this adaptation leads to the creation of a norm. The results of this study are thus the first to show an effect of social factors on loanword sound adaptation in an experimental setting. Moreover, they open up a new domain of experimentally studying language change in a manner that integrates social factors.
  • Lev-Ari, S., & Keysar, B. (2012). Less detailed representation of non-native language: Why non-native speakers’ stories seem more vague. Discourse Processes, 49(7), 523-538. doi:10.1080/0163853X.2012.698493.

    Abstract

    The language of non-native speakers is less reliable than the language of native speakers in conveying the speaker’s intentions. We propose that listeners expect such reduced reliability and that this leads them to adjust the manner in which they process and represent non-native language by representing non-native language in less detail. Experiment 1 shows that when people listen to a story, they are less able to detect a word change with a non-native than with a native speaker. This suggests they represent the language of a non-native speaker with fewer details. Experiment 2 shows that, above a certain threshold, the higher participants’ working memory is, the less they are able to detect the change with a non-native speaker. This suggests that adjustment to non-native speakers depends on working memory. This research has implications for the role of interpersonal expectations in the way people process language.
  • Lev-Ari, S., & Keysar, B. (2014). Executive control influences linguistic representations. Memory & Cognition, 42(2), 247-263. doi:10.3758/s13421-013-0352-3.

    Abstract

    Although it is known that words acquire their meanings partly from the contexts in which they are used, we proposed that the way in which words are processed can also influence their representation. We further propose that individual differences in the way that words are processed can consequently lead to individual differences in the way that they are represented. Specifically, we showed that executive control influences linguistic representations by influencing the coactivation of competing and reinforcing terms. Consequently, people with poorer executive control perceive the meanings of homonymous terms as being more similar to one another, and those of polysemous terms as being less similar to one another, than do people with better executive control. We also showed that bilinguals with poorer executive control experience greater cross-linguistic interference than do bilinguals with better executive control. These results have implications for theories of linguistic representation and language organization.
  • Lev-Ari, S., San Giacomo, M., & Peperkamp, S. (2014). The effect of domain prestige and interlocutors’ bilingualism on sound adaptation. Journal of Sociolinguistics, 18(5), 658-684. doi:10.1111/josl.12102.

    Abstract

    There is great variability in whether foreign sounds in loanwords are adapted, such that segments show cross-word and cross-situational variation in adaptation. Previous research proposed that word frequency, speakers' level of bilingualism and neighborhoods' level of bilingualism can explain such variability. We test for the effect of these factors and propose two additional factors: interlocutors' level of bilingualism and the prestige of the donor language in the loanword's domain. Analyzing elicited productions of loanwords from Spanish into Mexicano in a village where Spanish and Mexicano enjoy prestige in complementary domains, we show that interlocutors' bilingualism and prestige influence the rate of sound adaptation. Additionally, we find that speakers accommodate to their interlocutors, regardless of the interlocutors' level of bilingualism. As retention of foreign sounds can lead to sound change, these results show that social factors can influence changes in a language's sound system.
  • Lev-Ari, S., & Peperkamp, S. (2014). The influence of inhibitory skill on phonological representations in production and perception. Journal of Phonetics, 47, 36-46. doi:10.1016/j.wocn.2014.09.001.

    Abstract

    Inhibition is known to play a role in speech perception and has been hypothesized to likewise influence speech production. In this paper we test whether individual differences in inhibitory skill can lead to individual differences in phonological representations in perception and production. We further examine whether the type of inhibition that influences phonological representation is domain-specific or domain-general. Native French speakers read aloud sentences with words containing a voiced stop that either have a voicing neighbor (target) or not (control). The duration of pre-voicing was measured. Participants similarly performed a lexical decision task on versions of these target and matched control words whose pre-voicing duration was manipulated. Lastly, participants performed linguistic and non-linguistic inhibition tasks. Results indicate that the lower speakers' linguistic or non-linguistic inhibition is, the easier it is for them to recognize words with a voiceless neighbor when these words have a shorter, intermediate, pre-voicing rather than a longer one. Inhibitory skill did not predict recognition time for control words, indicating that the effect was due to the greater activation of the voiceless neighbor. Inhibition did not predict pre-voicing duration in production. These results indicate that individual differences in cognitive skills can influence phonological representations in speech perception.
  • Levelt, W. J. M. (1995). Hoezo 'neuro'? Hoezo 'linguïstisch'? Intermediair, 31(46), 32-37.
  • Levelt, W. J. M. (1967). Note on the distribution of dominance times in binocular rivalry. British Journal of Psychology, 58, 143-145.
  • Levelt, W. J. M. (1995). The ability to speak: From intentions to spoken words. European Review, 3(1), 13-23. doi:10.1017/S1062798700001290.

    Abstract

    In recent decades, psychologists have become increasingly interested in our ability to speak. This paper sketches the present theoretical perspective on this most complex skill of homo sapiens. The generation of fluent speech is based on the interaction of various processing components. These mechanisms are highly specialized, dedicated to performing specific subroutines, such as retrieving appropriate words, generating morpho-syntactic structure, computing the phonological target shape of syllables, words, phrases and whole utterances, and creating and executing articulatory programmes. As in any complex skill, there is a self-monitoring mechanism that checks the output. These component processes are targets of increasingly sophisticated experimental research, of which this paper presents a few salient examples.
  • Levelt, W. J. M. (1984). Sprache und Raum. Texten und Schreiben, 20, 18-21.
  • Levelt, W. J. M. (1979). On learnability: A reply to Lasnik and Chomsky. Unpublished manuscript.
  • Levinson, S. C. (2012). Authorship: Include all institutes in publishing index [Correspondence]. Nature, 485, 582. doi:10.1038/485582c.
  • Levinson, S. C. (1979). Activity types and language. Linguistics, 17, 365-399.
  • Levinson, S. C., & Majid, A. (2014). Differential ineffability and the senses. Mind & Language, 29, 407-427. doi:10.1111/mila.12057.

    Abstract

    Ineffability, the degree to which percepts or concepts resist linguistic coding, is a fairly unexplored nook of cognitive science. Although philosophical preoccupations with qualia or nonconceptual content certainly touch upon the area, there has been little systematic thought and hardly any empirical work in recent years on the subject. We argue that ineffability is an important domain for the cognitive sciences. For examining differential ineffability across the senses may be able to tell us important things about how the mind works, how different modalities talk to one another, and how language does, or does not, interact with other mental faculties.
  • Levinson, S. C. (2012). Kinship and human thought. Science, 336(6084), 988-989. doi:10.1126/science.1222691.

    Abstract

    Language and communication are central to shaping concepts such as kinship categories.
  • Levinson, S. C. (2014). Language and Wallace's problem [Review of the books More than nature needs: Language, mind and evolution by D. Bickerton and A natural history of human thinking by M. Tomasello]. Science, 344, 1458-1459. doi:10.1126/science.1252988.
  • Levinson, S. C. (1980). Speech act theory: The state of the art. Language teaching and linguistics: Abstracts, 5-24.

    Abstract

    Survey article
  • Levinson, S. C., & Holler, J. (2014). The origin of human multi-modal communication. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 369(1651): 20130302. doi:10.1098/rstb.2013.0302.

    Abstract

    One reason for the apparent gulf between animal and human communication systems is that the focus has been on the presence or the absence of language as a complex expressive system built on speech. But language normally occurs embedded within an interactional exchange of multi-modal signals. If this larger perspective takes central focus, then it becomes apparent that human communication has a layered structure, where the layers may be plausibly assigned different phylogenetic and evolutionary origins—especially in the light of recent thoughts on the emergence of voluntary breathing and spoken language. This perspective helps us to appreciate the different roles that the different modalities play in human communication, as well as how they function as one integrated system despite their different roles and origins. It also offers possibilities for reconciling the ‘gesture-first hypothesis’ with that of gesture and speech having evolved together, hand in hand—or hand in mouth, rather—as one system.
  • Levinson, S. C. (2012). The original sin of cognitive science. Topics in Cognitive Science, 4, 396-403. doi:10.1111/j.1756-8765.2012.01195.x.

    Abstract

    Classical cognitive science was launched on the premise that the architecture of human cognition is uniform and universal across the species. This premise is biologically impossible and is being actively undermined by, for example, imaging genomics. Anthropology (including archaeology, biological anthropology, linguistics, and cultural anthropology) is, in contrast, largely concerned with the diversification of human culture, language, and biology across time and space—it belongs fundamentally to the evolutionary sciences. The new cognitive sciences that will emerge from the interactions with the biological sciences will focus on variation and diversity, opening the door for rapprochement with anthropology.
  • Levinson, S. C., & Gray, R. D. (2012). Tools from evolutionary biology shed new light on the diversification of languages. Trends in Cognitive Sciences, 16(3), 167-173. doi:10.1016/j.tics.2012.01.007.

    Abstract

    Computational methods have revolutionized evolutionary biology. In this paper we explore the impact these methods are now having on our understanding of the forces that both affect the diversification of human languages and shape human cognition. We show how these methods can illuminate problems ranging from the nature of constraints on linguistic variation to the role that social processes play in determining the rate of linguistic change. Throughout the paper we argue that the cognitive sciences should move away from an idealized model of human cognition, to a more biologically realistic model where variation is central.
  • Levy, J., Hagoort, P., & Démonet, J.-F. (2014). A neuronal gamma oscillatory signature during morphological unification in the left occipitotemporal junction. Human Brain Mapping, 35, 5847-5860. doi:10.1002/hbm.22589.

    Abstract

    Morphology is the aspect of language concerned with the internal structure of words. In the past decades, a large body of masked priming (behavioral and neuroimaging) data has suggested that the visual word recognition system automatically decomposes any morphologically complex word into a stem and its constituent morphemes. Yet the reliance of morphology on other reading processes (e.g., orthography and semantics), as well as its underlying neuronal mechanisms are yet to be determined. In the current magnetoencephalography study, we addressed morphology from the perspective of the unification framework, that is, by applying the Hold/Release paradigm, morphological unification was simulated via the assembly of internal morphemic units into a whole word. Trials representing real words were divided into words with a transparent (true) or a nontransparent (pseudo) morphological relationship. Morphological unification of truly suffixed words was faster and more accurate and additionally enhanced induced oscillations in the narrow gamma band (60–85 Hz, 260–440 ms) in the left posterior occipitotemporal junction. This neural signature could not be explained by a mere automatic lexical processing (i.e., stem perception), but more likely it related to a semantic access step during the morphological unification process. By demonstrating the validity of unification at the morphological level, this study contributes to the vast empirical evidence on unification across other language processes. Furthermore, we point out that morphological unification relies on the retrieval of lexical semantic associations via induced gamma band oscillations in a cerebral hub region for visual word form processing.
  • Lewis, A., Freeman-Mills, L., de la Calle-Mustienes, E., Giráldez-Pérez, R. M., Davis, H., Jaeger, E., Becker, M., Hubner, N. C., Nguyen, L. N., Zeron-Medina, J., Bond, G., Stunnenberg, H. G., Carvajal, J. J., Gomez-Skarmeta, J. L., Leedham, S., & Tomlinson, I. (2014). A polymorphic enhancer near GREM1 influences bowel cancer risk through differential CDX2 and TCF7L2 binding. Cell Reports, 8(4), 983-990. doi:10.1016/j.celrep.2014.07.020.

    Abstract

    A rare germline duplication upstream of the bone morphogenetic protein antagonist GREM1 causes a Mendelian-dominant predisposition to colorectal cancer (CRC). The underlying disease mechanism is strong, ectopic GREM1 overexpression in the intestinal epithelium. Here, we confirm that a common GREM1 polymorphism, rs16969681, is also associated with CRC susceptibility, conferring ∼20% differential risk in the general population. We hypothesized the underlying cause to be moderate differences in GREM1 expression. We showed that rs16969681 lies in a region of active chromatin with allele- and tissue-specific enhancer activity. The CRC high-risk allele was associated with stronger gene expression, and higher Grem1 mRNA levels increased the intestinal tumor burden in ApcMin mice. The intestine-specific transcription factor CDX2 and Wnt effector TCF7L2 bound near rs16969681, with significantly higher affinity for the risk allele, and CDX2 overexpression in CDX2/GREM1-negative cells caused re-expression of GREM1. rs16969681 influences CRC risk through effects on Wnt-driven GREM1 expression in colorectal tumors.
  • Liebal, K., & Haun, D. B. M. (2012). The importance of comparative psychology for developmental science [Review Article]. International Journal of Developmental Science, 6, 21-23. doi:10.3233/DEV-2012-11088.

    Abstract

    The aim of this essay is to elucidate the relevance of cross-species comparisons for the investigation of human behavior and its development. The focus is on the comparison of human children and another group of primates, the non-human great apes, with special attention to their cognitive skills. Integrating a comparative and developmental perspective, we argue, can provide additional answers to central and elusive questions about human behavior in general and its development in particular: What are the heritable predispositions of the human mind? What cognitive traits are uniquely human? In this sense, Developmental Science would benefit from results of Comparative Psychology.
  • Linkenauger, S. A., Lerner, M. D., Ramenzoni, V. C., & Proffitt, D. R. (2012). A perceptual-motor deficit predicts social and communicative impairments in individuals with autism spectrum disorders. Autism Research, 5, 352-362. doi:10.1002/aur.1248.

    Abstract

    Individuals with autism spectrum disorders (ASDs) have known impairments in social and motor skills. Identifying putative underlying mechanisms of these impairments could lead to improved understanding of the etiology of core social/communicative deficits in ASDs, and identification of novel intervention targets. The ability to perceptually integrate one's physical capacities with one's environment (affordance perception) may be such a mechanism. This ability has been theorized to be impaired in ASDs, but this question has never been directly tested. Crucially, affordance perception has been shown to be amenable to learning; thus, if it is implicated in deficits in ASDs, it may be a valuable unexplored intervention target. The present study compared affordance perception in adolescents and adults with ASDs to typically developing (TD) controls. Two groups of individuals (adolescents and adults) with ASDs and age-matched TD controls completed well-established action capability estimation tasks (reachability, graspability, and aperture passability). Their caregivers completed a measure of their lifetime social/communicative deficits. Compared with controls, individuals with ASDs showed unprecedented gross impairments in relating information about their bodies' action capabilities to visual information specifying the environment. The magnitude of these deficits strongly predicted the magnitude of social/communicative impairments in individuals with ASDs. Thus, social/communicative impairments in ASDs may derive, at least in part, from deficits in basic perceptual–motor processes (e.g. action capability estimation). Such deficits may impair the ability to maintain and calibrate the relationship between oneself and one's social and physical environments, and present a fruitful, novel, and unexplored target for intervention.
  • Liszkowski, U., Brown, P., Callaghan, T., Takada, A., & De Vos, C. (2012). A prelinguistic gestural universal of human communication. Cognitive Science, 36, 698-713. doi:10.1111/j.1551-6709.2011.01228.x.

    Abstract

    Several cognitive accounts of human communication argue for a language-independent, prelinguistic basis of human communication and language. The current study provides evidence for the universality of a prelinguistic gestural basis for human communication. We used a standardized, semi-natural elicitation procedure in seven very different cultures around the world to test for the existence of preverbal pointing in infants and their caregivers. Results were that by 10–14 months of age, infants and their caregivers pointed in all cultures in the same basic situation with similar frequencies and the same proto-typical morphology of the extended index finger. Infants’ pointing was best predicted by age and caregiver pointing, but not by cultural group. Further analyses revealed a strong relation between the temporal unfolding of caregivers’ and infants’ pointing events, uncovering a structure of early prelinguistic gestural conversation. Findings support the existence of a gestural, language-independent universal of human communication that forms a culturally shared, prelinguistic basis for diversified linguistic communication.
  • Liszkowski, U. (2014). Two sources of meaning in infant communication: Preceding action contexts and act-accompanying characteristics. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 369(1651): 20130294. doi:10.1098/rstb.2013.0294.
  • Littauer, R., Roberts, S. G., Winters, J., Bailes, R., Pleyer, M., & Little, H. (2014). From the savannah to the cloud: Blogging evolutionary linguistics research. The Past, Present and Future of Language Evolution Research: Student Volume of the 9th International Conference on the Evolution of Language, 121-133.

    Abstract

    Over the last thirty years, evolutionary linguistics has grown as a data-driven, interdisciplinary field and received accelerated interest due to its adoption of modern research methodologies. This growth is dependent upon the methods used to both disseminate and foster discussion of research by the larger academic community. We argue that the internet is increasingly being used as an efficient means of finding and presenting research. The traditional journal format for disseminating knowledge was well-designed within the confines of print publication. With the tools afforded to us by technology and the internet, the evolutionary linguistics research community is able to compensate for the necessary shortcomings of the journal format. We evaluate examples of how research blogging has aided language scientists. We review the state of the field for online, real-time academic debate, by covering particular instances of post-publication review and their reaction. We conclude by considering how evolutionary linguistics as a field can potentially benefit from using the internet.
  • Liu, C., Kong, X., Liu, X., Zhou, R., & Wu, B. (2014). Long-term total sleep deprivation reduces thalamic gray matter volume in healthy men. NeuroReport, 25(5), 320-323. doi:10.1097/WNR.0000000000000091.

    Abstract

    Sleep loss can alter extrinsic, task-related functional MRI signals involved in attention, memory, and executive function. However, the effects of sleep loss on brain structure have not been well characterized. Recent studies with patients with sleep disorders and animal models have demonstrated reduction of regional brain structure in the hippocampus and thalamus. In this study, using T1-weighted MRI, we examined the change of regional gray matter volume in healthy adults after long-term total sleep deprivation (∼72 h). Regional volume changes were explored using voxel-based morphometry with a paired two-sample t-test. The results revealed significant loss of gray matter volume in the thalamus but not in the hippocampus. No overall decrease in whole brain gray matter volume was noted after sleep deprivation. As expected, sleep deprivation significantly reduced visual vigilance as assessed by the continuous performance test, and this decrease was correlated significantly with reduced regional gray matter volume in thalamic regions. This study provides the first evidence for sleep loss-related changes in gray matter in the healthy adult brain.
  • Lohmann, A., & Takada, T. (2014). Order in NP conjuncts in spoken English and Japanese. Lingua, 152, 48-64. doi:10.1016/j.lingua.2014.09.011.

    Abstract

    In the emerging field of cross-linguistic studies on language production, one particularly interesting line of inquiry is possible differences between English and Japanese in ordering words and phrases. Previous research gives rise to the idea that there is a difference in accessing meaning versus form during linearization between these two languages. This assumption is based on observations of language-specific effects of the length factor on the order of phrases (short-before-long in English, long-before-short in Japanese). We contribute to the cross-linguistic exploration of such differences by investigating the variables underlying the internal order of NP conjuncts in spoken English and Japanese. Our quantitative analysis shows that similar influences underlie the ordering process across the two languages. Thus we do not find evidence for the aforementioned difference in accessing meaning versus form with this syntactic phenomenon. With regard to length, Japanese also exhibits a short-before-long preference. However, this tendency is significantly weaker in Japanese than in English, which we explain through an attenuating influence of the typical Japanese phrase structure pattern on the universal effect of short phrases being more accessible. We propose that a similar interaction between entrenched long-before-short schemas and universal accessibility effects is responsible for the varying effects of length in Japanese.
  • Ludwig, A., Vernesi, C., Lieckfeldt, D., Lattenkamp, E. Z., Wiethölter, A., & Lutz, W. (2012). Origin and patterns of genetic diversity of German fallow deer as inferred from mitochondrial DNA. European Journal of Wildlife Research, 58(2), 495-501. doi:10.1007/s10344-011-0571-5.

    Abstract

    Although not native to Germany, fallow deer (Dama dama) are commonly found today, but their origin as well as the genetic structure of the founding members is still unclear. In order to address these aspects, we sequenced ~400 bp of the mitochondrial d-loop of 365 animals from 22 locations in nine German Federal States. Nine new haplotypes were detected and archived in GenBank. Our data produced evidence for a Turkish origin of the German founders. However, German fallow deer populations have complex patterns of mtDNA variation. In particular, three distinct clusters were identified: Schleswig-Holstein, Brandenburg/Hesse/Rhineland and Saxony/lower Saxony/Mecklenburg/Westphalia/Anhalt. Signatures of recent demographic expansions were found for the latter two. An overall pattern of reduced genetic variation was therefore accompanied by a relatively strong genetic structure, as highlighted by an overall Φct value of 0.74 (P < 0.001).
  • Lum, J. A., & Kidd, E. (2012). An examination of the associations among multiple memory systems, past tense, and vocabulary in typically developing 5-year-old children. Journal of Speech, Language, and Hearing Research, 55(4), 989-1006. doi:10.1044/1092-4388(2011/10-0137).
  • Lüttjohann, A., Schoffelen, J.-M., & Van Luijtelaar, G. (2014). Termination of ongoing spike-wave discharges investigated by cortico-thalamic network analyses. Neurobiology of Disease, 70, 127-137. doi:10.1016/j.nbd.2014.06.007.

    Abstract

    Purpose: While decades of research were devoted to study generation mechanisms of spontaneous spike and wave discharges (SWD), little attention has been paid to network mechanisms associated with the spontaneous termination of SWD. In the current study coupling-dynamics at the onset and termination of SWD were studied in an extended part of the cortico-thalamo-cortical system of freely moving, genetic absence epileptic WAG/Rij rats. Methods: Local-field potential recordings of 16 male WAG/Rij rats, equipped with multiple electrodes targeting layer 4 to 6 of the somatosensory-cortex (ctx4, ctx5, ctx6), rostral and caudal reticular thalamic nucleus (rRTN & cRTN), Ventral Postero Medial (VPM), anterior- (ATN) and posterior (Po) thalamic nucleus, were obtained. Six seconds lasting pre-SWD->SWD, SWD->post SWD and control periods were analyzed with time-frequency methods and between-region interactions were quantified with frequency-resolved Granger Causality (GC) analysis. Results: Most channel-pairs showed increases in GC lasting from onset to offset of the SWD. While for most thalamo-thalamic pairs a dominant coupling direction was found during the complete SWD, most cortico-thalamic pairs only showed a dominant directional drive (always from cortex to thalamus) during the first 500 ms of SWD. Channel-pair ctx4-rRTN showed a longer lasting dominant cortical drive, which stopped 1.5 sec prior to SWD offset. This early decrease in directional coupling was followed by an increase in directional coupling from cRTN to rRTN 1 sec prior to SWD offset. For channel pairs ctx5-Po and ctx6-Po the heightened cortex->thalamus coupling remained until 1.5 sec following SWD offset, while the thalamus->cortex coupling for these pairs stopped at SWD offset. Conclusion: The high directional coupling from somatosensory cortex to the thalamus at SWD onset is in good agreement with the idea of a cortical epileptic focus that initiates and entrains other brain structures into seizure activity. The decrease of cortex to rRTN coupling as well as the increased coupling from cRTN to rRTN preceding SWD termination demonstrate that SWD termination is a gradual process that involves both cortico-thalamic as well as intrathalamic processes. The rostral RTN seems to be an important resonator for SWD and relevant for maintenance, while the cRTN might inhibit this oscillation. The somatosensory cortex seems to attempt to reinitiate SWD following its offset via its strong coupling to the posterior thalamus.
  • MacLean, E. L., Matthews, L. J., Hare, B. A., Nunn, C. L., Anderson, R. C., Aureli, F., Brannon, E. M., Call, J., Drea, C. M., Emery, N. J., Haun, D. B. M., Herrmann, E., Jacobs, L. F., Platt, M. L., Rosati, A. G., Sandel, A. A., Schroepfer, K. K., Seed, A. M., Tan, J., Van Schaik, C. P., & Wobber, V. (2012). How does cognition evolve? Phylogenetic comparative psychology. Animal Cognition, 15, 223-238. doi:10.1007/s10071-011-0448-8.

    Abstract

    Now more than ever animal studies have the potential to test hypotheses regarding how cognition evolves. Comparative psychologists have developed new techniques to probe the cognitive mechanisms underlying animal behavior, and they have become increasingly skillful at adapting methodologies to test multiple species. Meanwhile, evolutionary biologists have generated quantitative approaches to investigate the phylogenetic distribution and function of phenotypic traits, including cognition. In particular, phylogenetic methods can quantitatively (1) test whether specific cognitive abilities are correlated with life history (e.g., lifespan), morphology (e.g., brain size), or socio-ecological variables (e.g., social system), (2) measure how strongly phylogenetic relatedness predicts the distribution of cognitive skills across species, and (3) estimate the ancestral state of a given cognitive trait using measures of cognitive performance from extant species. Phylogenetic methods can also be used to guide the selection of species comparisons that offer the strongest tests of a priori predictions of cognitive evolutionary hypotheses (i.e., phylogenetic targeting). Here, we explain how an integration of comparative psychology and evolutionary biology will answer a host of questions regarding the phylogenetic distribution and history of cognitive traits, as well as the evolutionary processes that drove their evolution.
  • Magi, A., Tattini, L., Palombo, F., Benelli, M., Gialluisi, A., Giusti, B., Abbate, R., Seri, M., Gensini, G. F., Romeo, G., & Pippucci, T. (2014). H3M2: Detection of runs of homozygosity from whole-exome sequencing data. Bioinformatics, 2852-2859. doi:10.1093/bioinformatics/btu401.

    Abstract

    Motivation: Runs of homozygosity (ROH) are sizable chromosomal stretches of homozygous genotypes, ranging in length from tens of kilobases to megabases. ROHs can be relevant for population and medical genetics, playing a role in predisposition to both rare and common disorders. ROHs are commonly detected by single nucleotide polymorphism (SNP) microarrays, but attempts have been made to use whole-exome sequencing (WES) data. Currently available methods developed for the analysis of uniformly spaced SNP-array maps do not fit easily to the analysis of the sparse and non-uniform distribution of the WES target design. Results: To meet the need of an approach specifically tailored to WES data, we developed H3M2, an original algorithm based on a heterogeneous hidden Markov model that incorporates inter-marker distances to detect ROH from WES data. We evaluated the performance of H3M2 to correctly identify ROHs on synthetic chromosomes and examined its accuracy in detecting ROHs of different length (short, medium and long) from real 1000 genomes project data. H3M2 turned out to be more accurate than GERMLINE and PLINK, two state-of-the-art algorithms, especially in the detection of short and medium ROHs.
  • Magyari, L., Bastiaansen, M. C. M., De Ruiter, J. P., & Levinson, S. C. (2014). Early anticipation lies behind the speed of response in conversation. Journal of Cognitive Neuroscience, 26(11), 2530-2539. doi:10.1162/jocn_a_00673.

    Abstract

    RTs in conversation, with average gaps of 200 msec and often less, beat standard RTs, despite the complexity of response and the lag in speech production (600 msec or more). This can only be achieved by anticipation of timing and content of turns in conversation, about which little is known. Using EEG and an experimental task with conversational stimuli, we show that estimation of turn durations is based on anticipating the way the turn would be completed. We found a neuronal correlate of turn-end anticipation localized in ACC and inferior parietal lobule, namely a beta-frequency desynchronization as early as 1250 msec before the end of the turn. We suggest that anticipation of the other's utterance leads to accurately timed transitions in everyday conversations.
  • Magyari, L., & De Ruiter, J. P. (2012). Prediction of turn-ends based on anticipation of upcoming words. Frontiers in Psychology, 3, 376. doi:10.3389/fpsyg.2012.00376.

    Abstract

    During conversation listeners have to perform several tasks simultaneously. They have to comprehend their interlocutor’s turn, while also having to prepare their own next turn. Moreover, a careful analysis of the timing of natural conversation reveals that next speakers also time their turns very precisely. This is possible only if listeners can predict accurately when the speaker’s turn is going to end. But how are people able to predict when a turn ends? We propose that people know when a turn ends, because they know how it ends. We conducted a gating study to examine if better turn-end predictions coincide with more accurate anticipation of the last words of a turn. We used turns from an earlier button-press experiment where people had to press a button exactly when a turn ended. We show that the proportion of correct guesses in our experiment is higher when a turn’s end was estimated better in time in the button-press experiment. When people were too late in their anticipation in the button-press experiment, they also anticipated more words in our gating study. We conclude that people made predictions in advance about the upcoming content of a turn and used this prediction to estimate the duration of the turn. We suggest an economical model of turn-end anticipation that is based on anticipation of words and syntactic frames in comprehension.
  • Majid, A. (2012). Current emotion research in the language sciences. Emotion Review, 4, 432-443. doi:10.1177/1754073912445827.

    Abstract

    When researchers think about the interaction between language and emotion, they typically focus on descriptive emotion words. This review demonstrates that emotion can interact with language at many levels of structure, from the sound patterns of a language to its lexicon and grammar, and beyond to how it appears in conversation and discourse. Findings are considered from diverse subfields across the language sciences, including cognitive linguistics, psycholinguistics, linguistic anthropology, and conversation analysis. Taken together, it is clear that emotional expression is finely tuned to language-specific structures. Future emotion research can better exploit cross-linguistic variation to unravel possible universal principles operating between language and emotion.
  • Majid, A., & Burenhult, N. (2014). Odors are expressible in language, as long as you speak the right language. Cognition, 130(2), 266-270. doi:10.1016/j.cognition.2013.11.004.

    Abstract

    From Plato to Pinker there has been the common belief that the experience of a smell is impossible to put into words. Decades of studies have confirmed this observation. But the studies to date have focused on participants from urbanized Western societies. Cross-cultural research suggests that there may be other cultures where odors play a larger role. The Jahai of the Malay Peninsula are one such group. We tested whether Jahai speakers could name smells as easily as colors in comparison to a matched English group. Using a free naming task we show on three different measures that Jahai speakers find it as easy to name odors as colors, whereas English speakers struggle with odor naming. Our findings show that the long-held assumption that people are bad at naming smells is not universally true. Odors are expressible in language, as long as you speak the right language.
  • Majid, A. (2012). The role of language in a science of emotion [Comment]. Emotion Review, 4, 380-381. doi:10.1177/1754073912445819.

    Abstract

    Emotion scientists often take an ambivalent stance concerning the role of language in a science of emotion. However, it is important for emotion researchers to contemplate some of the consequences of current practices for their theory building. There is a danger of an overreliance on the English language as a transparent window into emotion categories. More consideration has to be given to cross-linguistic comparison in the future so that models of language acquisition and of the language–cognition interface better fit the extant variation found in today’s peoples.
  • Malt, B. C., Ameel, E., Imai, M., Gennari, S., Saji, N., & Majid, A. (2014). Human locomotion in languages: Constraints on moving and meaning. Journal of Memory and Language, 74, 107-123. doi:10.1016/j.jml.2013.08.003.

    Abstract

    The distinctions between red and yellow or arm and hand may seem self-evident to English speakers, but they are not: Languages differ in the named distinctions they make. To help understand what constrains word meaning and how variation arises, we examined name choices in English, Dutch, Spanish, and Japanese for 36 instances of human locomotion. Naming patterns showed commonalities largely interpretable in terms of perceived physical similarities among the instances. There was no evidence for languages jointly ignoring salient physical distinctions to build meaning on other bases, nor for a shift in the basis of word meanings between parts of the domain of more vs. less importance to everyday life. Overall, the languages differed most notably in how many named distinctions they made, a form of variation that may be linked to linguistic typology. These findings, considered along with naming patterns from other domains, suggest recurring principles of constraint and variation across domains.
  • Mani, N., & Huettig, F. (2012). Prediction during language processing is a piece of cake - but only for skilled producers. Journal of Experimental Psychology: Human Perception and Performance, 38(4), 843-847. doi:10.1037/a0029284.

    Abstract

    Are there individual differences in children’s prediction of upcoming linguistic input and what do these differences reflect? Using a variant of the preferential looking paradigm (Golinkoff et al., 1987), we found that, upon hearing a sentence like “The boy eats a big cake”, two-year-olds fixate edible objects in a visual scene (a cake) soon after they hear the semantically constraining verb, eats, and prior to hearing the word, cake. Importantly, children’s prediction skills were significantly correlated with their productive vocabulary size – Skilled producers (i.e., children with large production vocabularies) showed evidence of predicting upcoming linguistic input while low producers did not. Furthermore, we found that children’s prediction ability is tied specifically to their production skills and not to their comprehension skills. Prediction is really a piece of cake, but only for skilled producers.
  • Mani, N., & Huettig, F. (2014). Word reading skill predicts anticipation of upcoming spoken language input: A study of children developing proficiency in reading. Journal of Experimental Child Psychology, 126, 264-279. doi:10.1016/j.jecp.2014.05.004.

    Abstract

    Despite the efficiency with which language users typically process spoken language, a growing body of research finds substantial individual differences in both the speed and accuracy of spoken language processing potentially attributable to participants’ literacy skills. Against this background, the current study takes a look at the role of word reading skill in listeners’ anticipation of upcoming spoken language input in children at the cusp of learning to read: if reading skills impact predictive language processing, then children at this stage of literacy acquisition should be most susceptible to the effects of reading skills on spoken language processing. We tested 8-year-old children on their prediction of upcoming spoken language input in an eye-tracking task. While children, as in previous studies to date, were successfully able to anticipate upcoming spoken language input, there was a strong positive correlation between children’s word reading (but not their pseudo-word reading and meta-phonological awareness or their spoken word recognition) skills and their prediction skills. We suggest that these findings are most compatible with the notion that the process of learning orthographic representations during reading acquisition sharpens pre-existing lexical representations which in turn also supports anticipation of upcoming spoken words.
  • Martin, A. E., Nieuwland, M. S., & Carreiras, M. (2014). Agreement attraction during comprehension of grammatical sentences: ERP evidence from ellipsis. Brain and Language, 135, 42-51. doi:10.1016/j.bandl.2014.05.001.

    Abstract

    Successful dependency resolution during language comprehension relies on accessing certain representations in memory, and not others. We recently reported event-related potential (ERP) evidence that syntactically unavailable, intervening attractor-nouns interfered during comprehension of Spanish noun-phrase ellipsis (the determiner otra/otro): grammatically correct determiners that mismatched the gender of attractor-nouns elicited a sustained negativity as also observed for incorrect determiners (Martin, Nieuwland, & Carreiras, 2012). The current study sought to extend this novel finding in sentences containing object-extracted relative clauses, where the antecedent may be less prominent. Whereas correct determiners that matched the gender of attractor-nouns now elicited an early anterior negativity as also observed for mismatching determiners, the previously reported interaction pattern was replicated in P600 responses to subsequent words. Our results suggest that structural and gender information is simultaneously taken into account, providing further evidence for retrieval interference during comprehension of grammatical sentences.
  • Martin, A. E., Nieuwland, M. S., & Carreiras, M. (2012). Event-related brain potentials index cue-based retrieval interference during sentence comprehension. NeuroImage, 59(2), 1859-1869. doi:10.1016/j.neuroimage.2011.08.057.

    Abstract

    Successful language use requires access to products of past processing within an evolving discourse. A central issue for any neurocognitive theory of language then concerns the role of memory variables during language processing. Under a cue-based retrieval account of language comprehension, linguistic dependency resolution (e.g., retrieving antecedents) is subject to interference from other information in the sentence, especially information that occurs between the words that form the dependency (e.g., between the antecedent and the retrieval site). Retrieval interference may then shape processing complexity as a function of the match of the information at retrieval with the antecedent versus other recent or similar items in memory. To address these issues, we studied the online processing of ellipsis in Castilian Spanish, a language with morphological gender agreement. We recorded event-related brain potentials while participants read sentences containing noun-phrase ellipsis indicated by the determiner otro/a (‘another’). These determiners had a grammatically correct or incorrect gender with respect to their antecedent nouns that occurred earlier in the sentence. Moreover, between each antecedent and determiner, another noun phrase occurred that was structurally unavailable as an antecedent and that matched or mismatched the gender of the antecedent (i.e., a local agreement attractor). In contrast to extant P600 results on agreement violation processing, and inconsistent with predictions from neurocognitive models of sentence processing, grammatically incorrect determiners evoked a sustained, broadly distributed negativity compared to correct ones between 400 and 1000 ms after word onset, possibly related to sustained negativities as observed for referential processing difficulties. Crucially, this effect was modulated by the attractor: an increased negativity was observed for grammatically correct determiners that did not match the gender of the attractor, suggesting that structurally unavailable noun phrases were at least temporarily considered for grammatically correct ellipsis. These results constitute the first ERP evidence for cue-based retrieval interference during comprehension of grammatical sentences.
  • Matic, D., & Nikolaeva, I. (2014). Realis mood, focus, and existential closure in Tundra Yukaghir. Lingua, 150, 202-231. doi:10.1016/j.lingua.2014.07.016.

    Abstract

    The nature and the typological validity of the categories ‘realis’ and ‘irrealis’ have been a matter of intensive debate. In this paper we analyse the realis/irrealis dichotomy in Tundra Yukaghir (isolate, north-eastern Siberia), and show that in this language realis is associated with a meaningful contribution, namely, existential quantification over events. This contribution must be expressed overtly by a combination of syntactic and prosodic means. Irrealis is the default category: the clause is interpreted as irrealis in the absence of the marker of realis. This implies that the relevant typological question may turn out to be the semantics of realis, rather than irrealis. We further argue that the Tundra Yukaghir realis is a hybrid category composed of elements from different domains (information structure, lexical semantics, and quantification) unified at the level of interpretation via pragmatic enrichment. The concept of notional mood must therefore be expanded to include moods which come about in interpretation and do not constitute a discrete denotation.
  • Matić, D. (2012). Review of: Assertion by Mark Jary, Palgrave Macmillan, 2010 [Web Post]. The LINGUIST List. Retrieved from http://linguistlist.org/pubs/reviews/get-review.cfm?SubID=4547242.

    Abstract

    Even though assertion has held centre stage in much philosophical and linguistic theorising on language, Mark Jary’s ‘Assertion’ represents the first book-length treatment of the topic. The content of the book is aptly described by the author himself: “This book has two aims. One is to bring together and discuss in a systematic way a range of perspectives on assertion: philosophical, linguistic and psychological. [...] The other is to present a view of the pragmatics of assertion, with particular emphasis on the contribution of the declarative mood to the process of utterance interpretation.” (p. 1). The promise contained in this introductory note is to a large extent fulfilled: the first seven chapters of the book discuss many of the relevant philosophical and linguistic approaches to assertion and at the same time provide the background for the presentation of Jary’s own view on the pragmatics of declaratives, presented in the last (and longest) chapter.
  • Mattys, S. L., & Scharenborg, O. (2014). Phoneme categorization and discrimination in younger and older adults: A comparative analysis of perceptual, lexical, and attentional factors. Psychology and Aging, 29(1), 150-162. doi:10.1037/a0035387.

    Abstract

    This study investigates the extent to which age-related language processing difficulties are due to a decline in sensory processes or to a deterioration of cognitive factors, specifically, attentional control. Two facets of attentional control were examined: inhibition of irrelevant information and divided attention. Younger and older adults were asked to categorize the initial phoneme of spoken syllables (“Was it m or n?”), trying to ignore the lexical status of the syllables. The phonemes were manipulated to range in eight steps from m to n. Participants also did a discrimination task on syllable pairs (“Were the initial sounds the same or different?”). Categorization and discrimination were performed under either divided attention (concurrent visual-search task) or focused attention (no visual task). The results showed that even when the younger and older adults were matched on their discrimination scores: (1) the older adults had more difficulty inhibiting lexical knowledge than did younger adults, (2) divided attention weakened lexical inhibition in both younger and older adults, and (3) divided attention impaired sound discrimination more in older than younger listeners. The results confirm the independent and combined contribution of sensory decline and deficit in attentional control to language processing difficulties associated with aging. The relative weight of these variables and their mechanisms of action are discussed in the context of theories of aging and language.
  • Mazuka, R., Hasegawa, M., & Tsuji, S. (2014). Development of non-native vowel discrimination: Improvement without exposure. Developmental Psychobiology, 56(2), 192-209. doi:10.1002/dev.21193.

    Abstract

    The present study tested Japanese 4.5- and 10-month-old infants' ability to discriminate three German vowel pairs, none of which are contrastive in Japanese, using a visual habituation–dishabituation paradigm. Japanese adults' discrimination of the same pairs was also tested. The results revealed that Japanese 4.5-month-old infants discriminated the German /bu:k/-/by:k/ contrast, but they showed no evidence of discriminating the /bi:k/-/be:k/ or /bu:k/-/bo:k/ contrasts. Japanese 10-month-old infants, on the other hand, discriminated the German /bi:k/-/be:k/ contrast, while they showed no evidence of discriminating the /bu:k/-/by:k/ or /bu:k/-/bo:k/ contrasts. Japanese adults, in contrast, were highly accurate in their discrimination of all of the pairs. The results indicate that discrimination of non-native contrasts is not always easy even for young infants, and that their ability to discriminate non-native contrasts can improve with age even when they receive no exposure to a language in which the given contrast is phonemic.
  • McQueen, J. M., & Huettig, F. (2012). Changing only the probability that spoken words will be distorted changes how they are recognized. Journal of the Acoustical Society of America, 131(1), 509-517. doi:10.1121/1.3664087.

    Abstract

    An eye-tracking experiment examined contextual flexibility in speech processing in response to distortions in spoken input. Dutch participants heard Dutch sentences containing critical words and saw four-picture displays. The name of one picture either had the same onset phonemes as the critical word or had a different first phoneme and rhymed. Participants fixated onset-overlap more than rhyme-overlap pictures, but this tendency varied with speech quality. Relative to a baseline with noise-free sentences, participants looked less at onset-overlap and more at rhyme-overlap pictures when phonemes in the sentences (but not in the critical words) were replaced by noises like those heard on a badly-tuned AM radio. The position of the noises (word-initial or word-medial) had no effect. Noises elsewhere in the sentences apparently made evidence about the critical word less reliable: Listeners became less confident of having heard the onset-overlap name but also less sure of having not heard the rhyme-overlap name. The same acoustic information has different effects on spoken-word recognition as the probability of distortion changes.
  • McQueen, J. M., & Huettig, F. (2014). Interference of spoken word recognition through phonological priming from visual objects and printed words. Attention, Perception & Psychophysics, 76, 190-200. doi:10.3758/s13414-013-0560-8.

    Abstract

    Three cross-modal priming experiments examined the influence of pre-exposure to pictures and printed words on the speed of spoken word recognition. Targets for auditory lexical decision were spoken Dutch words and nonwords, presented in isolation (Experiments 1 and 2) or after a short phrase (Experiment 3). Auditory stimuli were preceded by primes which were pictures (Experiments 1 and 3) or those pictures’ printed names (Experiment 2). Prime-target pairs were phonologically onset-related (e.g., pijl-pijn, arrow-pain), were from the same semantic category (e.g., pijl-zwaard, arrow-sword), or were unrelated on both dimensions. Phonological interference and semantic facilitation were observed in all experiments. Priming magnitude was similar for pictures and printed words, and did not vary with picture viewing time or number of pictures in the display (either one or four). These effects arose even though participants were not explicitly instructed to name the pictures and even though strategic naming would interfere with lexical decision-making. This suggests that, by default, processing of related pictures and printed words influences how quickly we recognize related spoken words.
  • McQueen, J. M., Cutler, A., Briscoe, T., & Norris, D. (1995). Models of continuous speech recognition and the contents of the vocabulary. Language and Cognitive Processes, 10, 309-331. doi:10.1080/01690969508407098.

    Abstract

    Several models of spoken word recognition postulate that recognition is achieved via a process of competition between lexical hypotheses. Competition not only provides a mechanism for isolated word recognition, it also assists in continuous speech recognition, since it offers a means of segmenting continuous input into individual words. We present statistics on the pattern of occurrence of words embedded in the polysyllabic words of the English vocabulary, showing that an overwhelming majority (84%) of polysyllables have shorter words embedded within them. Positional analyses show that these embeddings are most common at the onsets of the longer words. Although both phonological and syntactic constraints could rule out some embedded words, they do not remove the problem. Lexical competition provides a means of dealing with lexical embedding. It is also supported by a growing body of experimental evidence. We present results which indicate that competition operates both between word candidates that begin at the same point in the input and candidates that begin at different points (McQueen, Norris, & Cutler, 1994; Norris, McQueen, & Cutler, in press). We conclude that lexical competition is an essential component in models of continuous speech recognition.
  • McQueen, J. M., Tyler, M., & Cutler, A. (2012). Lexical retuning of children’s speech perception: Evidence for knowledge about words’ component sounds. Language Learning and Development, 8, 317-339. doi:10.1080/15475441.2011.641887.

    Abstract

    Children hear new words from many different talkers; to learn words most efficiently, they should be able to represent them independently of talker-specific pronunciation detail. However, do children know what the component sounds of words should be, and can they use that knowledge to deal with different talkers' phonetic realizations? Experiment 1 replicated prior studies on lexically guided retuning of speech perception in adults, with a picture-verification methodology suitable for children. One participant group heard an ambiguous fricative ([s/f]) replacing /f/ (e.g., in words like giraffe); another group heard [s/f] replacing /s/ (e.g., in platypus). The first group subsequently identified more tokens on a Simpie-[s/f]impie-Fimpie toy-name continuum as Fimpie. Experiments 2 and 3 found equivalent lexically guided retuning effects in 12- and 6-year-olds. Children aged 6 have all that is needed for adjusting to talker variation in speech: detailed and abstract phonological representations and the ability to apply them during spoken-word recognition.
  • Mellem, M. S., Bastiaansen, M. C. M., Pilgrim, L. K., Medvedev, A. V., & Friedman, R. B. (2012). Word class and context affect alpha-band oscillatory dynamics in an older population. Frontiers in Psychology, 3, 97. doi:10.3389/fpsyg.2012.00097.

    Abstract

    Differences in the oscillatory EEG dynamics of reading open class (OC) and closed class (CC) words have previously been found (Bastiaansen et al., 2005) and are thought to reflect differences in lexical-semantic content between these word classes. In particular, the theta-band (4–7 Hz) seems to play a prominent role in lexical-semantic retrieval. We tested whether this theta effect is robust in an older population of subjects. Additionally, we examined how the context of a word can modulate the oscillatory dynamics underlying retrieval for the two different classes of words. Older participants (mean age 55) read words presented in either syntactically correct sentences or in a scrambled order (“scrambled sentence”) while their EEG was recorded. We performed time–frequency analysis to examine how power varied based on the context or class of the word. We observed larger power decreases in the alpha (8–12 Hz) band between 200–700 ms for the OC compared to CC words, but this was true only for the scrambled sentence context. We did not observe differences in theta power between these conditions. Context exerted an effect on the alpha and low beta (13–18 Hz) bands between 0 and 700 ms. These results suggest that the previously observed word class effects on theta power changes in a younger participant sample do not seem to be a robust effect in this older population. Though this is an indirect comparison between studies, it may suggest the existence of aging effects on word retrieval dynamics for different populations. Additionally, the interaction between word class and context suggests that word retrieval mechanisms interact with sentence-level comprehension mechanisms in the alpha-band.
  • Menenti, L., Petersson, K. M., & Hagoort, P. (2012). From reference to sense: How the brain encodes meaning for speaking. Frontiers in Psychology, 2, 384. doi:10.3389/fpsyg.2011.00384.

    Abstract

    In speaking, semantic encoding is the conversion of a non-verbal mental representation (the reference) into a semantic structure suitable for expression (the sense). In this fMRI study on sentence production we investigate how the speaking brain accomplishes this transition from non-verbal to verbal representations. In an overt picture description task, we manipulated repetition of sense (the semantic structure of the sentence) and reference (the described situation) separately. By investigating brain areas showing response adaptation to repetition of each of these sentence properties, we disentangle the neuronal infrastructure for these two components of semantic encoding. We also performed a control experiment with the same stimuli and design but without any linguistic task to identify areas involved in perception of the stimuli per se. The bilateral inferior parietal lobes were selectively sensitive to repetition of reference, while left inferior frontal gyrus showed selective suppression to repetition of sense. Strikingly, a widespread network of areas associated with language processing (left middle frontal gyrus, bilateral superior parietal lobes and bilateral posterior temporal gyri) all showed repetition suppression to both sense and reference processing. These areas are probably involved in mapping reference onto sense, the crucial step in semantic encoding. These results enable us to track the transition from non-verbal to verbal representations in our brains.
  • Menenti, L., Segaert, K., & Hagoort, P. (2012). The neuronal infrastructure of speaking. Brain and Language, 122, 71-80. doi:10.1016/j.bandl.2012.04.012.

    Abstract

    Models of speaking distinguish producing meaning, words and syntax as three different linguistic components of speaking. Nevertheless, little is known about the brain’s integrated neuronal infrastructure for speech production. We investigated semantic, lexical and syntactic aspects of speaking using fMRI. In a picture description task, we manipulated repetition of sentence meaning, words, and syntax separately. By investigating brain areas showing response adaptation to repetition of each of these sentence properties, we disentangle the neuronal infrastructure for these processes. We demonstrate that semantic, lexical and syntactic processes are carried out in partly overlapping and partly distinct brain networks and show that the classic left-hemispheric dominance for language is present for syntax but not semantics.
  • Menenti, L., Pickering, M. J., & Garrod, S. C. (2012). Towards a neural basis of interactive alignment in conversation. Frontiers in Human Neuroscience, 6, 185. doi:10.3389/fnhum.2012.00185.

    Abstract

    The interactive-alignment account of dialogue proposes that interlocutors achieve conversational success by aligning their understanding of the situation under discussion. Such alignment occurs because they prime each other at different levels of representation (e.g., phonology, syntax, semantics), and this is possible because these representations are shared across production and comprehension. In this paper, we briefly review the behavioral evidence, and then consider how findings from cognitive neuroscience might lend support to this account, on the assumption that alignment of neural activity corresponds to alignment of mental states. We first review work supporting representational parity between production and comprehension, and suggest that neural activity associated with phonological, lexical, and syntactic aspects of production and comprehension are closely related. We next consider evidence for the neural bases of the activation and use of situation models during production and comprehension, and how these demonstrate the activation of non-linguistic conceptual representations associated with language use. We then review evidence for alignment of neural mechanisms that are specific to the act of communication. Finally, we suggest some avenues of further research that need to be explored to test crucial predictions of the interactive alignment account.
  • Meyer, A. S., Wheeldon, L. R., Van der Meulen, F., & Konopka, A. E. (2012). Effects of speech rate and practice on the allocation of visual attention in multiple object naming. Frontiers in Psychology, 3, 39. doi:10.3389/fpsyg.2012.00039.

    Abstract

    Earlier studies had shown that speakers naming several objects typically look at each object until they have retrieved the phonological form of its name and therefore look longer at objects with long names than at objects with shorter names. We examined whether this tight eye-to-speech coordination was maintained at different speech rates and after increasing amounts of practice. Participants named the same set of objects with monosyllabic or disyllabic names on up to 20 successive trials. In Experiment 1, they spoke as fast as they could, whereas in Experiment 2 they had to maintain a fixed moderate or faster speech rate. In both experiments, the durations of the gazes to the objects decreased with increasing speech rate, indicating that at higher speech rates, the speakers spent less time planning the object names. The eye-speech lag (the time interval between the shift of gaze away from an object and the onset of its name) was independent of the speech rate but became shorter with increasing practice. Consistent word length effects on the durations of the gazes to the objects and the eye-speech lags were only found in Experiment 2. The results indicate that shifts of eye gaze are often linked to the completion of phonological encoding, but that speakers can deviate from this default coordination of eye gaze and speech, for instance when the descriptive task is easy and they aim to speak fast.
  • Minagawa-Kawai, Y., Cristià, A., & Dupoux, E. (2012). Erratum to “Cerebral lateralization and early speech acquisition: A developmental scenario” [Dev. Cogn. Neurosci. 1 (2011) 217–232]. Developmental Cognitive Neuroscience, 2(1), 194-195. doi:10.1016/j.dcn.2011.07.011.

    Abstract

    Refers to: Yasuyo Minagawa-Kawai, Alejandrina Cristià, & Emmanuel Dupoux, "Cerebral lateralization and early speech acquisition: A developmental scenario", Developmental Cognitive Neuroscience, Volume 1, Issue 3, July 2011, Pages 217-232.
  • Misersky, J., Gygax, P. M., Canal, P., Gabriel, U., Garnham, A., Braun, F., Chiarini, T., Englund, K., Hanulíková, A., Öttl, A., Valdrova, J., von Stockhausen, L., & Sczesny, S. (2014). Norms on the gender perception of role nouns in Czech, English, French, German, Italian, Norwegian, and Slovak. Behavior Research Methods, 46(3), 841-871. doi:10.3758/s13428-013-0409-z.

    Abstract

    We collected norms on the gender stereotypicality of an extensive list of role nouns in Czech, English, French, German, Italian, Norwegian, and Slovak, to be used as a basis for the selection of stimulus materials in future studies. We present a Web-based tool (available at https://www.unifr.ch/lcg/) that we developed to collect these norms and that we expect to be useful for other researchers, as well. In essence, we provide (a) gender stereotypicality norms across a number of languages and (b) a tool to facilitate cross-language as well as cross-cultural comparisons when researchers are interested in the investigation of the impact of stereotypicality on the processing of role nouns.
  • Mishra, R. K., Singh, N., Pandey, A., & Huettig, F. (2012). Spoken language-mediated anticipatory eye movements are modulated by reading ability: Evidence from Indian low and high literates. Journal of Eye Movement Research, 5(1): 3, pp. 1-10. doi:10.16910/jemr.5.1.3.

    Abstract

    We investigated whether levels of reading ability attained through formal literacy are related to anticipatory language-mediated eye movements. Indian low and high literates listened to simple spoken sentences containing a target word (e.g., "door") while at the same time looking at a visual display of four objects (a target, i.e. the door, and three distractors). The spoken sentences were constructed in such a way that participants could use semantic, associative, and syntactic information from adjectives and particles (preceding the critical noun) to anticipate the visual target objects. High literates started to shift their eye gaze to the target objects well before target word onset. In the low literacy group this shift of eye gaze occurred only when the target noun (i.e. "door") was heard, more than a second later. Our findings suggest that formal literacy may be important for the fine-tuning of language-mediated anticipatory mechanisms, abilities which proficient language users can then exploit for other cognitive activities such as spoken language-mediated eye gaze. In the conclusion, we discuss three potential mechanisms of how reading acquisition and practice may contribute to the differences in predictive spoken language processing between low and high literates.
  • Mitterer, H., & Tuinman, A. (2012). The role of native-language knowledge in the perception of casual speech in a second language. Frontiers in Psychology, 3, 249. doi:10.3389/fpsyg.2012.00249.

    Abstract

    Casual speech processes, such as /t/-reduction, make word recognition harder. Word recognition is also harder in a second language (L2). Combining these challenges, we investigated whether L2 learners have recourse to knowledge from their native language (L1) when dealing with casual-speech processes in their L2. In three experiments, production and perception of /t/-reduction were investigated. An initial production experiment showed that /t/-reduction occurred in both languages and patterned similarly in proper nouns but differed when /t/ was a verbal inflection. Two perception experiments compared the performance of German learners of Dutch with that of native speakers for nouns and verbs. Mirroring the production patterns, German learners' performance strongly resembled that of native Dutch listeners when the reduced /t/ was part of a word stem, but deviated where /t/ was a verbal inflection. These results suggest that a casual speech process in a second language is problematic for learners when the process is not known from the learner's native language, similar to what has been observed for phoneme contrasts.
  • Moisik, S. R., Lin, H., & Esling, J. H. (2014). A study of laryngeal gestures in Mandarin citation tones using simultaneous laryngoscopy and laryngeal ultrasound (SLLUS). Journal of the International Phonetic Association, 44, 21-58. doi:10.1017/S0025100313000327.

    Abstract

    In this work, Mandarin tone production is examined using simultaneous laryngoscopy and laryngeal ultrasound (SLLUS). Laryngoscopy is used to obtain information about laryngeal state, and laryngeal ultrasound is used to quantify changes in larynx height. With this methodology, several observations are made concerning the production of Mandarin tone in citation form. Two production strategies are attested for low tone production: (i) larynx lowering and (ii) larynx raising with laryngeal constriction. Another finding is that the larynx rises continually during level tone production, which is interpreted as a means to compensate for declining subglottal pressure. In general, we argue that larynx height plays a supportive role in facilitating f0 change under circumstances where intrinsic mechanisms for f0 control are insufficient to reach tonal targets due to vocal fold inertia. Activation of the laryngeal constrictor can be used to achieve low tone targets through mechanical adjustment to vocal fold dynamics. We conclude that extra-glottal laryngeal mechanisms play important roles in facilitating the production of tone targets and should be integrated into the contemporary articulatory model of tone production.
  • Moisik, S. R., & Esling, J. H. (2014). Modeling biomechanical influence of epilaryngeal stricture on the vocal folds: A low-dimensional model of vocal-ventricular coupling. Journal of Speech, Language, and Hearing Research, 57, S687-S704. doi:10.1044/2014_JSLHR-S-12-0279.

    Abstract

    Purpose: Physiological and phonetic studies suggest that, at moderate levels of epilaryngeal stricture, the ventricular folds impinge upon the vocal folds and influence their dynamical behavior, which is thought to be responsible for constricted laryngeal sounds. In this work, the authors examine this hypothesis through biomechanical modeling. Method: The dynamical response of a low-dimensional, lumped-element model of the vocal folds under the influence of vocal-ventricular fold coupling was evaluated. The model was assessed for F0 and cover-mass phase difference. Case studies of simulations of different constricted phonation types and of glottal stop illustrate various additional aspects of model performance. Results: Simulated vocal-ventricular fold coupling lowers F0 and perturbs the mucosal wave. It also appears to reinforce irregular patterns of oscillation, and it can enhance laryngeal closure in glottal stop production. Conclusion: The effects of simulated vocal-ventricular fold coupling are consistent with sounds, such as creaky voice, harsh voice, and glottal stop, that have been observed to involve epilaryngeal stricture and apparent contact between the vocal folds and ventricular folds. This supports the view that vocal-ventricular fold coupling is important in the vibratory dynamics of such sounds and, furthermore, suggests that these sounds may intrinsically require epilaryngeal stricture.
  • Moseley, R., Carota, F., Hauk, O., Mohr, B., & Pulvermüller, F. (2012). A role for the motor system in binding abstract emotional meaning. Cerebral Cortex, 22(7), 1634-1647. doi:10.1093/cercor/bhr238.

    Abstract

    Sensorimotor areas activate to action- and object-related words, but their role in abstract meaning processing is still debated. Abstract emotion words denoting body internal states are a critical test case because they lack referential links to objects. If actions expressing emotion are crucial for learning correspondences between word forms and emotions, emotion word–evoked activity should emerge in motor brain systems controlling the face and arms, which typically express emotions. To test this hypothesis, we recruited 18 native speakers and used event-related functional magnetic resonance imaging to compare brain activation evoked by abstract emotion words to that by face- and arm-related action words. In addition to limbic regions, emotion words indeed sparked precentral cortex, including body-part–specific areas activated somatotopically by face words or arm words. Control items, including hash mark strings and animal words, failed to activate precentral areas. We conclude that, similar to their role in action word processing, activation of frontocentral motor systems in the dorsal stream reflects the semantic binding of sign and meaning of abstract words denoting emotions and possibly other body internal states.
  • Mulder, K., Dijkstra, T., Schreuder, R., & Baayen, R. H. (2014). Effects of primary and secondary morphological family size in monolingual and bilingual word processing. Journal of Memory and Language, 72, 59-84. doi:10.1016/j.jml.2013.12.004.

    Abstract

    This study investigated primary and secondary morphological family size effects in monolingual and bilingual processing, combining experimentation with computational modeling. Family size effects were investigated in an English lexical decision task for Dutch-English bilinguals and English monolinguals using the same materials. To account for the possibility that family size effects may only show up in words that resemble words in the native language of the bilinguals, the materials included, in addition to purely English items, Dutch-English cognates (identical and non-identical in form). As expected, the monolingual data revealed facilitatory effects of English primary family size. Moreover, while the monolingual data did not show a main effect of cognate status, only form-identical cognates revealed an inhibitory effect of English secondary family size. The bilingual data showed stronger facilitation for identical cognates, but as for monolinguals, this effect was attenuated for words with a large secondary family size. In all, the Dutch-English primary and secondary family size effects in bilinguals were strikingly similar to those of monolinguals. Computational simulations suggest that the primary and secondary family size effects can be understood in terms of discriminative learning of the English lexicon.
  • Nakayama, M., Verdonschot, R. G., Sears, C. R., & Lupker, S. J. (2014). The masked cognate translation priming effect for different-script bilinguals is modulated by the phonological similarity of cognate words: Further support for the phonological account. Journal of Cognitive Psychology, 26(7), 714-724. doi:10.1080/20445911.2014.953167.

    Abstract

    The effect of phonological similarity on L1-L2 cognate translation priming was examined with Japanese-English bilinguals. According to the phonological account, the cognate priming effect for different-script bilinguals consists of additive effects of phonological and conceptual facilitation. If true, then the size of the cognate priming effect would be directly influenced by the phonological similarity of cognate translation equivalents. The present experiment tested and confirmed this prediction: the cognate priming effect was significantly larger for cognate prime-target pairs with high phonological similarity than for pairs with low phonological similarity. Implications for the nature of lexical processing in same- versus different-script bilinguals are discussed.
  • Neger, T. M., Rietveld, T., & Janse, E. (2014). Relationship between perceptual learning in speech and statistical learning in younger and older adults. Frontiers in Human Neuroscience, 8: 628. doi:10.3389/fnhum.2014.00628.

    Abstract

    Within a few sentences, listeners learn to understand severely degraded speech such as noise-vocoded speech. However, individuals vary in the amount of such perceptual learning and it is unclear what underlies these differences. The present study investigates whether perceptual learning in speech relates to statistical learning, as sensitivity to probabilistic information may aid identification of relevant cues in novel speech input. If statistical learning and perceptual learning (partly) draw on the same general mechanisms, then statistical learning in a non-auditory modality using non-linguistic sequences should predict adaptation to degraded speech. In the present study, 73 older adults (aged over 60 years) and 60 younger adults (aged between 18 and 30 years) performed a visual artificial grammar learning task and were presented with sixty meaningful noise-vocoded sentences in an auditory recall task. Within age groups, sentence recognition performance over exposure was analyzed as a function of statistical learning performance, and other variables that may predict learning (i.e., hearing, vocabulary, attention switching control, working memory and processing speed). Younger and older adults showed similar amounts of perceptual learning, but only younger adults showed significant statistical learning. In older adults, improvement in understanding noise-vocoded speech was constrained by age. In younger adults, amount of adaptation was associated with lexical knowledge and with statistical learning ability. Thus, individual differences in general cognitive abilities explain listeners' variability in adapting to noise-vocoded speech. Results suggest that perceptual and statistical learning share mechanisms of implicit regularity detection, but that the ability to detect statistical regularities is impaired in older adults if visual sequences are presented quickly.
  • Nieuwland, M. S., Martin, A. E., & Carreiras, M. (2012). Brain regions that process case: Evidence from Basque. Human Brain Mapping, 33(11), 2509-2520. doi:10.1002/hbm.21377.

    Abstract

    The aim of this event-related fMRI study was to investigate the cortical networks involved in case processing, an operation that is crucial to language comprehension yet whose neural underpinnings are not well-understood. What is the relationship of these networks to those that serve other aspects of syntactic and semantic processing? Participants read Basque sentences that contained case violations, number agreement violations or semantic anomalies, or that were both syntactically and semantically correct. Case violations elicited activity increases, compared to correct control sentences, in a set of parietal regions including the posterior cingulate, the precuneus, and the left and right inferior parietal lobules. Number agreement violations also elicited activity increases in left and right inferior parietal regions, and additional activations in the left and right middle frontal gyrus. Regions-of-interest analyses showed that almost all of the clusters that were responsive to case or number agreement violations did not differentiate between these two. In contrast, the left and right anterior inferior frontal gyrus and the dorsomedial prefrontal cortex were only sensitive to semantic violations. Our results suggest that whereas syntactic and semantic anomalies clearly recruit distinct neural circuits, case and number violations recruit largely overlapping neural circuits, and that the distinction between the two rests on the relative contributions of parietal and prefrontal regions, respectively. Furthermore, our results are consistent with recently reported contributions of bilateral parietal and dorsolateral brain regions to syntactic processing, pointing towards potential extensions of current neurocognitive theories of language.
  • Nieuwland, M. S. (2012). Establishing propositional truth-value in counterfactual and real-world contexts during sentence comprehension: Differential sensitivity of the left and right inferior frontal gyri. NeuroImage, 59(4), 3433-3440. doi:10.1016/j.neuroimage.2011.11.018.

    Abstract

    What makes a proposition true or false has traditionally played an essential role in philosophical and linguistic theories of meaning. A comprehensive neurobiological theory of language must ultimately be able to explain the combined contributions of real-world truth-value and discourse context to sentence meaning. This fMRI study investigated the neural circuits that are sensitive to the propositional truth-value of sentences about counterfactual worlds, aiming to reveal differential hemispheric sensitivity of the inferior prefrontal gyri to counterfactual truth-value and real-world truth-value. Participants read true or false counterfactual conditional sentences (“If N.A.S.A. had not developed its Apollo Project, the first country to land on the moon would be Russia/America”) and real-world sentences (“Because N.A.S.A. developed its Apollo Project, the first country to land on the moon has been America/Russia”) that were matched on contextual constraint and truth-value. ROI analyses showed that whereas the left BA 47 showed similar activity increases to counterfactual false sentences and to real-world false sentences (compared to true sentences), the right BA 47 showed a larger increase for counterfactual false sentences. Moreover, whole-brain analyses revealed a distributed neural circuit for dealing with propositional truth-value. These results constitute the first evidence for hemispheric differences in processing counterfactual truth-value and real-world truth-value, and point toward additional right hemisphere involvement in counterfactual comprehension.
  • Nieuwland, M. S. (2014). “Who’s he?” Event-related brain potentials and unbound pronouns. Journal of Memory and Language, 76, 1-28. doi:10.1016/j.jml.2014.06.002.

    Abstract

    Three experiments used event-related potentials to examine the processing consequences of gender-mismatching pronouns (e.g., “The aunt found out that he had won the lottery”), which have been shown to elicit P600 effects when judged as syntactically anomalous (Osterhout & Mobley, 1995). In each experiment, mismatching pronouns elicited a sustained, frontal negative shift (Nref) compared to matching pronouns: when participants were instructed to posit a new referent for mismatching pronouns (Experiment 1), and without this instruction (Experiments 2 and 3). In Experiments 1 and 2, the observed Nref was robust only in individuals with higher reading span scores. In Experiment 1, participants with lower reading span showed P600 effects instead, consistent with an attempt at coreferential interpretation despite gender mismatch. The results from the experiments combined suggest that, in the absence of an acceptability judgment task, people are more likely to interpret mismatching pronouns as referring to an unknown, unheralded antecedent than as a grammatically anomalous anaphor for a given antecedent.
  • Nieuwland, M. S., & Martin, A. E. (2012). If the real world were irrelevant, so to speak: The role of propositional truth-value in counterfactual sentence comprehension. Cognition, 122(1), 102-109. doi:10.1016/j.cognition.2011.09.001.

    Abstract

    Propositional truth-value can be a defining feature of a sentence’s relevance to the unfolding discourse, and establishing propositional truth-value in context can be key to successful interpretation. In the current study, we investigate its role in the comprehension of counterfactual conditionals, which describe imaginary consequences of hypothetical events, and are thought to require keeping in mind both what is true and what is false. Pre-stored real-world knowledge may therefore intrude upon and delay counterfactual comprehension, which is predicted by some accounts of discourse comprehension, and has been observed during online comprehension. The impact of propositional truth-value may thus be delayed in counterfactual conditionals, as also claimed for sentences containing other types of logical operators (e.g., negation, scalar quantifiers). In an event-related potential (ERP) experiment, we investigated the impact of propositional truth-value when described consequences are both true and predictable given the counterfactual premise. False words elicited larger N400 ERPs than true words, in negated counterfactual sentences (e.g., “If N.A.S.A. had not developed its Apollo Project, the first country to land on the moon would have been Russia/America”) and real-world sentences (e.g., “Because N.A.S.A. developed its Apollo Project, the first country to land on the moon was America/Russia”) alike. These indistinguishable N400 effects of propositional truth-value, elicited by opposite word pairs, argue against disruptions by real-world knowledge during counterfactual comprehension, and suggest that incoming words are mapped onto the counterfactual context without any delay. Thus, provided a sufficiently constraining context, propositional truth-value rapidly impacts ongoing semantic processing, be the proposition factual or counterfactual.
  • Nitschke, S., Serratrice, L., & Kidd, E. (2014). The effect of linguistic nativeness on structural priming in comprehension. Language, Cognition and Neuroscience, 29(5), 525-542. doi:10.1080/01690965.2013.766355.

    Abstract

    The role of linguistic experience in structural priming is unclear. Although it is explicitly predicted that experience contributes to priming effects on several theoretical accounts, to date the empirical data has been mixed. To investigate this issue, we conducted four sentence-picture-matching experiments that primed for the comprehension of object relative clauses in L1 and proficient L2 speakers of German. It was predicted that an effect of experience would only be observed in instances where priming effects are likely to be weak in experienced L1 speakers. In such circumstances, priming should be stronger in L2 speakers because of their comparative lack of experience using and processing the L2 test structures. The experiments systematically manipulated the primes to decrease lexical and conceptual overlap between primes and targets. The results supported the hypothesis: in two of the four studies, the L2 group showed larger priming effects in comparison to the L1 group. This effect only occurred when animacy differences were introduced between the prime and target. The results suggest that linguistic experience as operationalised by nativeness affects the strength of priming, specifically in cases where there is a lack of lexical and conceptual overlap between prime and target.
  • Noordenbos, M., Segers, E., Serniclaes, W., Mitterer, H., & Verhoeven, L. (2012). Allophonic mode of speech perception in Dutch children at risk for dyslexia: A longitudinal study. Research in Developmental Disabilities, 33, 1469-1483. doi:10.1016/j.ridd.2012.03.021.

    Abstract

    There is ample evidence that individuals with dyslexia have a phonological deficit. A growing body of research also suggests that individuals with dyslexia have problems with categorical perception, as evidenced by weaker discrimination of between-category differences and better discrimination of within-category differences compared to average readers. Whether the categorical perception problems of individuals with dyslexia are a result of their reading problems or a cause has yet to be determined. Whether the observed perception deficit relates to a more general auditory deficit or is specific to speech also has yet to be determined. To shed more light on these issues, the categorical perception abilities of children at risk for dyslexia and chronological age controls were investigated before and after the onset of formal reading instruction in a longitudinal study. Both identification and discrimination data were collected using identical paradigms for speech and non-speech stimuli. Results showed the children at risk for dyslexia to shift from an allophonic mode of perception in kindergarten to a phonemic mode of perception in first grade, while the control group showed a phonemic mode already in kindergarten. The children at risk for dyslexia thus showed an allophonic perception deficit in kindergarten, which was later suppressed by phonemic perception as a result of formal reading instruction in first grade; allophonic perception in kindergarten can thus be treated as a clinical marker for the possibility of later reading problems.
  • Noordenbos, M., Segers, E., Serniclaes, W., Mitterer, H., & Verhoeven, L. (2012). Neural evidence of allophonic perception in children at risk for dyslexia. Neuropsychologia, 50, 2010-2017. doi:10.1016/j.neuropsychologia.2012.04.026.

    Abstract

    Learning to read is a complex process that develops normally in the majority of children and requires the mapping of graphemes to their corresponding phonemes. Problems with the mapping process nevertheless occur in about 5% of the population and are typically attributed to poor phonological representations, which are — in turn — attributed to underlying speech processing difficulties. We examined auditory discrimination of speech sounds in 6-year-old beginning readers with a familial risk of dyslexia (n=31) and no such risk (n=30) using the mismatch negativity (MMN). MMNs were recorded for stimuli belonging to either the same phoneme category (acoustic variants of/bə/) or different phoneme categories (/bə/vs./də/). Stimuli from different phoneme categories elicited MMNs in both the control and at-risk children, but the MMN amplitude was clearly lower in the at-risk children. In contrast, the stimuli from the same phoneme category elicited an MMN in only the children at risk for dyslexia. These results show children at risk for dyslexia to be sensitive to acoustic properties that are irrelevant in their language. Our findings thus suggest a possible cause of dyslexia in that they show 6-year-old beginning readers with at least one parent diagnosed with dyslexia to have a neural sensitivity to speech contrasts that are irrelevant in the ambient language. This sensitivity clearly hampers the development of stable phonological representations and thus leads to significant reading impairment later in life.
  • Nora, A., Hultén, A., Karvonen, L., Kim, J.-Y., Lehtonen, M., Yli-Kaitala, H., Service, E., & Salmelin, R. (2012). Long-term phonological learning begins at the level of word form. NeuroImage, 63, 789-799. doi:10.1016/j.neuroimage.2012.07.026.

    Abstract

    Incidental learning of phonological structures through repeated exposure is an important component of native and foreign-language vocabulary acquisition that is not well understood at the neurophysiological level. It is also not settled when this type of learning occurs at the level of word forms as opposed to phoneme sequences. Here, participants listened to and repeated back foreign phonological forms (Korean words) and new native-language word forms (Finnish pseudowords) on two days. Recognition performance was improved, repetition latency became shorter and repetition accuracy increased when phonological forms were encountered multiple times. Cortical magnetoencephalography responses occurred bilaterally but the experimental effects only in the left hemisphere. Superior temporal activity at 300–600 ms, probably reflecting acoustic-phonetic processing, lasted longer for foreign phonology than for native phonology. Formation of longer-term auditory-motor representations was evidenced by a decrease of a spatiotemporally separate left temporal response and correlated increase of left frontal activity at 600–1200 ms on both days. The results point to item-level learning of novel whole-word representations.
  • Norris, D., McQueen, J. M., & Cutler, A. (1995). Competition and segmentation in spoken word recognition. Journal of Experimental Psychology: Learning, Memory, and Cognition, 21, 1209-1228.

    Abstract

    Spoken utterances contain few reliable cues to word boundaries, but listeners nonetheless experience little difficulty identifying words in continuous speech. The authors present data and simulations that suggest that this ability is best accounted for by a model of spoken-word recognition combining competition between alternative lexical candidates and sensitivity to prosodic structure. In a word-spotting experiment, stress pattern effects emerged most clearly when there were many competing lexical candidates for part of the input. Thus, competition between simultaneously active word candidates can modulate the size of prosodic effects, which suggests that spoken-word recognition must be sensitive both to prosodic structure and to the effects of competition. A version of the Shortlist model (D. G. Norris, 1994b) incorporating the Metrical Segmentation Strategy (A. Cutler & D. Norris, 1988) accurately simulates the results using a lexicon of more than 25,000 words.
  • Nudel, R., Simpson, N. H., Baird, G., O’Hare, A., Conti-Ramsden, G., Bolton, P. F., Hennessy, E. R., SLI Consortium, Monaco, A. P., Fairfax, B. P., Knight, J. C., Winney, B., Fisher, S. E., & Newbury, D. F. (2014). Associations of HLA alleles with specific language impairment. Journal of Neurodevelopmental Disorders, 6: 1. doi:10.1186/1866-1955-6-1.

    Abstract

    Background: Human leukocyte antigen (HLA) loci have been implicated in several neurodevelopmental disorders in which language is affected. However, to date, no studies have investigated the possible involvement of HLA loci in specific language impairment (SLI), a disorder that is defined primarily by unexpected language impairment. We report association analyses of single-nucleotide polymorphisms (SNPs) and HLA types in a cohort of individuals affected by language impairment. Methods: We perform quantitative association analyses of three linguistic measures and case-control association analyses using both SNP data and imputed HLA types. Results: Quantitative association analyses of imputed HLA types suggested a role for the HLA-A locus in susceptibility to SLI. HLA-A A1 was associated with a measure of short-term memory (P = 0.004) and A3 with expressive language ability (P = 0.006). Parent-of-origin effects were found between HLA-B B8 and HLA-DQA1*0501 and receptive language. These alleles have a negative correlation with receptive language ability when inherited from the mother (P = 0.021, P = 0.034, respectively) but are positively correlated with the same trait when paternally inherited (P = 0.013, P = 0.029, respectively). Finally, case-control analyses using imputed HLA types indicated that the DR10 allele of HLA-DRB1 was more frequent in individuals with SLI than population controls (P = 0.004, relative risk = 2.575), as has been reported for individuals with attention deficit hyperactivity disorder (ADHD). Conclusion: These preliminary data provide an intriguing link to those described by previous studies of other neurodevelopmental disorders and suggest a possible role for HLA loci in language disorders.
  • Nudel, R., Simpson, N. H., Baird, G., O’Hare, A., Conti-Ramsden, G., Bolton, P. F., Hennessy, E. R., The SLI Consortium, Ring, S. M., Smith, G. D., Francks, C., Paracchini, S., Monaco, A. P., Fisher, S. E., & Newbury, D. F. (2014). Genome-wide association analyses of child genotype effects and parent-of-origin effects in specific language impairment. Genes, Brain and Behavior, 13, 418-429. doi:10.1111/gbb.12127.

    Abstract

    Specific language impairment (SLI) is a neurodevelopmental disorder that affects linguistic abilities when development is otherwise normal. We report the results of a genome-wide association study of SLI which included parent-of-origin effects and child genotype effects and used 278 families of language-impaired children. The child genotype effects analysis did not identify significant associations. We found genome-wide significant paternal parent-of-origin effects on chromosome 14q12 (P=3.74×10⁻⁸) and suggestive maternal parent-of-origin effects on chromosome 5p13 (P=1.16×10⁻⁷). A subsequent targeted association of six single-nucleotide polymorphisms (SNPs) on chromosome 5 in 313 language-impaired individuals from the ALSPAC cohort replicated the maternal effects, albeit in the opposite direction (P=0.001); as fathers’ genotypes were not available in the ALSPAC study, the replication analysis did not include paternal parent-of-origin effects. The paternally-associated SNP on chromosome 14 yields a non-synonymous coding change within the NOP9 gene. This gene encodes an RNA-binding protein that has been reported to be significantly dysregulated in individuals with schizophrenia. The region of maternal association on chromosome 5 falls between the PTGER4 and DAB2 genes, in a region previously implicated in autism and ADHD. The top SNP in this association locus is a potential expression QTL of ARHGEF19 (also called WGEF) on chromosome 1. Members of this protein family have been implicated in intellectual disability. In sum, this study implicates parent-of-origin effects in language impairment, and adds an interesting new dimension to the emerging picture of shared genetic etiology across various neurodevelopmental disorders.
  • Oliver, G., Gullberg, M., Hellwig, F., Mitterer, H., & Indefrey, P. (2012). Acquiring L2 sentence comprehension: A longitudinal study of word monitoring in noise. Bilingualism: Language and Cognition, 15, 841-857. doi:10.1017/S1366728912000089.

    Abstract

    This study investigated the development of second language online auditory processing with ab initio German learners of Dutch. We assessed the influence of different levels of background noise and different levels of semantic and syntactic target word predictability on word-monitoring latencies. There was evidence of syntactic, but not lexical-semantic, transfer from the L1 to the L2 from the onset of L2 learning. An initial stronger adverse effect of noise on syntactic compared to phonological processing disappeared after two weeks of learning Dutch suggesting a change towards more robust syntactic processing. At the same time the L2 learners started to exploit semantic constraints predicting upcoming target words. The use of semantic predictability remained less efficient compared to native speakers until the end of the observation period. The improvement and the persistent problems in semantic processing we found were independent of noise and rather seem to reflect the need for more context information to build up online semantic representations in L2 listening.
  • Olivers, C. N. L., Huettig, F., Singh, J. P., & Mishra, R. K. (2014). The influence of literacy on visual search. Visual Cognition, 21, 74-101. doi:10.1080/13506285.2013.875498.

    Abstract

    Currently one in five adults is still unable to read despite a rapidly developing world. Here we show that (il)literacy has important consequences for the cognitive ability of selecting relevant information from a visual display of non-linguistic material. In two experiments we compared low to high literacy observers on both an easy and a more difficult visual search task involving different types of chicken. Low literates were consistently slower (as indicated by overall RTs) in both experiments. More detailed analyses, including eye movement measures, suggest that the slowing is partly due to display wide (i.e. parallel) sensory processing but mainly due to post-selection processes, as low literates needed more time between fixating the target and generating a manual response. Furthermore, high and low literacy groups differed in the way search performance was distributed across the visual field. High literates performed relatively better when the target was presented in central regions, especially on the right. At the same time, high literacy was also associated with a more general bias towards the top and the left, especially in the more difficult search. We conclude that learning to read results in an extension of the functional visual field from the fovea to parafoveal areas, combined with some asymmetry in scan pattern influenced by the reading direction, both of which also influence other (e.g. non-linguistic) tasks such as visual search.
  • Onnink, A. M. H., Zwiers, M. P., Hoogman, M., Mostert, J. C., Kan, C. C., Buitelaar, J., & Franke, B. (2014). Brain alterations in adult ADHD: Effects of gender, treatment and comorbid depression. European Neuropsychopharmacology, 24(3), 397-409. doi:10.1016/j.euroneuro.2013.11.011.

    Abstract

    Children with attention-deficit/hyperactivity disorder (ADHD) have smaller volumes of total brain matter and subcortical regions, but it is unclear whether these represent delayed maturation or persist into adulthood. We performed a structural MRI study in 119 adult ADHD patients and 107 controls and investigated total gray and white matter and volumes of accumbens, caudate, globus pallidus, putamen, thalamus, amygdala and hippocampus. Additionally, we investigated effects of gender, stimulant treatment and history of major depression (MDD). There was no main effect of ADHD on the volumetric measures, nor was any effect observed in a secondary voxel-based morphometry (VBM) analysis of the entire brain. However, in the volumetric analysis a significant gender by diagnosis interaction was found for caudate volume. Male patients showed reduced right caudate volume compared to male controls, and caudate volume correlated with hyperactive/impulsive symptoms. Furthermore, patients using stimulant treatment had a smaller right hippocampus volume compared to medication-naïve patients and controls. ADHD patients with previous MDD showed smaller hippocampus volume compared to ADHD patients with no MDD. While these data were obtained in a cross-sectional sample and need to be replicated in a longitudinal study, the findings suggest that developmental brain differences in ADHD largely normalize in adulthood. Reduced caudate volume in male patients may point to distinct neurobiological deficits underlying ADHD in the two genders. Smaller hippocampus volume in ADHD patients with previous MDD is consistent with neurobiological alterations observed in MDD.

  • Ortega, G. (2014). Acquisition of a signed phonological system by hearing adults: The role of sign structure and iconicity. Sign Language and Linguistics, 17, 267-275. doi:10.1075/sll.17.2.09ort.
  • Ozyurek, A. (2014). Hearing and seeing meaning in speech and gesture: Insights from brain and behaviour. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 369(1651): 20130296. doi:10.1098/rstb.2013.0296.

    Abstract

    As we speak, we use not only the arbitrary form–meaning mappings of the speech channel but also motivated form–meaning correspondences, i.e. iconic gestures that accompany speech (e.g. inverted V-shaped hand wiggling across gesture space to demonstrate walking). This article reviews what we know about processing of semantic information from speech and iconic gestures in spoken languages during comprehension of such composite utterances. Several studies have shown that comprehension of iconic gestures involves brain activations known to be involved in semantic processing of speech: i.e. modulation of the electrophysiological recording component N400, which is sensitive to the ease of semantic integration of a word to previous context, and recruitment of the left-lateralized frontal–posterior temporal network (left inferior frontal gyrus (IFG), middle temporal gyrus (MTG) and superior temporal gyrus/sulcus (STG/S)). Furthermore, we integrate the information coming from both channels, recruiting brain areas such as left IFG, posterior superior temporal sulcus (STS)/MTG and even motor cortex. Finally, this integration is flexible: the temporal synchrony between the iconic gesture and the speech segment, as well as the perceived communicative intent of the speaker, modulate the integration process. Whether these findings are special to gestures or are shared with actions or other visual accompaniments to speech (e.g. lips) or other visual symbols such as pictures is discussed, as well as the implications for a multimodal view of language.
  • Pacheco, A., Araújo, S., Faísca, L., de Castro, S. L., Petersson, K. M., & Reis, A. (2014). Dyslexia's heterogeneity: Cognitive profiling of Portuguese children with dyslexia. Reading and Writing, 27(9), 1529-1545. doi:10.1007/s11145-014-9504-5.

    Abstract

    Recent studies have emphasized that developmental dyslexia is a multiple-deficit disorder, in contrast to the traditional single-deficit view. In this context, cognitive profiling of children with dyslexia may be a relevant contribution to this unresolved discussion. The aim of this study was to profile 36 Portuguese children with dyslexia from the 2nd to 5th grade. Hierarchical cluster analysis was used to group participants according to their phonological awareness, rapid automatized naming, verbal short-term memory, vocabulary, and nonverbal intelligence abilities. The results suggested a two-cluster solution: a group with poorer performance on phoneme deletion and rapid automatized naming compared with the remaining variables (Cluster 1) and a group characterized by underperforming on the variables most related to phonological processing (phoneme deletion and digit span), but not on rapid automatized naming (Cluster 2). Overall, the results seem more consistent with a hybrid perspective, such as that proposed by Pennington and colleagues (2012), for understanding the heterogeneity of dyslexia. The importance of characterizing the profiles of individuals with dyslexia becomes clear within the context of constructing remediation programs that are specifically targeted and are more effective in terms of intervention outcome.
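    For readers unfamiliar with the clustering step, the sketch below shows the general shape of a hierarchical cluster analysis over z-scored cognitive measures and the extraction of a two-cluster solution. The data are random and the measure ordering is assumed; this is not the authors' code.

        import numpy as np
        from scipy.cluster.hierarchy import linkage, fcluster
        from scipy.stats import zscore

        rng = np.random.default_rng(0)
        # Rows = children; columns = phoneme deletion, RAN, digit span, vocabulary, nonverbal IQ.
        scores = rng.normal(size=(36, 5))
        z = zscore(scores, axis=0)

        tree = linkage(z, method="ward")                      # agglomerative clustering
        clusters = fcluster(tree, t=2, criterion="maxclust")  # force a two-cluster solution
        print(np.bincount(clusters)[1:])                      # cluster sizes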

  • Paternoster, L., Zhurov, A., Toma, A., Kemp, J., St Pourcain, B., Timpson, N., McMahon, G., McArdle, W., Ring, S., Smith, G., Richmond, S., & Evans, D. (2012). Genome-wide Association Study of Three-Dimensional Facial Morphology Identifies a Variant in PAX3 Associated with Nasion Position. The American Journal of Human Genetics, 90(3), 478-485. doi:10.1016/j.ajhg.2011.12.021.

    Abstract

    Craniofacial morphology is highly heritable, but little is known about which genetic variants influence normal facial variation in the general population. We aimed to identify genetic variants associated with normal facial variation in a population-based cohort of 15-year-olds from the Avon Longitudinal Study of Parents and Children. 3D high-resolution images were obtained with two laser scanners; these were merged and aligned, and 22 landmarks were identified and their x, y, and z coordinates used to generate 54 3D distances reflecting facial features. Fourteen principal components (PCs) were also generated from the landmark locations. We carried out genome-wide association analyses of these distances and PCs in 2,185 adolescents and attempted to replicate any significant associations in a further 1,622 participants. In the discovery analysis no associations were observed with the PCs, but we identified four associations with the distances, and one of these, the association between rs7559271 in PAX3 and the nasion to midendocanthion distance (n-men), was replicated (p = 4 × 10⁻⁷). In a combined analysis, each G allele of rs7559271 was associated with an increase in n-men distance of 0.39 mm (p = 4 × 10⁻¹⁶), explaining 1.3% of the variance. Independent associations were observed in both the z (nasion prominence) and y (nasion height) dimensions (p = 9 × 10⁻⁹ and p = 9 × 10⁻¹⁰, respectively), suggesting that the locus primarily influences growth in the yz plane. Rare variants in PAX3 are known to cause Waardenburg syndrome, which involves deafness, pigmentary abnormalities, and facial characteristics including a broad nasal bridge. Our findings show that common variants within this gene also influence normal craniofacial development.
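    The phenotype construction (3D inter-landmark distances plus principal components of the landmark configuration) can be sketched as follows. The landmark indices and the random data are made up; this only illustrates the kind of derivation described, not the study's pipeline.

        import numpy as np

        rng = np.random.default_rng(1)
        n_subjects, n_landmarks = 100, 22
        landmarks = rng.normal(size=(n_subjects, n_landmarks, 3))  # x, y, z per landmark

        # Example distance: Euclidean distance between two landmarks (indices 0 and 1
        # standing in for nasion and midendocanthion).
        nasion, men = 0, 1
        n_men = np.linalg.norm(landmarks[:, nasion] - landmarks[:, men], axis=1)

        # Principal components of the flattened, mean-centred landmark configuration.
        flat = landmarks.reshape(n_subjects, -1)
        flat -= flat.mean(axis=0)
        _, _, vt = np.linalg.svd(flat, full_matrices=False)
        pcs = flat @ vt[:14].T   # first 14 PCs, as in the study
        print(n_men.shape, pcs.shape)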
  • Payne, B. R., Grison, S., Gao, X., Christianson, K., Morrow, D. G., & Stine-Morrow, E. A. L. (2014). Aging and individual differences in binding during sentence understanding: Evidence from temporary and global syntactic attachment ambiguities. Cognition, 130(2), 157-173. doi:10.1016/j.cognition.2013.10.005.

    Abstract

    We report an investigation of aging and individual differences in binding information during sentence understanding. An age-continuous sample of adults (N=91), ranging from 18 to 81 years of age, read sentences in which a relative clause could be attached high to a head noun NP1, attached low to its modifying prepositional phrase NP2 (e.g., The son of the princess who scratched himself/herself in public was humiliated), or in which the attachment site of the relative clause was ultimately indeterminate (e.g., The maid of the princess who scratched herself in public was humiliated). Word-by-word reading times and comprehension (e.g., who scratched?) were measured. A series of mixed-effects models were fit to the data, revealing: (1) that, on average, NP1-attached sentences were harder to process and comprehend than NP2-attached sentences; (2) that these average effects were independently moderated by verbal working memory capacity and reading experience, with effects that were most pronounced in the oldest participants; and (3) that readers on average did not allocate extra time to resolve global ambiguities, though older adults with higher working memory span did. Findings are discussed in relation to current models of lifespan cognitive development, working memory, language experience, and the role of prosodic segmentation strategies in reading. Collectively, these data suggest that aging brings differences in sentence understanding, and these differences may depend on independent influences of verbal working memory capacity and reading experience.
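    A minimal sketch of the modelling strategy (reading times analysed with a mixed-effects model including by-subject random intercepts and an attachment-by-age term) is given below. The variable names, the simulated data, and the use of statsmodels are assumptions for illustration; they do not reproduce the authors' models.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(2)
        subjects = np.repeat([f"s{i}" for i in range(20)], 10)   # 20 readers, 10 trials each
        age = np.repeat(rng.integers(18, 82, size=20), 10)       # one age per reader
        attachment = np.tile(["NP1", "NP2"], 100)                # condition alternates per trial
        rt = 450 + 30 * (attachment == "NP1") + 0.5 * age + rng.normal(0, 40, 200)

        df = pd.DataFrame({"rt": rt, "attachment": attachment, "age": age, "subject": subjects})
        m = smf.mixedlm("rt ~ attachment * age", data=df, groups=df["subject"]).fit()
        print(m.summary())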

  • Peeters, D., Runnqvist, E., Bertrand, D., & Grainger, J. (2014). Asymmetrical switch costs in bilingual language production induced by reading words. Journal of Experimental Psychology: Learning, Memory, and Cognition, 40(1), 284-292. doi:10.1037/a0034060.

    Abstract

    We examined language-switching effects in French–English bilinguals using a paradigm where pictures are always named in the same language (either French or English) within a block of trials, and on each trial, the picture is preceded by a printed word from the same language or from the other language. Participants had to either make a language decision on the word or categorize it as an animal name or not. Picture-naming latencies in French (Language 1 [L1]) were slower when pictures were preceded by an English word than by a French word, independently of the task performed on the word. There were no language-switching effects when pictures were named in English (L2). This pattern replicates asymmetrical switch costs found with the cued picture-naming paradigm and shows that the asymmetrical pattern can be obtained (a) in the absence of artificial (nonlinguistic) language cues, (b) when the switch involves a shift from comprehension in 1 language to production in another, and (c) when the naming language is blocked (univalent response). We concluded that language switch costs in bilinguals cannot be reduced to effects driven by task control or response-selection mechanisms.
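    The asymmetry can be made concrete by computing, for each naming language, the difference between latencies on trials preceded by an other-language word and trials preceded by a same-language word. The sketch below does this on invented trial data with hypothetical column names; it is not the authors' analysis.

        import pandas as pd

        trials = pd.DataFrame({
            "naming_language": ["L1", "L1", "L1", "L1", "L2", "L2", "L2", "L2"],
            "prime_language":  ["same", "other"] * 4,
            "rt_ms":           [820, 905, 815, 898, 760, 765, 755, 770],
        })

        mean_rt = trials.groupby(["naming_language", "prime_language"])["rt_ms"].mean().unstack()
        switch_cost = mean_rt["other"] - mean_rt["same"]   # positive = slowing after the other language
        print(switch_cost)  # expect a cost for L1 naming but little or none for L2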
  • Peeters, D., & Dresler, M. (2014). The scientific significance of sleep-talking. Frontiers for Young Minds, 2(9). Retrieved from http://kids.frontiersin.org/articles/24/the_scientific_significance_of_sleep_talking/.

    Abstract

    Did one of your parents, siblings, or friends ever tell you that you were talking in your sleep? Nothing to be ashamed of! A recent study found that more than half of all people have had the experience of speaking out loud while being asleep [1]. This might even be underestimated, because often people do not notice that they are sleep-talking, unless somebody wakes them up or tells them the next day. Most neuroscientists, linguists, and psychologists studying language are interested in our language production and language comprehension skills during the day. In the present article, we will explore what is known about the production of overt speech during the night. We suggest that the study of sleep-talking may be just as interesting and informative as the study of wakeful speech.
  • Perlman, M., & Cain, A. A. (2014). Iconicity in vocalization, comparisons with gesture, and implications for theories on the evolution of language. Gesture, 14(3), 320-350. doi:10.1075/gest.14.3.03per.

    Abstract

    Scholars have often reasoned that vocalizations are extremely limited in their potential for iconic expression, especially in comparison to manual gestures (e.g., Armstrong & Wilcox, 2007; Tomasello, 2008). As evidence for an alternative view, we first review the growing body of research related to iconicity in vocalizations, including experimental work on sound symbolism, cross-linguistic studies documenting iconicity in the grammars and lexicons of languages, and experimental studies that examine iconicity in the production of speech and vocalizations. We then report an experiment in which participants created vocalizations to communicate 60 different meanings, including 30 antonymic pairs. The vocalizations were measured along several acoustic properties, and these properties were compared between antonyms. Participants were highly consistent in the kinds of sounds they produced for the majority of meanings, supporting the hypothesis that vocalization has considerable potential for iconicity. In light of these findings, we present a comparison between vocalization and manual gesture, and examine the detailed ways in which each modality can function in the iconic expression of particular kinds of meanings. We further discuss the role of iconic vocalizations and gesture in the evolution of language since our divergence from the great apes. In conclusion, we suggest that human communication is best understood as an ensemble of kinesis and vocalization, not just speech, in which expression in both modalities spans the range from arbitrary to iconic.
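    A hedged sketch of the acoustic comparison is shown below: simple properties (duration and RMS amplitude) are extracted per vocalization and compared between members of an antonymic pair with a paired test. The synthetic signals and the specific measures are placeholders, not the authors' feature set.

        import numpy as np
        from scipy.stats import ttest_rel

        sr = 16000
        rng = np.random.default_rng(3)

        def rms(x):
            return float(np.sqrt(np.mean(x ** 2)))

        # Pretend 10 speakers each produced a "big" and a "small" vocalization.
        big   = [rng.normal(0, 1.0, int(0.8 * sr)) for _ in range(10)]   # longer, louder
        small = [rng.normal(0, 0.4, int(0.4 * sr)) for _ in range(10)]   # shorter, softer

        dur_big = [len(x) / sr for x in big]
        dur_small = [len(x) / sr for x in small]
        print(ttest_rel([rms(x) for x in big], [rms(x) for x in small]))  # amplitude contrast
        print(np.mean(dur_big) - np.mean(dur_small))                      # duration difference (s)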
  • Perniss, P. M., Vinson, D., Seifart, F., & Vigliocco, G. (2012). Speaking of shape: The effects of language-specific encoding on semantic representations. Language and Cognition, 4, 223-242. doi:10.1515/langcog-2012-0012.

    Abstract

    The question of whether different linguistic patterns differentially influence semantic and conceptual representations is of central interest in cognitive science. In this paper, we investigate whether the regular encoding of shape within a nominal classification system leads to an increased salience of shape in speakers' semantic representations by comparing English, (Amazonian) Spanish, and Bora, a shape-based classifier language spoken in the Amazonian regions of Colombia and Peru. Crucially, in displaying obligatory use, pervasiveness in grammar, high discourse frequency, and phonological variability of forms corresponding to particular shape features, the Bora classifier system differs in important ways from those in previous studies investigating effects of nominal classification, thereby allowing better control of factors that may have influenced previous findings. In addition, the inclusion of Spanish monolinguals living in the Bora village allowed control for the possibility that differences found between English and Bora speakers may be attributed to their very different living environments. We found that shape is more salient in the semantic representation of objects for speakers of Bora, which systematically encodes shape, than for speakers of English and Spanish, which do not. Our results are consistent with assumptions that semantic representations are shaped and modulated by our specific linguistic experiences.
  • Petersson, K. M., & Hagoort, P. (2012). The neurobiology of syntax: Beyond string-sets [Review article]. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 367, 1971-1983. doi:10.1098/rstb.2012.0101.

    Abstract

    The human capacity to acquire language is an outstanding scientific challenge to understand. Somehow our language capacities arise from the way the human brain processes, develops and learns in interaction with its environment. To set the stage, we begin with a summary of what is known about the neural organization of language and what our artificial grammar learning (AGL) studies have revealed. We then review the Chomsky hierarchy in the context of the theory of computation and formal learning theory. Finally, we outline a neurobiological model of language acquisition and processing based on an adaptive, recurrent, spiking network architecture. This architecture implements an asynchronous, event-driven, parallel system for recursive processing. We conclude that the brain represents grammars (or more precisely, the parser/generator) in its connectivity, and its ability for syntax is based on neurobiological infrastructure for structured sequence processing. The acquisition of this ability is accounted for in an adaptive dynamical systems framework. Artificial language learning (ALL) paradigms might be used to study the acquisition process within such a framework, as well as the processing properties of the underlying neurobiological infrastructure. However, it is necessary to combine and constrain the interpretation of ALL results by theoretical models and empirical studies on natural language processing. Given that the faculty of language is captured by classical computational models to a significant extent, and that these can be embedded in dynamic network architectures, there is hope that significant progress can be made in understanding the neurobiology of the language faculty.
  • Petersson, K. M., Folia, V., & Hagoort, P. (2012). What artificial grammar learning reveals about the neurobiology of syntax. Brain and Language, 120, 83-95. doi:10.1016/j.bandl.2010.08.003.

    Abstract

    In this paper we examine the neurobiological correlates of syntax, the processing of structured sequences, by comparing FMRI results on artificial and natural language syntax. We discuss these and similar findings in the context of formal language and computability theory. We used a simple right-linear unification grammar in an implicit artificial grammar learning paradigm in 32 healthy Dutch university students (natural language FMRI data were already acquired for these participants). We predicted that artificial syntax processing would engage the left inferior frontal region (BA 44/45) and that this activation would overlap with syntax-related variability observed in the natural language experiment. The main findings of this study show that the left inferior frontal region centered on BA 44/45 is active during artificial syntax processing of well-formed (grammatical) sequences, independent of local subsequence familiarity. The same region is engaged to a greater extent when a syntactic violation is present and structural unification becomes difficult or impossible. The effects related to artificial syntax in the left inferior frontal region (BA 44/45) were essentially identical when we masked these with activity related to natural syntax in the same subjects. Finally, the medial temporal lobe was deactivated during this operation, consistent with the view that implicit processing does not rely on declarative memory mechanisms that engage the medial temporal lobe. In the discussion section, in the context of recent FMRI findings, we raise the question whether Broca’s region (or subregions) is specifically related to syntactic movement operations or the processing of hierarchically nested non-adjacent dependencies. We conclude that this is not the case. Instead, we argue that the left inferior frontal region is a generic on-line sequence processor that unifies information from various sources in an incremental and recursive manner, independent of whether there are any processing requirements related to syntactic movement or hierarchically nested structures. In addition, we argue that the Chomsky hierarchy is not directly relevant for neurobiological systems.
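    To make the notion of a right-linear grammar concrete, the sketch below generates grammatical sequences from a small set of rules of the form A → aB or A → a. The particular rules and symbols are invented for illustration and are not the grammar used in the study.

        import random

        # Productions of the form A -> (terminal, next nonterminal or None); rules are invented.
        GRAMMAR = {
            "S": [("M", "A"), ("V", "B")],
            "A": [("X", "B"), ("R", None)],
            "B": [("T", "A"), ("M", None)],
        }

        def generate(grammar, rng, symbol="S"):
            """Expand nonterminals left to right until a terminating rule is chosen."""
            out = []
            while symbol is not None:
                terminal, symbol = rng.choice(grammar[symbol])
                out.append(terminal)
            return "".join(out)

        rng = random.Random(0)
        print([generate(GRAMMAR, rng) for _ in range(5)])  # five grammatical sequences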
  • Pettenati, P., Sekine, K., Congestrì, E., & Volterra, V. (2012). A comparative study on representational gestures in Italian and Japanese children. Journal of Nonverbal Behavior, 36(2), 149-164. doi:10.1007/s10919-011-0127-0.

    Abstract

    This study compares words and gestures produced in a controlled experimental setting by children raised in different linguistic/cultural environments to examine the robustness of gesture use at an early stage of lexical development. Twenty-two Italian and twenty-two Japanese toddlers (age range 25–37 months) performed the same picture-naming task. Italians produced more correct spoken labels than Japanese but a similar number of representational gestures temporally matched with words. However, Japanese gestures more closely reproduced the action represented in the picture. Results confirm that gestures are linked to motor actions similarly for all children, suggesting a common developmental stage, only minimally influenced by culture.
  • Piai, V., Roelofs, A., Jensen, O., Schoffelen, J.-M., & Bonnefond, M. (2014). Distinct patterns of brain activity characterise lexical activation and competition in spoken word production. PLoS One, 9(2): e88674. doi:10.1371/journal.pone.0088674.

    Abstract

    According to a prominent theory of language production, concepts activate multiple associated words in memory, which enter into competition for selection. However, only a few electrophysiological studies have identified brain responses reflecting competition. Here, we report a magnetoencephalography study in which the activation of competing words was manipulated by presenting pictures (e.g., dog) with distractor words. The distractor and picture name were semantically related (cat), unrelated (pin), or identical (dog). Related distractors are stronger competitors to the picture name because they receive additional activation from the picture relative to other distractors. Picture naming times were longer with related than unrelated and identical distractors. Phase-locked and non-phase-locked activity were distinct but temporally related. Phase-locked activity in left temporal cortex, peaking at 400 ms, was larger on unrelated than related and identical trials, suggesting differential activation of alternative words by the picture-word stimuli. Non-phase-locked activity between roughly 350–650 ms (4–10 Hz) in left superior frontal gyrus was larger on related than unrelated and identical trials, suggesting differential resolution of the competition among the alternatives, as reflected in the naming times. These findings characterise distinct patterns of activity associated with lexical activation and competition, supporting the theory that words are selected by competition.
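    The distinction between phase-locked and non-phase-locked activity can be illustrated on synthetic data: a response with fixed phase across trials survives trial-averaging, whereas a response with random phase does not and only shows up after the evoked average is removed. The sketch below is a toy illustration, not the MEG analysis used in the paper.

        import numpy as np

        rng = np.random.default_rng(4)
        sr, n_trials, n_samples = 1000, 50, 1000
        t = np.arange(n_samples) / sr

        # Every trial: a phase-locked 10 Hz response plus a 10 Hz component with random phase.
        trials = np.array([
            np.sin(2 * np.pi * 10 * t)                                  # phase-locked
            + np.sin(2 * np.pi * 10 * t + rng.uniform(0, 2 * np.pi))    # non-phase-locked
            + rng.normal(0, 0.5, n_samples)
            for _ in range(n_trials)
        ])

        evoked = trials.mean(axis=0)      # the phase-locked part survives averaging
        induced = trials - evoked         # remove the evoked response from each trial
        print(np.mean(evoked ** 2), np.mean(induced ** 2))  # phase-locked vs residual power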
