Publications

  • Kurt, S., Groszer, M., Fisher, S. E., & Ehret, G. (2009). Modified sound-evoked brainstem potentials in Foxp2 mutant mice. Brain Research, 1289, 30-36. doi:10.1016/j.brainres.2009.06.092.

    Abstract

    Heterozygous mutations of the human FOXP2 gene cause a developmental disorder involving impaired learning and production of fluent spoken language. Previous investigations of its aetiology have focused on disturbed function of neural circuits involved in motor control. However, Foxp2 expression has been found in the cochlea and auditory brain centers and deficits in auditory processing could contribute to difficulties in speech learning and production. Here, we recorded auditory brainstem responses (ABR) to assess two heterozygous mouse models carrying distinct Foxp2 point mutations matching those found in humans with FOXP2-related speech/language impairment. Mice which carry a Foxp2-S321X nonsense mutation, yielding reduced dosage of Foxp2 protein, did not show systematic ABR differences from wildtype littermates. Given that speech/language disorders are observed in heterozygous humans with similar nonsense mutations (FOXP2-R328X), our findings suggest that auditory processing deficits up to the midbrain level are not causative for FOXP2-related language impairments. Interestingly, however, mice harboring a Foxp2-R552H missense mutation displayed systematic alterations in ABR waves with longer latencies (significant for waves I, III, IV) and smaller amplitudes (significant for waves I, IV) suggesting that either the synchrony of synaptic transmission in the cochlea and in auditory brainstem centers is affected, or fewer auditory nerve fibers and fewer neurons in auditory brainstem centers are activated compared to wildtypes. Therefore, the R552H mutation uncovers possible roles for Foxp2 in the development and/or function of the auditory system. Since ABR audiometry is easily accessible in humans, our data call for systematic testing of auditory functions in humans with FOXP2 mutations.
  • Kushnick, G., Gray, R., & Jordan, F. (2014). The sequential evolution of land tenure norms. Evolution and Human Behavior, 35, 309-318. doi:10.1016/j.evolhumbehav.2014.03.001.
  • Lahey, M., & Ernestus, M. (2014). Pronunciation variation in infant-directed speech: Phonetic reduction of two highly frequent words. Language Learning and Development, 10, 308-327. doi:10.1080/15475441.2013.860813.

    Abstract

    In spontaneous conversations between adults, words are often pronounced with fewer segments or syllables than their citation forms. The question arises whether infant-directed speech also contains phonetic reduction. If so, infants would be presented with speech input that enables them to acquire reduced variants from an early age. This study compared speech directed at 11- and 12-month-old infants with adult-directed conversational speech and adult-directed read speech. In an acoustic study, 216 tokens of the Dutch words allemaal and helemaal from speech corpora were analyzed for duration, number of syllables, and vowel quality. In a perception study, adult participants rated these same materials for reduction and provided phonetic transcriptions. The results show that these two words are frequently reduced in infant-directed speech, and that their degree of reduction is comparable with conversational adult-directed speech. These findings suggest that lexical representations for reduced pronunciation variants can be acquired early in linguistic development.

  • Lai, V. T., Curran, T., & Menn, L. (2009). Comprehending conventional and novel metaphors: An ERP study. Brain Research, 1284, 145-155. doi:10.1016/j.brainres.2009.05.088.
  • Lai, C. S. L., Gerrelli, D., Monaco, A. P., Fisher, S. E., & Copp, A. J. (2003). FOXP2 expression during brain development coincides with adult sites of pathology in a severe speech and language disorder. Brain, 126(11), 2455-2462. doi:10.1093/brain/awg247.

    Abstract

    Disruption of FOXP2, a gene encoding a forkhead-domain transcription factor, causes a severe developmental disorder of verbal communication, involving profound articulation deficits, accompanied by linguistic and grammatical impairments. Investigation of the neural basis of this disorder has been limited previously to neuroimaging of affected children and adults. The discovery of the gene responsible, FOXP2, offers a unique opportunity to explore the relevant neural mechanisms from a molecular perspective. In the present study, we have determined the detailed spatial and temporal expression pattern of FOXP2 mRNA in the developing brain of mouse and human. We find expression in several structures including the cortical plate, basal ganglia, thalamus, inferior olives and cerebellum. These data support a role for FOXP2 in the development of corticostriatal and olivocerebellar circuits involved in motor control. We find intriguing concordance between regions of early expression and later sites of pathology suggested by neuroimaging. Moreover, the homologous pattern of FOXP2/Foxp2 expression in human and mouse argues for a role for this gene in development of motor-related circuits throughout mammalian species. Overall, this study provides support for the hypothesis that impairments in sequencing of movement and procedural learning might be central to the FOXP2-related speech and language disorder.
  • Lai, V. T., Garrido Rodriguez, G., & Narasimhan, B. (2014). Thinking-for-speaking in early and late bilinguals. Bilingualism: Language and Cognition, 17, 139-152. doi:10.1017/S1366728913000151.

    Abstract

    When speakers describe motion events using different languages, they subsequently classify those events in language-specific ways (Gennari, Sloman, Malt & Fitch, 2002). Here we ask if bilingual speakers flexibly shift their event classification preferences based on the language in which they verbally encode those events. English–Spanish bilinguals and monolingual controls described motion events in either Spanish or English. Subsequently they judged the similarity of the motion events in a triad task. Bilinguals tested in Spanish and Spanish monolinguals were more likely to make similarity judgments based on the path of motion versus bilinguals tested in English and English monolinguals. The effect is modulated in bilinguals by the age of acquisition of the second language. Late bilinguals based their judgments on path more often when Spanish was used to describe the motion events versus English. Early bilinguals had a path preference independent of the language in use. These findings support “thinking-for-speaking” (Slobin, 1996) in late bilinguals.
  • De Lange, F. P., Kalkman, J. S., Bleijenberg, G., Hagoort, P., Van der Werf, S. P., Van der Meer, J. W. M., & Toni, I. (2004). Neural correlates of the chronic fatigue syndrome: An fMRI study. Brain, 127(9), 1948-1957. doi:10.1093/brain/awh225.

    Abstract

    Chronic fatigue syndrome (CFS) is characterized by a debilitating fatigue of unknown aetiology. Patients who suffer from CFS report a variety of physical complaints as well as neuropsychological complaints. Therefore, it is conceivable that the CNS plays a role in the pathophysiology of CFS. The purpose of this study was to investigate neural correlates of CFS, and specifically whether there exists a linkage between disturbances in the motor system and CFS. We measured behavioural performance and cerebral activity using rapid event-related functional MRI in 16 CFS patients and 16 matched healthy controls while they were engaged in a motor imagery task and a control visual imagery task. CFS patients were considerably slower on performance of both tasks, but the increase in reaction time with increasing task load was similar between the groups. Both groups used largely overlapping neural resources. However, during the motor imagery task, CFS patients evoked stronger responses in visually related structures. Furthermore, there was a marked between-groups difference during erroneous performance. In both groups, dorsal anterior cingulate cortex was specifically activated during error trials. Conversely, ventral anterior cingulate cortex was active when healthy controls made an error, but remained inactive when CFS patients made an error. Our results support the notion that CFS may be associated with dysfunctional motor planning. Furthermore, the between-groups differences observed during erroneous performance point to motivational disturbances as a crucial component of CFS.
  • De Lange, F. P., Koers, A., Kalkman, J. S., Bleijenberg, G., Hagoort, P., Van der Meer, J. W. M., & Toni, I. (2009). Reply to: "Can CBT substantially change grey matter volume in chronic fatigue syndrome" [Letter to the editor]. Brain, 132(6), e111. doi:10.1093/brain/awn208.
  • De Lange, F., Bleijenberg, G., Van der Meer, J. W. M., Hagoort, P., & Toni, I. (2009). Reply: Change in grey matter volume cannot be assumed to be due to cognitive behavioural therapy [Letter to the editor]. Brain, 132(7), e120. doi:10.1093/brain/awn359.
  • De Lange, F. P., Knoop, H., Bleijenberg, G., Van der Meer, J. W. M., Hagoort, P., & Toni, I. (2009). The experience of fatigue in the brain [Letter to the editor]. Psychological Medicine, 39, 523-524. doi:10.1017/S0033291708004844.
  • Lartseva, A., Dijkstra, T., Kan, C. C., & Buitelaar, J. K. (2014). Processing of emotion words by patients with Autism Spectrum Disorders: Evidence from reaction times and EEG. Journal of Autism and Developmental Disorders, 44, 2882-2894. doi:10.1007/s10803-014-2149-z.

    Abstract

    This study investigated processing of emotion words in autism spectrum disorders (ASD) using reaction times and event-related potentials (ERP). Adults with (n = 21) and without (n = 20) ASD performed a lexical decision task on emotion and neutral words while their brain activity was recorded. Both groups showed faster responses to emotion words compared to neutral, suggesting intact early processing of emotion in ASD. In the ERPs, the control group showed a typical late positive component (LPC) at 400-600 ms for emotion words compared to neutral, while the ASD group showed no LPC. The between-group difference in LPC amplitude was significant, suggesting that emotion words were processed differently by individuals with ASD, although their behavioral performance was similar to that of typical individuals.
  • Lausberg, H., Cruz, R. F., Kita, S., Zaidel, E., & Ptito, A. (2003). Pantomime to visual presentation of objects: Left hand dyspraxia in patients with complete callosotomy. Brain, 126(2), 343-360. doi:10.1093/brain/awg042.

    Abstract

    Investigations of left hand praxis in imitation and object use in patients with callosal disconnection have yielded divergent results, inducing a debate between two theoretical positions. Whereas Liepmann suggested that the left hemisphere is motor dominant, others maintain that both hemispheres have equal motor competences and propose that left hand apraxia in patients with callosal disconnection is secondary to left hemispheric specialization for language or other task modalities. The present study aims to gain further insight into the motor competence of the right hemisphere by investigating pantomime of object use in split-brain patients. Three patients with complete callosotomy and, as control groups, five patients with partial callosotomy and nine healthy subjects were examined for their ability to pantomime object use to visual object presentation and demonstrate object manipulation. In each condition, 11 objects were presented to the subjects who pantomimed or demonstrated the object use with either hand. In addition, six object pairs were presented to test bimanual coordination. Two independent raters evaluated the videotaped movement demonstrations. While object use demonstrations were perfect in all three groups, the split-brain patients displayed apraxic errors only with their left hands in the pantomime condition. The movement analysis of concept and execution errors included the examination of ipsilateral versus contralateral motor control. As the right hand/left hemisphere performances demonstrated retrieval of the correct movement concepts, concept errors by the left hand were taken as evidence for right hemisphere control. Several types of execution errors reflected a lack of distal motor control indicating the use of ipsilateral pathways. While one split-brain patient controlled his left hand predominantly by ipsilateral pathways in the pantomime condition, the error profile in the other two split-brain patients suggested that the right hemisphere controlled their left hands. In the object use condition, in all three split-brain patients fine-graded distal movements in the left hand indicated right hemispheric control. Our data show left hand apraxia in split-brain patients is not limited to verbal commands, but also occurs in pantomime to visual presentation of objects. As the demonstration with object in hand was unimpaired in either hand, both hemispheres must contain movement concepts for object use. However, the disconnected right hemisphere is impaired in retrieving the movement concept in response to visual object presentation, presumably because of a deficit in associating perceptual object representation with the movement concepts.
  • Lausberg, H., Kita, S., Zaidel, E., & Ptito, A. (2003). Split-brain patients neglect left personal space during right-handed gestures. Neuropsychologia, 41(10), 1317-1329. doi:10.1016/S0028-3932(03)00047-2.

    Abstract

    Since some patients with right hemisphere damage or with spontaneous callosal disconnection neglect the left half of space, it has been suggested that the left cerebral hemisphere predominantly attends to the right half of space. However, clinical investigations of patients having undergone surgical callosal section have not shown neglect when the hemispheres are tested separately. These observations question the validity of theoretical models that propose a left hemispheric specialisation for attending to the right half of space. The present study aims to investigate neglect and the use of space by either hand in gestural demonstrations in three split-brain patients as compared to five patients with partial callosotomy and 11 healthy subjects. Subjects were asked to demonstrate with precise gestures and without speaking the content of animated scenes with two moving objects. The results show that in the absence of primary perceptual or representational neglect, split-brain patients neglect left personal space in right-handed gestural demonstrations. Since this neglect of left personal space cannot be explained by directional or spatial akinesia, it is suggested that it originates at the conceptual level, where the spatial coordinates for right-hand gestures are planned. The present findings are at odds with the position that the separate left hemisphere possesses adequate mechanisms for acting in both halves of space and neglect results from right hemisphere suppression of this potential. Rather, the results provide support for theoretical models that consider the left hemisphere as specialised for processing the right half of space during the execution of descriptive gestures.
  • Lausberg, H., & Kita, S. (2003). The content of the message influences the hand choice in co-speech gestures and in gesturing without speaking. Brain and Language, 86(1), 57-69. doi:10.1016/S0093-934X(02)00534-5.

    Abstract

    The present study investigates the hand choice in iconic gestures that accompany speech. In 10 right-handed subjects gestures were elicited by verbal narration and by silent gestural demonstrations of animations with two moving objects. In both conditions, the left hand was used as often as the right hand to display iconic gestures. The choice of the right or the left hand was determined by semantic aspects of the message. The influence of hemispheric language lateralization on the hand choice in co-speech gestures appeared to be minor. Instead, speaking seemed to induce a sequential organization of the iconic gestures.
  • Lausberg, H., & Sloetjes, H. (2009). Coding gestural behavior with the NEUROGES-ELAN system. Behavior Research Methods, Instruments, & Computers, 41(3), 841-849. doi:10.3758/BRM.41.3.841.

    Abstract

    We present a coding system combined with an annotation tool for the analysis of gestural behavior. The NEUROGES coding system consists of three modules that progress from gesture kinetics to gesture function. Grounded on empirical neuropsychological and psychological studies, the theoretical assumption behind NEUROGES is that its main kinetic and functional movement categories are differentially associated with specific cognitive, emotional, and interactive functions. ELAN is a free, multimodal annotation tool for digital audio and video media. It supports multileveled transcription and complies with such standards as XML and Unicode. ELAN allows gesture categories to be stored with associated vocabularies that are reusable by means of template files. The combination of the NEUROGES coding system and the annotation tool ELAN creates an effective tool for empirical research on gestural behavior.
  • Lemhoefer, K., Schriefers, H., & Indefrey, P. (2014). Idiosyncratic grammars: Syntactic processing in second language comprehension uses subjective feature representations. Journal of Cognitive Neuroscience, 26(7), 1428-1444. doi:10.1162/jocn_a_00609.

    Abstract

    Learning the syntax of a second language (L2) often represents a big challenge to L2 learners. Previous research on syntactic processing in L2 has mainly focused on how L2 speakers respond to "objective" syntactic violations, that is, phrases that are incorrect by native standards. In this study, we investigate how L2 learners, in particular those of less than near-native proficiency, process phrases that deviate from their own, "subjective," and often incorrect syntactic representations, that is, whether they use these subjective and idiosyncratic representations during sentence comprehension. We study this within the domain of grammatical gender in a population of German learners of Dutch, for which systematic errors of grammatical gender are well documented. These L2 learners as well as a control group of Dutch native speakers read Dutch sentences containing gender-marked determiner-noun phrases in which gender agreement was either (objectively) correct or incorrect. Furthermore, the noun targets were selected such that, in a high proportion of nouns, objective and subjective correctness would differ for German learners. The ERP results show a syntactic violation effect (P600) for objective gender agreement violations for native, but not for nonnative speakers. However, when the items were re-sorted for the L2 speakers according to subjective correctness (as assessed offline), the P600 effect emerged as well. Thus, rather than being insensitive to violations of gender agreement, L2 speakers are similarly sensitive as native speakers but base their sensitivity on their subjective, sometimes incorrect, representations.

  • Lensink, S. E., Verdonschot, R. G., & Schiller, N. O. (2014). Morphological priming during language switching: An ERP study. Frontiers in Human Neuroscience, 8: 995. doi:10.3389/fnhum.2014.00995.

    Abstract

    Bilingual language control (BLC) is a much-debated issue in recent literature. Some models assume BLC is achieved by various types of inhibition of the non-target language, whereas other models do not assume any inhibitory mechanisms. In an event-related potential (ERP) study involving a long-lag morphological priming paradigm, participants were required to name pictures and read aloud words in both their L1 (Dutch) and L2 (English). Switch blocks contained intervening L1 items between L2 primes and targets, whereas non-switch blocks contained only L2 stimuli. In non-switch blocks, target picture names that were morphologically related to the primes were named faster than unrelated control items. In switch blocks, faster response latencies were recorded for morphologically related targets as well, demonstrating the existence of morphological priming in the L2. However, only in non-switch blocks, ERP data showed a reduced N400 trend, possibly suggesting that participants made use of a post-lexical checking mechanism during the switch block.
  • Lev-Ari, S., & Peperkamp, S. (2014). An experimental study of the role of social factors in sound change. Laboratory Phonology, 5(3), 379-401. doi:10.1515/lp-2014-0013.

    Abstract

    There is great variation in whether foreign sounds in loanwords are adapted or retained. Importantly, the retention of foreign sounds can lead to a sound change in the language. We propose that social factors influence the likelihood of loanword sound adaptation, and use this case to introduce a novel experimental paradigm for studying language change that captures the role of social factors. Specifically, we show that the relative prestige of the donor language in the loanword's semantic domain influences the rate of sound adaptation. We further show that speakers adapt to the performance of their ‘community’, and that this adaptation leads to the creation of a norm. The results of this study are thus the first to show an effect of social factors on loanword sound adaptation in an experimental setting. Moreover, they open up a new domain of experimentally studying language change in a manner that integrates social factors.
  • Lev-Ari, S., & Keysar, B. (2014). Executive control influences linguistic representations. Memory & Cognition, 42(2), 247-263. doi:10.3758/s13421-013-0352-3.

    Abstract

    Although it is known that words acquire their meanings partly from the contexts in which they are used, we proposed that the way in which words are processed can also influence their representation. We further propose that individual differences in the way that words are processed can consequently lead to individual differences in the way that they are represented. Specifically, we showed that executive control influences linguistic representations by influencing the coactivation of competing and reinforcing terms. Consequently, people with poorer executive control perceive the meanings of homonymous terms as being more similar to one another, and those of polysemous terms as being less similar to one another, than do people with better executive control. We also showed that bilinguals with poorer executive control experience greater cross-linguistic interference than do bilinguals with better executive control. These results have implications for theories of linguistic representation and language organization.
  • Lev-Ari, S., San Giacomo, M., & Peperkamp, S. (2014). The effect of domain prestige and interlocutors’ bilingualism on sound adaptation. Journal of Sociolinguistics, 18(5), 658-684. doi:10.1111/josl.12102.

    Abstract

    There is great variability in whether foreign sounds in loanwords are adapted, such that segments show cross-word and cross-situational variation in adaptation. Previous research proposed that word frequency, speakers' level of bilingualism and neighborhoods' level of bilingualism can explain such variability. We test for the effect of these factors and propose two additional factors: interlocutors' level of bilingualism and the prestige of the donor language in the loanword's domain. Analyzing elicited productions of loanwords from Spanish into Mexicano in a village where Spanish and Mexicano enjoy prestige in complementary domains, we show that interlocutors' bilingualism and prestige influence the rate of sound adaptation. Additionally, we find that speakers accommodate to their interlocutors, regardless of the interlocutors' level of bilingualism. As retention of foreign sounds can lead to sound change, these results show that social factors can influence changes in a language's sound system.
  • Lev-Ari, S., & Peperkamp, S. (2014). The influence of inhibitory skill on phonological representations in production and perception. Journal of Phonetics, 47, 36-46. doi:10.1016/j.wocn.2014.09.001.

    Abstract

    Inhibition is known to play a role in speech perception and has been hypothesized to likewise influence speech production. In this paper we test whether individual differences in inhibitory skill can lead to individual differences in phonological representations in perception and production. We further examine whether the type of inhibition that influences phonological representation is domain-specific or domain-general. Native French speakers read aloud sentences with words containing a voiced stop that either have a voicing neighbor (target) or not (control). The duration of pre-voicing was measured. Participants similarly performed a lexical decision task on versions of these target and matched control words whose pre-voicing duration was manipulated. Lastly, participants performed linguistic and non-linguistic inhibition tasks. Results indicate that the lower speakers' linguistic or non-linguistic inhibition is, the easier it is for them to recognize words with a voiceless neighbor when these words have a shorter, intermediate, pre-voicing rather than a longer one. Inhibitory skill did not predict recognition time for control words, indicating that the effect was due to the greater activation of the voiceless neighbor. Inhibition did not predict pre-voicing duration in production. These results indicate that individual differences in cognitive skills can influence phonological representations in speech perception.
  • Levelt, W. J. M., Meyer, A. S., & Roelofs, A. (2004). Relations of lexical access to neural implementation and syntactic encoding [author's response]. Behavioral and Brain Sciences, 27, 299-301. doi:10.1017/S0140525X04270078.

    Abstract

    How can one conceive of the neuronal implementation of the processing model we proposed in our target article? In his commentary (Pulvermüller 1999, reprinted here in this issue), Pulvermüller makes various proposals concerning the underlying neural mechanisms and their potential localizations in the brain. These proposals demonstrate the compatibility of our processing model and current neuroscience. We add further evidence on details of localization based on a recent meta-analysis of neuroimaging studies of word production (Indefrey & Levelt 2000). We also express some minor disagreements with respect to Pulvermüller’s interpretation of the “lemma” notion, and concerning his neural modeling of phonological code retrieval. Branigan & Pickering discuss important aspects of syntactic encoding, which was not the topic of the target article. We discuss their well-taken proposal that multiple syntactic frames for a single verb lemma are represented as independent nodes, which can be shared with other verbs, such as accounting for syntactic priming in speech production. We also discuss how, in principle, the alternative multiple-frame-multiple-lemma account can be tested empirically. The available evidence does not seem to support that account.
  • Levelt, W. J. M. (2004). Speech, gesture and the origins of language. European Review, 12(4), 543-549. doi:10.1017/S1062798704000468.

    Abstract

    During the second half of the 19th century, the psychology of language was invented as a discipline for the sole purpose of explaining the evolution of spoken language. These efforts culminated in Wilhelm Wundt’s monumental Die Sprache of 1900, which outlined the psychological mechanisms involved in producing utterances and considered how these mechanisms could have evolved. Wundt assumes that articulatory movements were originally rather arbitrary concomitants of larger, meaningful expressive bodily gestures. The sounds such articulations happened to produce slowly acquired the meaning of the gesture as a whole, ultimately making the gesture superfluous. Over a century later, gestural theories of language origins still abound. I argue that such theories are unlikely and wasteful, given the biological, neurological and genetic evidence.
  • Levelt, W. J. M. (2004). Een huis voor kunst en wetenschap. Boekman: Tijdschrift voor Kunst, Cultuur en Beleid, 16(58/59), 212-215.
  • Levelt, W. J. M., Praamstra, P., Meyer, A. S., Helenius, P., & Salmelin, R. (1998). An MEG study of picture naming. Journal of Cognitive Neuroscience, 10(5), 553-567. doi:10.1162/089892998562960.

    Abstract

    The purpose of this study was to relate a psycholinguistic processing model of picture naming to the dynamics of cortical activation during picture naming. The activation was recorded from eight Dutch subjects with a whole-head neuromagnetometer. The processing model, based on extensive naming latency studies, is a stage model. In preparing a picture's name, the speaker performs a chain of specific operations. They are, in this order, computing the visual percept, activating an appropriate lexical concept, selecting the target word from the mental lexicon, phonological encoding, phonetic encoding, and initiation of articulation. The time windows for each of these operations are reasonably well known and could be related to the peak activity of dipole sources in the individual magnetic response patterns. The analyses showed a clear progression over these time windows from early occipital activation, via parietal and temporal to frontal activation. The major specific findings were that (1) a region in the left posterior temporal lobe, agreeing with the location of Wernicke's area, showed prominent activation starting about 200 msec after picture onset and peaking at about 350 msec (i.e., within the stage of phonological encoding), and (2) a consistent activation was found in the right parietal cortex, peaking at about 230 msec after picture onset, thus preceding and partly overlapping with the left temporal response. An interpretation in terms of the management of visual attention is proposed.
  • Levelt, W. J. M. (1983). Monitoring and self-repair in speech. Cognition, 14, 41-104. doi:10.1016/0010-0277(83)90026-4.

    Abstract

    Making a self-repair in speech typically proceeds in three phases. The first phase involves the monitoring of one’s own speech and the interruption of the flow of speech when trouble is detected. From an analysis of 959 spontaneous self-repairs it appears that interrupting follows detection promptly, with the exception that correct words tend to be completed. Another finding is that detection of trouble improves towards the end of constituents. The second phase is characterized by hesitation, pausing, but especially the use of so-called editing terms. Which editing term is used depends on the nature of the speech trouble in a rather regular fashion: Speech errors induce other editing terms than words that are merely inappropriate, and trouble which is detected quickly by the speaker is preferably signalled by the use of ‘uh’. The third phase consists of making the repair proper. The linguistic well-formedness of a repair is not dependent on the speaker’s respecting the integrity of constituents, but on the structural relation between original utterance and repair. A bi-conditional well-formedness rule links this relation to a corresponding relation between the conjuncts of a coordination. It is suggested that a similar relation holds also between question and answer. In all three cases the speaker respects certain structural commitments derived from an original utterance. It was finally shown that the editing term plus the first word of the repair proper almost always contain sufficient information for the listener to decide how the repair should be related to the original utterance. Speakers almost never produce misleading information in this respect. It is argued that speakers have little or no access to their speech production process; self-monitoring is probably based on parsing one’s own inner or overt speech.
  • Levelt, W. J. M. (1982). Het lineariseringsprobleem van de spreker. Tijdschrift voor Taal- en Tekstwetenschap (TTT), 2(1), 1-15.
  • Levelt, W. J. M., & Schiller, N. O. (1998). Is the syllable frame stored? [Commentary on the BBS target article 'The frame/content theory of evolution of speech production' by Peter F. McNeilage]. Behavioral and Brain Sciences, 21, 520.

    Abstract

    This commentary discusses whether abstract metrical frames are stored. For stress-assigning languages (e.g., Dutch and English), which have a dominant stress pattern, metrical frames are stored only for words that deviate from the default stress pattern. The majority of the words in these languages are produced without retrieving any independent syllabic or metrical frame.
  • Levelt, W. J. M. (1988). Onder sociale wetenschappen. Mededelingen van de Afdeling Letterkunde, 51(2), 41-55.
  • Levelt, W. J. M., & Cutler, A. (1983). Prosodic marking in speech repair. Journal of Semantics, 2, 205-217. doi:10.1093/semant/2.2.205.

    Abstract

    Spontaneous self-corrections in speech pose a communication problem; the speaker must make clear to the listener not only that the original utterance was faulty, but where it was faulty and how the fault is to be corrected. Prosodic marking of corrections - making the prosody of the repair noticeably different from that of the original utterance - offers a resource which the speaker can exploit to provide the listener with such information. A corpus of more than 400 spontaneous speech repairs was analysed, and the prosodic characteristics compared with the syntactic and semantic characteristics of each repair. Prosodic marking showed no relationship at all with the syntactic characteristics of repairs. Instead, marking was associated with certain semantic factors: repairs were marked when the original utterance had been actually erroneous, rather than simply less appropriate than the repair; and repairs tended to be marked more often when the set of items encompassing the error and the repair was small rather than when it was large. These findings lend further weight to the characterization of accent as essentially semantic in function.
  • Levelt, W. J. M. (1984). Sprache und Raum. Texten und Schreiben, 20, 18-21.
  • Levelt, W. J. M., & Kelter, S. (1982). Surface form and memory in question answering. Cognitive Psychology, 14, 78-106. doi:10.1016/0010-0285(82)90005-6.

    Abstract

    Speakers tend to repeat materials from previous talk. This tendency is experimentally established and manipulated in various question-answering situations. It is shown that a question's surface form can affect the format of the answer given, even if this form has little semantic or conversational consequence, as in the pair Q: (At) what time do you close? A: “(At) five o'clock.” Answerers tend to match the utterance to the prepositional (nonprepositional) form of the question. This “correspondence effect” may diminish or disappear when, following the question, additional verbal material is presented to the answerer. The experiments show that neither the articulatory buffer nor long-term memory is normally involved in this retention of recent speech. Retaining recent speech in working memory may fulfill a variety of functions for speaker and listener, among them the correct production and interpretation of surface anaphora. Reusing recent materials may, moreover, be more economical than regenerating speech anew from a semantic base, and thus contribute to fluency. But the realization of this strategy requires a production system in which linguistic formulation can take place relatively independent of, and parallel to, conceptual planning.
  • Levelt, W. J. M. (1982). Science policy: Three recent idols, and a goddess. IPO Annual Progress Report, 17, 32-35.
  • Levelt, W. J. M., Richardson, G., & La Heij, W. (1985). Pointing and voicing in deictic expressions. Journal of Memory and Language, 24, 133-164. doi:10.1016/0749-596X(85)90021-X.

    Abstract

    The present paper studies how, in deictic expressions, the temporal interdependency of speech and gesture is realized in the course of motor planning and execution. Two theoretical positions were compared. On the “interactive” view the temporal parameters of speech and gesture are claimed to be the result of feedback between the two systems throughout the phases of motor planning and execution. The alternative “ballistic” view, however, predicts that the two systems are independent during the phase of motor execution, the temporal parameters having been preestablished in the planning phase. In four experiments subjects were requested to indicate which of an array of referent lights was momentarily illuminated. This was done by pointing to the light and/or by using a deictic expression (this/that light). The temporal and spatial course of the pointing movement was automatically registered by means of a Selspot opto-electronic system. By analyzing the moments of gesture initiation and apex, and relating them to the moments of speech onset, it was possible to show that, for deictic expressions, the ballistic view is very nearly correct.
  • Levelt, W. J. M. (1998). The genetic perspective in psycholinguistics, or: Where do spoken words come from? Journal of Psycholinguistic Research, 27(2), 167-180. doi:10.1023/A:1023245931630.

    Abstract

    The core issue in the 19th-century sources of psycholinguistics was the question, "Where does language come from?" This genetic perspective unified the study of the ontogenesis, the phylogenesis, the microgenesis, and to some extent the neurogenesis of language. This paper makes the point that this original perspective is still a valid and attractive one. It is exemplified by a discussion of the genesis of spoken words.
  • Levelt, W. J. M. (1983). Wetenschapsbeleid: Drie actuele idolen en een godin. Grafiet, 1(4), 178-184.
  • Levelt, W. J. M. (1993). Timing in speech production with special reference to word form encoding. Annals of the New York Academy of Sciences, 682, 283-295. doi:10.1111/j.1749-6632.1993.tb22976.x.
  • Levelt, W. J. M. (1982). Zelfcorrecties in het spreekproces. KNAW: Mededelingen van de afdeling letterkunde, nieuwe reeks, 45(8), 215-228.
  • Levelt, W. J. M. (1979). On learnability: A reply to Lasnik and Chomsky. Unpublished manuscript.
  • Levinson, S. C., & Brown, P. (1993). Background to "Immanuel Kant among the Tenejapans". Anthropology Newsletter, 34(3), 22-23. doi:10.1111/an.1993.34.3.22.
  • Levinson, S. C. (1979). Activity types and language. Linguistics, 17, 365-399.
  • Levinson, S. C., & Majid, A. (2014). Differential ineffability and the senses. Mind & Language, 29, 407-427. doi:10.1111/mila.12057.

    Abstract

    Ineffability, the degree to which percepts or concepts resist linguistic coding, is a fairly unexplored nook of cognitive science. Although philosophical preoccupations with qualia or nonconceptual content certainly touch upon the area, there has been little systematic thought and hardly any empirical work in recent years on the subject. We argue that ineffability is an important domain for the cognitive sciences. For examining differential ineffability across the senses may be able to tell us important things about how the mind works, how different modalities talk to one another, and how language does, or does not, interact with other mental faculties.
  • Levinson, S. C., & Brown, P. (2003). Emmanuel Kant chez les Tenejapans: L'Anthropologie comme philosophie empirique [Translated by Claude Vandeloise for 'Langues et Cognition']. Langues et Cognition, 239-278.

    Abstract

    This is a translation of Levinson and Brown (1994).
  • Levinson, S. C., & Meira, S. (2003). 'Natural concepts' in the spatial topological domain - adpositional meanings in crosslinguistic perspective: An exercise in semantic typology. Language, 79(3), 485-516.

    Abstract

    Most approaches to spatial language have assumed that the simplest spatial notions are (after Piaget) topological and universal (containment, contiguity, proximity, support, represented as semantic primitives such as IN, ON, UNDER, etc.). These concepts would be coded directly in language, above all in small closed classes such as adpositions—thus providing a striking example of semantic categories as language-specific projections of universal conceptual notions. This idea, if correct, should have as a consequence that the semantic categories instantiated in spatial adpositions should be essentially uniform crosslinguistically. This article attempts to verify this possibility by comparing the semantics of spatial adpositions in nine unrelated languages, with the help of a standard elicitation procedure, thus producing a preliminary semantic typology of spatial adpositional systems. The differences between the languages turn out to be so significant as to be incompatible with stronger versions of the UNIVERSAL CONCEPTUAL CATEGORIES hypothesis. Rather, the language-specific spatial adposition meanings seem to emerge as compact subsets of an underlying semantic space, with certain areas being statistical ATTRACTORS or FOCI. Moreover, a comparison of systems with different degrees of complexity suggests the possibility of positing implicational hierarchies for spatial adpositions. But such hierarchies need to be treated as successive divisions of semantic space, as in recent treatments of basic color terms. This type of analysis appears to be a promising approach for future work in semantic typology.
  • Levinson, S. C. (2014). Language and Wallace's problem [Review of the books More than nature needs: Language, mind and evolution by D. Bickerton and A natural history of human thinking by M. Tomasello]. Science, 344, 1458-1459. doi:10.1126/science.1252988.
  • Levinson, S. C., & Burenhult, N. (2009). Semplates: A new concept in lexical semantics? Language, 85, 153-174. doi:10.1353/lan.0.0090.

    Abstract

    This short report draws attention to an interesting kind of configuration in the lexicon that seems to have escaped theoretical or systematic descriptive attention. These configurations, which we dub SEMPLATES, consist of an abstract structure or template, which is recurrently instantiated in a number of lexical sets, typically of different form classes. A number of examples from different language families are adduced, and generalizations made about the nature of semplates, which are contrasted to other, perhaps similar, phenomena.
  • Levinson, S. C. (1998). Studying spatial conceptualization across cultures: Anthropology and cognitive science. Ethos, 26(1), 7-24. doi:10.1525/eth.1998.26.1.7.

    Abstract

    Philosophers, psychologists, and linguists have argued that spatial conception is pivotal to cognition in general, providing a general, egocentric, and universal framework for cognition as well as metaphors for conceptualizing many other domains. But in an aboriginal community in Northern Queensland, a system of cardinal directions informs not only language, but also memory for arbitrary spatial arrays and directions. This work suggests that fundamental cognitive parameters, like the system of coding spatial locations, can vary cross-culturally, in line with the language spoken by a community. This opens up the prospect of a fruitful dialogue between anthropology and the cognitive sciences on the complex interaction between cultural and universal factors in the constitution of mind.
  • Levinson, S. C., & Holler, J. (2014). The origin of human multi-modal communication. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 369(1651): 20130302. doi:10.1098/rstb.2013.0302.

    Abstract

    One reason for the apparent gulf between animal and human communication systems is that the focus has been on the presence or the absence of language as a complex expressive system built on speech. But language normally occurs embedded within an interactional exchange of multi-modal signals. If this larger perspective takes central focus, then it becomes apparent that human communication has a layered structure, where the layers may be plausibly assigned different phylogenetic and evolutionary origins—especially in the light of recent thoughts on the emergence of voluntary breathing and spoken language. This perspective helps us to appreciate the different roles that the different modalities play in human communication, as well as how they function as one integrated system despite their different roles and origins. It also offers possibilities for reconciling the ‘gesture-first hypothesis’ with that of gesture and speech having evolved together, hand in hand—or hand in mouth, rather—as one system.
  • Levy, J., Hagoort, P., & Démonet, J.-F. (2014). A neuronal gamma oscillatory signature during morphological unification in the left occipitotemporal junction. Human Brain Mapping, 35, 5847-5860. doi:10.1002/hbm.22589.

    Abstract

    Morphology is the aspect of language concerned with the internal structure of words. In the past decades, a large body of masked priming (behavioral and neuroimaging) data has suggested that the visual word recognition system automatically decomposes any morphologically complex word into a stem and its constituent morphemes. Yet the reliance of morphology on other reading processes (e.g., orthography and semantics), as well as its underlying neuronal mechanisms are yet to be determined. In the current magnetoencephalography study, we addressed morphology from the perspective of the unification framework, that is, by applying the Hold/Release paradigm, morphological unification was simulated via the assembly of internal morphemic units into a whole word. Trials representing real words were divided into words with a transparent (true) or a nontransparent (pseudo) morphological relationship. Morphological unification of truly suffixed words was faster and more accurate and additionally enhanced induced oscillations in the narrow gamma band (60–85 Hz, 260–440 ms) in the left posterior occipitotemporal junction. This neural signature could not be explained by a mere automatic lexical processing (i.e., stem perception), but more likely it related to a semantic access step during the morphological unification process. By demonstrating the validity of unification at the morphological level, this study contributes to the vast empirical evidence on unification across other language processes. Furthermore, we point out that morphological unification relies on the retrieval of lexical semantic associations via induced gamma band oscillations in a cerebral hub region for visual word form processing.
  • Lewis, A., Freeman-Mills, L., de la Calle-Mustienes, E., Giráldez-Pérez, R. M., Davis, H., Jaeger, E., Becker, M., Hubner, N. C., Nguyen, L. N., Zeron-Medina, J., Bond, G., Stunnenberg, H. G., Carvajal, J. J., Gomez-Skarmeta, J. L., Leedham, S., & Tomlinson, I. (2014). A polymorphic enhancer near GREM1 influences bowel cancer risk through differential CDX2 and TCF7L2 binding. Cell Reports, 8(4), 983-990. doi:10.1016/j.celrep.2014.07.020.

    Abstract

    A rare germline duplication upstream of the bone morphogenetic protein antagonist GREM1 causes a Mendelian-dominant predisposition to colorectal cancer (CRC). The underlying disease mechanism is strong, ectopic GREM1 overexpression in the intestinal epithelium. Here, we confirm that a common GREM1 polymorphism, rs16969681, is also associated with CRC susceptibility, conferring ∼20% differential risk in the general population. We hypothesized the underlying cause to be moderate differences in GREM1 expression. We showed that rs16969681 lies in a region of active chromatin with allele- and tissue-specific enhancer activity. The CRC high-risk allele was associated with stronger gene expression, and higher Grem1 mRNA levels increased the intestinal tumor burden in ApcMin mice. The intestine-specific transcription factor CDX2 and Wnt effector TCF7L2 bound near rs16969681, with significantly higher affinity for the risk allele, and CDX2 overexpression in CDX2/GREM1-negative cells caused re-expression of GREM1. rs16969681 influences CRC risk through effects on Wnt-driven GREM1 expression in colorectal tumors.
  • Liljeström, M., Hulten, A., Parkkonen, L., & Salmelin, R. (2009). Comparing MEG and fMRI views to naming actions and objects. Human Brain Mapping, 30, 1845-1856. doi:10.1002/hbm.20785.

    Abstract

    Most neuroimaging studies are performed using one imaging method only, either functional magnetic resonance imaging (fMRI), electroencephalography (EEG), or magnetoencephalography (MEG). Information on both location and timing has been sought by recording fMRI and EEG, simultaneously, or MEG and fMRI in separate sessions. Such approaches assume similar active areas whether detected via hemodynamic or electrophysiological signatures. Direct comparisons, after independent analysis of data from each imaging modality, have been conducted primarily on low-level sensory processing. Here, we report MEG (timing and location) and fMRI (location) results in 11 subjects when they named pictures that depicted an action or an object. The experimental design was exactly the same for the two imaging modalities. The MEG data were analyzed with two standard approaches: a set of equivalent current dipoles and a distributed minimum norm estimate. The fMRI blood-oxygen-level-dependent (BOLD) data were subjected to the usual random-effect contrast analysis. At the group level, MEG and fMRI data showed fairly good convergence, with both overall activation patterns and task effects localizing to comparable cortical regions. There were some systematic discrepancies, however, and the correspondence was less compelling in the individual subjects. The present analysis should be helpful in reconciling results of fMRI and MEG studies on high-level cognitive functions.
  • Liszkowski, U., Schäfer, M., Carpenter, M., & Tomasello, M. (2009). Prelinguistic infants, but not chimpanzees, communicate about absent entities. Psychological Science, 20, 654-660.

    Abstract

    One of the defining features of human language is displacement, the ability to make reference to absent entities. Here we show that prelinguistic, 12-month-old infants already can use a nonverbal pointing gesture to make reference to absent entities. We also show that chimpanzees—who can point for things they want humans to give them—do not point to refer to absent entities in the same way. These results demonstrate that the ability to communicate about absent but mutually known entities depends not on language, but rather on deeper social-cognitive skills that make acts of linguistic reference possible in the first place. These nonlinguistic skills for displaced reference emerged apparently only after humans' divergence from great apes some 6 million years ago.
  • Liszkowski, U., Carpenter, M., Henning, A., Striano, T., & Tomasello, M. (2004). Twelve-month-olds point to share attention and interest. Developmental Science, 7(3), 297-307. doi:10.1111/j.1467-7687.2004.00349.x.

    Abstract

    Infants point for various motives. Classically, one such motive is declarative, to share attention and interest with adults to events. Recently, some researchers have questioned whether infants have this motivation. In the current study, an adult reacted to 12-month-olds' pointing in different ways, and infants' responses were observed. Results showed that when the adult shared attention and interest (i.e. alternated gaze and emoted), infants pointed more frequently across trials and tended to prolong each point – presumably to prolong the satisfying interaction. However, when the adult emoted to the infant alone or looked only to the event, infants pointed less across trials and repeated points more within trials – presumably in an attempt to establish joint attention. Results suggest that 12-month-olds point declaratively and understand that others have psychological states that can be directed and shared.
  • Liszkowski, U. (2014). Two sources of meaning in infant communication: Preceding action contexts and act-accompanying characteristics. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 369(1651): 20130294. doi:10.1098/rstb.2013.0294.
  • Littauer, R., Roberts, S. G., Winters, J., Bailes, R., Pleyer, M., & Little, H. (2014). From the savannah to the cloud: Blogging evolutionary linguistics research. The Past, Present and Future of Language Evolution Research: Student Volume of the 9th International Conference on the Evolution of Language, 121-133.

    Abstract

    Over the last thirty years, evolutionary linguistics has grown as a data-driven, interdisciplinary field and received accelerated interest due to its adoption of modern research methodologies. This growth is dependent upon the methods used to both disseminate and foster discussion of research by the larger academic community. We argue that the internet is increasingly being used as an efficient means of finding and presenting research. The traditional journal format for disseminating knowledge was well-designed within the confines of print publication. With the tools afforded to us by technology and the internet, the evolutionary linguistics research community is able to compensate for the necessary shortcomings of the journal format. We evaluate examples of how research blogging has aided language scientists. We review the state of the field for online, real-time academic debate, by covering particular instances of post-publication review and their reaction. We conclude by considering how evolutionary linguistics as a field can potentially benefit from using the internet.
  • Liu, C., Kong, X., Liu, X., Zhou, R., & Wu, B. (2014). Long-term total sleep deprivation reduces thalamic gray matter volume in healthy men. NeuroReport, 25(5), 320-323. doi:10.1097/WNR.0000000000000091.

    Abstract

    Sleep loss can alter extrinsic, task-related functional MRI signals involved in attention, memory, and executive function. However, the effects of sleep loss on brain structure have not been well characterized. Recent studies with patients with sleep disorders and animal models have demonstrated reduction of regional brain structure in the hippocampus and thalamus. In this study, using T1-weighted MRI, we examined the change of regional gray matter volume in healthy adults after long-term total sleep deprivation (∼72 h). Regional volume changes were explored using voxel-based morphometry with a paired two-sample t-test. The results revealed significant loss of gray matter volume in the thalamus but not in the hippocampus. No overall decrease in whole brain gray matter volume was noted after sleep deprivation. As expected, sleep deprivation significantly reduced visual vigilance as assessed by the continuous performance test, and this decrease was correlated significantly with reduced regional gray matter volume in thalamic regions. This study provides the first evidence for sleep loss-related changes in gray matter in the healthy adult brain.
  • Lohmann, A., & Takada, T. (2014). Order in NP conjuncts in spoken English and Japanese. Lingua, 152, 48-64. doi:10.1016/j.lingua.2014.09.011.

    Abstract

    In the emerging field of cross-linguistic studies on language production, one particularly interesting line of inquiry is possible differences between English and Japanese in ordering words and phrases. Previous research gives rise to the idea that there is a difference in accessing meaning versus form during linearization between these two languages. This assumption is based on observations of language-specific effects of the length factor on the order of phrases (short-before-long in English, long-before-short in Japanese). We contribute to the cross-linguistic exploration of such differences by investigating the variables underlying the internal order of NP conjuncts in spoken English and Japanese. Our quantitative analysis shows that similar influences underlie the ordering process across the two languages. Thus we do not find evidence for the aforementioned difference in accessing meaning versus form with this syntactic phenomenon. With regard to length, Japanese also exhibits a short-before-long preference. However, this tendency is significantly weaker in Japanese than in English, which we explain through an attenuating influence of the typical Japanese phrase structure pattern on the universal effect of short phrases being more accessible. We propose that a similar interaction between entrenched long-before-short schemas and universal accessibility effects is responsible for the varying effects of length in Japanese.
  • Loo, S. K., Fisher, S. E., Francks, C., Ogdie, M. N., MacPhie, I. L., Yang, M., McCracken, J. T., McGough, J. J., Nelson, S. F., Monaco, A. P., & Smalley, S. L. (2004). Genome-wide scan of reading ability in affected sibling pairs with attention-deficit/hyperactivity disorder: Unique and shared genetic effects. Molecular Psychiatry, 9, 485-493. doi:10.1038/sj.mp.4001450.

    Abstract

    Attention-deficit/hyperactivity disorder (ADHD) and reading disability (RD) are common highly heritable disorders of childhood, which frequently co-occur. Data from twin and family studies suggest that this overlap is, in part, due to shared genetic underpinnings. Here, we report the first genome-wide linkage analysis of measures of reading ability in children with ADHD, using a sample of 233 affected sibling pairs who previously participated in a genome-wide scan for susceptibility loci in ADHD. Quantitative trait locus (QTL) analysis of a composite reading factor defined from three highly correlated reading measures identified suggestive linkage (multipoint maximum lod score, MLS>2.2) in four chromosomal regions. Two regions (16p, 17q) overlap those implicated by our previous genome-wide scan for ADHD in the same sample; one region (2p) provides replication for an RD susceptibility locus, and one region (10q) falls approximately 35 cM from a modestly highlighted region in an independent genome-wide scan of siblings with ADHD. Investigation of an individual reading measure of Reading Recognition supported linkage to putative RD susceptibility regions on chromosome 8p (MLS=2.4) and 15q (MLS=1.38). Thus, the data support the existence of genetic factors that have pleiotropic effects on ADHD and reading ability--as suggested by shared linkages on 16p, 17q and possibly 10q--but also those that appear to be unique to reading--as indicated by linkages on 2p, 8p and 15q that coincide with those previously found in studies of RD. Our study also suggests that reading measures may represent useful phenotypes in ADHD research. The eventual identification of genes underlying these unique and shared linkages may increase our understanding of ADHD, RD and the relationship between the two.
  • Lundstrom, B. N., Petersson, K. M., Andersson, J., Johansson, M., Fransson, P., & Ingvar, M. (2003). Isolating the retrieval of imagined pictures during episodic memory: Activation of the left precuneus and left prefrontal cortex. Neuroimage, 20, 1934-1943. doi:10.1016/j.neuroimage.2003.07.017.

    Abstract

    The posterior medial parietal cortex and the left prefrontal cortex have both been implicated in the recollection of past episodes. In order to clarify their functional significance, we performed this functional magnetic resonance imaging study, which employed event-related source memory and item recognition retrieval of words paired with corresponding imagined or viewed pictures. Our results suggest that episodic source memory is related to a functional network including the posterior precuneus and the left lateral prefrontal cortex. This network is activated during explicit retrieval of imagined pictures and results from the retrieval of item-context associations. This suggests that previously imagined pictures provide a context with which encoded words can be more strongly associated.
  • Lüttjohann, A., Schoffelen, J.-M., & Van Luijtelaar, G. (2014). Termination of ongoing spike-wave discharges investigated by cortico-thalamic network analyses. Neurobiology of Disease, 70, 127-137. doi:10.1016/j.nbd.2014.06.007.

    Abstract

    Purpose: While decades of research have been devoted to studying the generation mechanisms of spontaneous spike and wave discharges (SWD), little attention has been paid to the network mechanisms associated with the spontaneous termination of SWD. In the current study, coupling dynamics at the onset and termination of SWD were studied in an extended part of the cortico-thalamo-cortical system of freely moving, genetic absence epileptic WAG/Rij rats. Methods: Local-field potential recordings were obtained from 16 male WAG/Rij rats equipped with multiple electrodes targeting layers 4 to 6 of the somatosensory cortex (ctx4, ctx5, ctx6), the rostral and caudal reticular thalamic nucleus (rRTN & cRTN), and the ventral posteromedial (VPM), anterior (ATN) and posterior (Po) thalamic nuclei. Six-second pre-SWD->SWD, SWD->post-SWD and control periods were analyzed with time-frequency methods, and between-region interactions were quantified with frequency-resolved Granger Causality (GC) analysis. Results: Most channel pairs showed increases in GC lasting from onset to offset of the SWD. While for most thalamo-thalamic pairs a dominant coupling direction was found during the complete SWD, most cortico-thalamic pairs only showed a dominant directional drive (always from cortex to thalamus) during the first 500 ms of SWD. Channel pair ctx4-rRTN showed a longer-lasting dominant cortical drive, which stopped 1.5 s prior to SWD offset. This early decrease in directional coupling was followed by an increase in directional coupling from cRTN to rRTN 1 s prior to SWD offset. For channel pairs ctx5-Po and ctx6-Po the heightened cortex->thalamus coupling remained until 1.5 s following SWD offset, while the thalamus->cortex coupling for these pairs stopped at SWD offset. Conclusion: The high directional coupling from somatosensory cortex to the thalamus at SWD onset is in good agreement with the idea of a cortical epileptic focus that initiates and entrains other brain structures into seizure activity. The decrease of cortex-to-rRTN coupling as well as the increased coupling from cRTN to rRTN preceding SWD termination demonstrate that SWD termination is a gradual process that involves both cortico-thalamic and intrathalamic processes. The rostral RTN seems to be an important resonator for SWD and relevant for their maintenance, while the cRTN might inhibit this oscillation. The somatosensory cortex seems to attempt to reinitiate SWD following its offset via its strong coupling to the posterior thalamus.
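    For readers unfamiliar with Granger causality, the toy example below shows the basic idea of testing directed coupling between two recorded signals. It is only a time-domain sketch on simulated data using statsmodels, not the frequency-resolved Granger causality pipeline applied to the LFP recordings in the study; the channel roles, lag and coupling strength are assumptions.

        # Time-domain Granger causality on simulated signals (illustrative only).
        import numpy as np
        from statsmodels.tsa.stattools import grangercausalitytests

        rng = np.random.default_rng(0)
        n = 2000
        ctx = rng.standard_normal(n)                   # stand-in "cortical" LFP
        noise = rng.standard_normal(n)
        thal = np.zeros(n)                             # stand-in "thalamic" LFP
        thal[5:] = 0.6 * ctx[:-5] + 0.4 * noise[5:]    # driven by the cortex at a lag of 5 samples

        # Column order matters: the test asks whether the 2nd column (ctx)
        # helps predict the 1st column (thal) beyond thal's own past.
        data = np.column_stack([thal, ctx])
        res = grangercausalitytests(data, maxlag=5)    # prints a summary for each lag 1..5
        f_stat, p_value, _, _ = res[5][0]["ssr_ftest"]
        print(f"ctx -> thal: F = {f_stat:.1f}, p = {p_value:.2g}")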
  • Mace, R., Jordan, F., & Holden, C. (2003). Testing evolutionary hypotheses about human biological adaptation using cross-cultural comparison. Comparative Biochemistry and Physiology A-Molecular & Integrative Physiology, 136(1), 85-94. doi:10.1016/S1095-6433(03)00019-9.

    Abstract

    Physiological data from a range of human populations living in different environments can provide valuable information for testing evolutionary hypotheses about human adaptation. By taking into account the effects of population history, phylogenetic comparative methods can help us determine whether variation results from selection due to particular environmental variables. These selective forces could even be due to cultural traits, which means that gene-culture co-evolution may be occurring. In this paper, we outline two examples of the use of these approaches to test adaptive hypotheses that explain global variation in two physiological traits: the first is lactose digestion capacity in adults, and the second is population sex-ratio at birth. We show that lower than average sex ratio at birth is associated with high fertility, and argue that global variation in sex ratio at birth has evolved as a response to the high physiological costs of producing boys in high fertility populations.
  • Magi, A., Tattini, L., Palombo, F., Benelli, M., Gialluisi, A., Giusti, B., Abbate, R., Seri, M., Gensini, G. F., Romeo, G., & Pippucci, T. (2014). H3M2: Detection of runs of homozygosity from whole-exome sequencing data. Bioinformatics, 2852-2859. doi:10.1093/bioinformatics/btu401.

    Abstract

    Motivation: Runs of homozygosity (ROH) are sizable chromosomal stretches of homozygous genotypes, ranging in length from tens of kilobases to megabases. ROHs can be relevant for population and medical genetics, playing a role in predisposition to both rare and common disorders. ROHs are commonly detected by single nucleotide polymorphism (SNP) microarrays, but attempts have been made to use whole-exome sequencing (WES) data. Currently available methods developed for the analysis of uniformly spaced SNP-array maps do not fit easily to the analysis of the sparse and non-uniform distribution of the WES target design. Results: To meet the need of an approach specifically tailored to WES data, we developed H3M2, an original algorithm based on a heterogeneous hidden Markov model that incorporates inter-marker distances to detect ROH from WES data. We evaluated the performance of H3M2 to correctly identify ROHs on synthetic chromosomes and examined its accuracy in detecting ROHs of different length (short, medium and long) from real 1000 Genomes Project data. H3M2 turned out to be more accurate than GERMLINE and PLINK, two state-of-the-art algorithms, especially in the detection of short and medium ROHs.
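    The core idea of the approach described above, a hidden Markov model whose transition probabilities depend on the physical distance between adjacent exome markers, can be illustrated with the toy Viterbi decoder below. This is a simplified sketch under assumed emission probabilities and switch rate, not the authors' H3M2 implementation.

        # Toy illustration of distance-aware ROH calling with a two-state HMM
        # (state 0 = non-ROH, state 1 = ROH). Emission probabilities and the
        # switch rate are assumptions for demonstration purposes only.
        import numpy as np

        def viterbi_roh(genotypes, positions, p_hom_in_roh=0.99,
                        p_hom_outside=0.65, switch_rate=1e-6):
            """genotypes: 1 = homozygous call, 0 = heterozygous call.
            positions: marker coordinates in base pairs, sorted ascending."""
            g = np.asarray(genotypes, dtype=int)
            pos = np.asarray(positions, dtype=float)
            # log P(genotype | state); rows = state, columns = genotype
            emit = np.log([[1 - p_hom_outside, p_hom_outside],
                           [1 - p_hom_in_roh, p_hom_in_roh]])
            n = len(g)
            score = np.full((n, 2), -np.inf)
            back = np.zeros((n, 2), dtype=int)
            score[0] = np.log(0.5) + emit[:, g[0]]
            for i in range(1, n):
                # the chance of switching state grows with inter-marker distance:
                # this is the "heterogeneous" part of the model
                p_switch = max(1e-12, 1.0 - np.exp(-switch_rate * (pos[i] - pos[i - 1])))
                trans = np.log([[1 - p_switch, p_switch],
                                [p_switch, 1 - p_switch]])
                for s in (0, 1):
                    cand = score[i - 1] + trans[:, s]
                    back[i, s] = int(np.argmax(cand))
                    score[i, s] = cand[back[i, s]] + emit[s, g[i]]
            # backtrace the most likely state path
            path = np.zeros(n, dtype=int)
            path[-1] = int(np.argmax(score[-1]))
            for i in range(n - 2, -1, -1):
                path[i] = back[i + 1, path[i + 1]]
            return path  # markers where path == 1 are assigned to a ROH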
  • Magnuson, J. S., Tanenhaus, M. K., Aslin, R. N., & Dahan, D. (2003). The time course of spoken word learning and recognition: Studies with artificial lexicons. Journal of Experimental Psychology: General, 132(2), 202-227. doi:10.1037/0096-3445.132.2.202.

    Abstract

    The time course of spoken word recognition depends largely on the frequencies of a word and its competitors, or neighbors (similar-sounding words). However, variability in natural lexicons makes systematic analysis of frequency and neighbor similarity difficult. Artificial lexicons were used to achieve precise control over word frequency and phonological similarity. Eye tracking provided time course measures of lexical activation and competition (during spoken instructions to perform visually guided tasks) both during and after word learning, as a function of word frequency, neighbor type, and neighbor frequency. Apparent shifts from holistic to incremental competitor effects were observed in adults and neural network simulations, suggesting such shifts reflect general properties of learning rather than changes in the nature of lexical representations.
  • Magyari, L., Bastiaansen, M. C. M., De Ruiter, J. P., & Levinson, S. C. (2014). Early anticipation lies behind the speed of response in conversation. Journal of Cognitive Neuroscience, 26(11), 2530-2539. doi:10.1162/jocn_a_00673.

    Abstract

    RTs in conversation, with average gaps of 200 msec and often less, beat standard RTs, despite the complexity of the response and the lag in speech production (600 msec or more). This can only be achieved by anticipation of the timing and content of turns in conversation, about which little is known. Using EEG and an experimental task with conversational stimuli, we show that estimation of turn durations is based on anticipating the way the turn would be completed. We found a neuronal correlate of turn-end anticipation localized in the ACC and inferior parietal lobule, namely a beta-frequency desynchronization as early as 1250 msec before the end of the turn. We suggest that anticipation of the other's utterance leads to accurately timed transitions in everyday conversations.
  • Magyari, L. (2003). Mit ne gondoljunk az állatokról? [What not to think about animals?] [Review of the book Wild Minds: What animals really think by M. Hauser]. Magyar Pszichológiai Szemle (Hungarian Psychological Review), 58(3), 417-424. doi:10.1556/MPSzle.58.2003.3.5.
  • Magyari, L. (2004). Nyelv és/vagy evolúció? [Language and/or evolution?] [Book review]. Magyar Pszichológiai Szemle (Hungarian Psychological Review), 59(4), 591-607. doi:10.1556/MPSzle.59.2004.4.7.

    Abstract

    Language and/or evolution: Is an evolutionary explanation of language possible? [Derek Bickerton: Nyelv és evolúció] (Magyari Lilla); A historical reader on the brain [Charles G. Gross: Agy, látás, emlékezet. Mesék az idegtudomány történetéből] (Garab Edit Anna); Art or science [Margitay Tihamér: Az érvelés mestersége. Érvelések elemzése, értékelése és kritikája] (Zemplén Gábor); Are we really rational? [Herbert Simon: Az ésszerűség szerepe az emberi életben] (Kardos Péter); Sex differences in cognition [Doreen Kimura: Női agy, férfi agy] (Hahn Noémi).
  • Majid, A. (2004). Out of context. The Psychologist, 17(6), 330-330.
  • Majid, A. (2003). Towards behavioural genomics. The Psychologist, 16(6), 298-298.
  • Majid, A. (2004). Data elicitation methods. Language Archive Newsletter, 1(2), 6-6.
  • Majid, A. (2004). Developing clinical understanding. The Psychologist, 17, 386-387.
  • Majid, A. (2004). Coned to perfection. The Psychologist, 17(7), 386-386.
  • Majid, A., Bowerman, M., Kita, S., Haun, D. B. M., & Levinson, S. C. (2004). Can language restructure cognition? The case for space. Trends in Cognitive Sciences, 8(3), 108-114. doi:10.1016/j.tics.2004.01.003.

    Abstract

    Frames of reference are coordinate systems used to compute and specify the location of objects with respect to other objects. These have long been thought of as innate concepts, built into our neurocognition. However, recent work shows that the use of such frames in language, cognition and gesture varies crossculturally, and that children can acquire different systems with comparable ease. We argue that language can play a significant role in structuring, or restructuring, a domain as fundamental as spatial cognition. This suggests we need to rethink the relation between the neurocognitive underpinnings of spatial cognition and the concepts we use in everyday thinking, and, more generally, to work out how to account for cross-cultural cognitive diversity in core cognitive domains.
  • Majid, A. (2004). An integrated view of cognition [Review of the book Rethinking implicit memory ed. by J. S. Bowers and C. J. Marsolek]. The Psychologist, 17(3), 148-149.
  • Majid, A. (2004). [Review of the book The new handbook of language and social psychology ed. by W. Peter Robinson and Howard Giles]. Language and Society, 33(3), 429-433.
  • Majid, A. (2003). Into the deep. The Psychologist, 16(6), 300-300.
  • Majid, A., & Burenhult, N. (2014). Odors are expressible in language, as long as you speak the right language. Cognition, 130(2), 266-270. doi:10.1016/j.cognition.2013.11.004.

    Abstract

    From Plato to Pinker there has been the common belief that the experience of a smell is impossible to put into words. Decades of studies have confirmed this observation. But the studies to date have focused on participants from urbanized Western societies. Cross-cultural research suggests that there may be other cultures where odors play a larger role. The Jahai of the Malay Peninsula are one such group. We tested whether Jahai speakers could name smells as easily as colors in comparison to a matched English group. Using a free naming task we show on three different measures that Jahai speakers find it as easy to name odors as colors, whereas English speakers struggle with odor naming. Our findings show that the long-held assumption that people are bad at naming smells is not universally true. Odors are expressible in language, as long as you speak the right language.
  • Malt, B. C., Ameel, E., Imai, M., Gennari, S., Saji, N., & Majid, A. (2014). Human locomotion in languages: Constraints on moving and meaning. Journal of Memory and Language, 74, 107-123. doi:10.1016/j.jml.2013.08.003.

    Abstract

    The distinctions between red and yellow or arm and hand may seem self-evident to English speakers, but they are not: Languages differ in the named distinctions they make. To help understand what constrains word meaning and how variation arises, we examined name choices in English, Dutch, Spanish, and Japanese for 36 instances of human locomotion. Naming patterns showed commonalities largely interpretable in terms of perceived physical similarities among the instances. There was no evidence for languages jointly ignoring salient physical distinctions to build meaning on other bases, nor for a shift in the basis of word meanings between parts of the domain of more vs. less importance to everyday life. Overall, the languages differed most notably in how many named distinctions they made, a form of variation that may be linked to linguistic typology. These findings, considered along with naming patterns from other domains, suggest recurring principles of constraint and variation across domains.
  • Mangione-Smith, R., Elliott, M. N., Stivers, T., McDonald, L., Heritage, J., & McGlynn, E. A. (2004). Racial/ethnic variation in parent expectations for antibiotics: Implications for public health campaigns. Pediatrics, 113(5), 385-394.
  • Mangione-Smith, R., Stivers, T., Elliott, M. N., McDonald, L., & Heritage, J. (2003). Online commentary during the physical examination: A communication tool for avoiding inappropriate antibiotic prescribing? Social Science and Medicine, 56(2), 313-320.
  • Mani, N., & Huettig, F. (2014). Word reading skill predicts anticipation of upcoming spoken language input: A study of children developing proficiency in reading. Journal of Experimental Child Psychology, 126, 264-279. doi:10.1016/j.jecp.2014.05.004.

    Abstract

    Despite the efficiency with which language users typically process spoken language, a growing body of research finds substantial individual differences in both the speed and accuracy of spoken language processing, potentially attributable to participants’ literacy skills. Against this background, the current study examines the role of word reading skill in listeners’ anticipation of upcoming spoken language input in children at the cusp of learning to read: if reading skills impact predictive language processing, then children at this stage of literacy acquisition should be most susceptible to the effects of reading skills on spoken language processing. We tested 8-year-old children on their prediction of upcoming spoken language input in an eye-tracking task. While children, as in previous studies to date, were able to anticipate upcoming spoken language input, there was a strong positive correlation between children’s word reading skills (but not their pseudo-word reading, meta-phonological awareness, or spoken word recognition skills) and their prediction skills. We suggest that these findings are most compatible with the notion that the process of learning orthographic representations during reading acquisition sharpens pre-existing lexical representations, which in turn also supports anticipation of upcoming spoken words.
  • Marcus, G. F., & Fisher, S. E. (2003). FOXP2 in focus: What can genes tell us about speech and language? Trends in Cognitive Sciences, 7, 257-262. doi:10.1016/S1364-6613(03)00104-9.

    Abstract

    The human capacity for acquiring speech and language must derive, at least in part, from the genome. In 2001, a study described the first case of a gene, FOXP2, which is thought to be implicated in our ability to acquire spoken language. In the present article, we discuss how this gene was discovered, what it might do, how it relates to other genes, and what it could tell us about the nature of speech and language development. We explain how FOXP2 could, without being specific to the brain or to our own species, still provide an invaluable entry-point into understanding the genetic cascades and neural pathways that contribute to our capacity for speech and language.
  • Marlow, A. J., Fisher, S. E., Francks, C., MacPhie, I. L., Cherny, S. S., Richardson, A. J., Talcott, J. B., Stein, J. F., Monaco, A. P., & Cardon, L. R. (2003). Use of multivariate linkage analysis for dissection of a complex cognitive trait. American Journal of Human Genetics, 72(3), 561-570. doi:10.1086/368201.

    Abstract

    Replication of linkage results for complex traits has been exceedingly difficult, owing in part to the inability to measure the precise underlying phenotype, small sample sizes, genetic heterogeneity, and statistical methods employed in analysis. Often, in any particular study, multiple correlated traits have been collected, yet these have been analyzed independently or, at most, in bivariate analyses. Theoretical arguments suggest that full multivariate analysis of all available traits should offer more power to detect linkage; however, this has not yet been evaluated on a genomewide scale. Here, we conduct multivariate genomewide analyses of quantitative-trait loci that influence reading- and language-related measures in families affected with developmental dyslexia. The results of these analyses are substantially clearer than those of previous univariate analyses of the same data set, helping to resolve a number of key issues. These outcomes highlight the relevance of multivariate analysis for complex disorders for dissection of linkage results in correlated traits. The approach employed here may aid positional cloning of susceptibility genes in a wide spectrum of complex traits.
  • Martin, A. E., Nieuwland, M. S., & Carreiras, M. (2014). Agreement attraction during comprehension of grammatical sentences: ERP evidence from ellipsis. Brain and Language, 135, 42-51. doi:10.1016/j.bandl.2014.05.001.

    Abstract

    Successful dependency resolution during language comprehension relies on accessing certain representations in memory, and not others. We recently reported event-related potential (ERP) evidence that syntactically unavailable, intervening attractor-nouns interfered during comprehension of Spanish noun-phrase ellipsis (the determiner otra/otro): grammatically correct determiners that mismatched the gender of attractor-nouns elicited a sustained negativity as also observed for incorrect determiners (Martin, Nieuwland, & Carreiras, 2012). The current study sought to extend this novel finding in sentences containing object-extracted relative clauses, where the antecedent may be less prominent. Whereas correct determiners that matched the gender of attractor-nouns now elicited an early anterior negativity as also observed for mismatching determiners, the previously reported interaction pattern was replicated in P600 responses to subsequent words. Our results suggest that structural and gender information is simultaneously taken into account, providing further evidence for retrieval interference during comprehension of grammatical sentences.
  • Martin, A. E., & McElree, B. (2009). Memory operations that support language comprehension: Evidence from verb-phrase ellipsis. Journal of Experimental Psychology: Learning, Memory, and Cognition, 35(5), 1231-1239. doi:10.1037/a0016271.

    Abstract

    Comprehension of verb-phrase ellipsis (VPE) requires reevaluation of recently processed constituents, which often necessitates retrieval of information about the elided constituent from memory. A. E. Martin and B. McElree (2008) argued that representations formed during comprehension are content addressable and that VPE antecedents are retrieved from memory via a cue-dependent direct-access pointer rather than via a search process. This hypothesis was further tested by manipulating the location of interfering material—either before the onset of the antecedent (proactive interference; PI) or intervening between antecedent and ellipsis site (retroactive interference; RI). The speed–accuracy tradeoff procedure was used to measure the time course of VPE processing. The location of the interfering material affected VPE comprehension accuracy: RI conditions engendered lower accuracy than PI conditions. Crucially, location did not affect the speed of processing VPE, which is inconsistent with both forward and backward search mechanisms. The observed time-course profiles are consistent with the hypothesis that VPE antecedents are retrieved via a cue-dependent direct-access operation.
  • Massaro, D. W., & Jesse, A. (2009). Read my lips: Speech distortions in musical lyrics can be overcome (slightly) by facial information. Speech Communication, 51(7), 604-621. doi:10.1016/j.specom.2008.05.013.

    Abstract

    Understanding the lyrics of many contemporary songs is difficult, and an earlier study [Hidalgo-Barnes, M., Massaro, D.W., 2007. Read my lips: an animated face helps communicate musical lyrics. Psychomusicology 19, 3–12] showed a benefit for lyrics recognition when seeing a computer-animated talking head (Baldi) mouthing the lyrics along with hearing the singer. However, the contribution of visual information was relatively small compared to what is usually found for speech. In the current experiments, our goal was to determine why the face appears to contribute less when aligned with sung lyrics than when aligned with normal speech presented in noise. The first experiment compared the contribution of the talking head with the originally sung lyrics versus the case when it was aligned with the Festival text-to-speech synthesis (TtS) spoken at the original duration of the song’s lyrics. A small and similar influence of the face was found in both conditions. In the three experiments, we compared the presence of the face when the durations of the TtS were equated with the duration of the original musical lyrics to the case when the lyrics were read with typical TtS durations and this speech was embedded in noise. The results indicated that the unusual temporally distorted durations of musical lyrics decrease the contribution of the visible speech from the face.
  • Matic, D., & Nikolaeva, I. (2014). Realis mood, focus, and existential closure in Tundra Yukaghir. Lingua, 150, 202-231. doi:10.1016/j.lingua.2014.07.016.

    Abstract

    The nature and the typological validity of the categories ‘realis’ and ‘irrealis’ have been a matter of intensive debate. In this paper we analyse the realis/irrealis dichotomy in Tundra Yukaghir (isolate, north-eastern Siberia), and show that in this language realis is associated with a meaningful contribution, namely, existential quantification over events. This contribution must be expressed overtly by a combination of syntactic and prosodic means. Irrealis is the default category: the clause is interpreted as irrealis in the absence of the marker of realis. This implies that the relevant typological question may turn out to be the semantics of realis, rather than irrealis. We further argue that the Tundra Yukaghir realis is a hybrid category composed of elements from different domains (information structure, lexical semantics, and quantification) unified at the level of interpretation via pragmatic enrichment. The concept of notional mood must therefore be expanded to include moods which come about in interpretation and do not constitute a discrete denotation.
  • Mattys, S. L., & Scharenborg, O. (2014). Phoneme categorization and discrimination in younger and older adults: A comparative analysis of perceptual, lexical, and attentional factors. Psychology and Aging, 29(1), 150-162. doi:10.1037/a0035387.

    Abstract

    This study investigates the extent to which age-related language processing difficulties are due to a decline in sensory processes or to a deterioration of cognitive factors, specifically, attentional control. Two facets of attentional control were examined: inhibition of irrelevant information and divided attention. Younger and older adults were asked to categorize the initial phoneme of spoken syllables (“Was it m or n?”), trying to ignore the lexical status of the syllables. The phonemes were manipulated to range in eight steps from m to n. Participants also did a discrimination task on syllable pairs (“Were the initial sounds the same or different?”). Categorization and discrimination were performed under either divided attention (concurrent visual-search task) or focused attention (no visual task). The results showed that even when the younger and older adults were matched on their discrimination scores: (1) the older adults had more difficulty inhibiting lexical knowledge than did younger adults, (2) divided attention weakened lexical inhibition in both younger and older adults, and (3) divided attention impaired sound discrimination more in older than younger listeners. The results confirm the independent and combined contribution of sensory decline and deficit in attentional control to language processing difficulties associated with aging. The relative weight of these variables and their mechanisms of action are discussed in the context of theories of aging and language.
  • Mazuka, R., Hasegawa, M., & Tsuji, S. (2014). Development of non-native vowel discrimination: Improvement without exposure. Developmental Psychobiology, 56(2), 192-209. doi:10.1002/dev.21193.

    Abstract

    The present study tested Japanese 4.5- and 10-month-old infants' ability to discriminate three German vowel pairs, none of which are contrastive in Japanese, using a visual habituation–dishabituation paradigm. Japanese adults' discrimination of the same pairs was also tested. The results revealed that Japanese 4.5-month-old infants discriminated the German /bu:k/-/by:k/ contrast, but they showed no evidence of discriminating the /bi:k/-/be:k/ or /bu:k/-/bo:k/ contrasts. Japanese 10-month-old infants, on the other hand, discriminated the German /bi:k/-/be:k/ contrast, while they showed no evidence of discriminating the /bu:k/-/by:k/ or /bu:k/-/bo:k/ contrasts. Japanese adults, in contrast, were highly accurate in their discrimination of all of the pairs. The results indicate that discrimination of non-native contrasts is not always easy even for young infants, and that their ability to discriminate non-native contrasts can improve with age even when they receive no exposure to a language in which the given contrast is phonemic.
  • McQueen, J. M. (2003). The ghost of Christmas future: Didn't Scrooge learn to be good? Commentary on Magnuson, McMurray, Tanenhaus and Aslin (2003). Cognitive Science, 27(5), 795-799. doi:10.1207/s15516709cog2705_6.

    Abstract

    Magnuson, McMurray, Tanenhaus, and Aslin [Cogn. Sci. 27 (2003) 285] suggest that they have evidence of lexical feedback in speech perception, and that this evidence thus challenges the purely feedforward Merge model [Behav. Brain Sci. 23 (2000) 299]. This evidence is open to an alternative explanation, however, one which preserves the assumption in Merge that there is no lexical-prelexical feedback during on-line speech processing. This explanation invokes the distinction between perceptual processing that occurs in the short term, as an utterance is heard, and processing that occurs over the longer term, for perceptual learning.
  • McQueen, J. M., & Huettig, F. (2014). Interference of spoken word recognition through phonological priming from visual objects and printed words. Attention, Perception & Psychophysics, 76, 190-200. doi:10.3758/s13414-013-0560-8.

    Abstract

    Three cross-modal priming experiments examined the influence of pre-exposure to pictures and printed words on the speed of spoken word recognition. Targets for auditory lexical decision were spoken Dutch words and nonwords, presented in isolation (Experiments 1 and 2) or after a short phrase (Experiment 3). Auditory stimuli were preceded by primes which were pictures (Experiments 1 and 3) or those pictures’ printed names (Experiment 2). Prime-target pairs were phonologically onset-related (e.g., pijl-pijn, arrow-pain), were from the same semantic category (e.g., pijl-zwaard, arrow-sword), or were unrelated on both dimensions. Phonological interference and semantic facilitation were observed in all experiments. Priming magnitude was similar for pictures and printed words, and did not vary with picture viewing time or number of pictures in the display (either one or four). These effects arose even though participants were not explicitly instructed to name the pictures and even though strategic naming would interfere with lexical decision-making. This suggests that, by default, processing of related pictures and printed words influences how quickly we recognize related spoken words.
  • McQueen, J. M., Cutler, A., & Norris, D. (2003). Flow of information in the spoken word recognition system. Speech Communication, 41(1), 257-270. doi:10.1016/S0167-6393(02)00108-5.

    Abstract

    Spoken word recognition consists of two major component processes. First, at the prelexical stage, an abstract description of the utterance is generated from the information in the speech signal. Second, at the lexical stage, this description is used to activate all the words stored in the mental lexicon which match the input. These multiple candidate words then compete with each other. We review evidence which suggests that positive (match) and negative (mismatch) information of both a segmental and a suprasegmental nature is used to constrain this activation and competition process. We then ask whether, in addition to the necessary influence of the prelexical stage on the lexical stage, there is also feedback from the lexicon to the prelexical level. In two phonetic categorization experiments, Dutch listeners were asked to label both syllable-initial and syllable-final ambiguous fricatives (e.g., sounds ranging from [f] to [s]) in the word–nonword series maf–mas, and the nonword–word series jaf–jas. They tended to label the sounds in a lexically consistent manner (i.e., consistent with the word endpoints of the series). These lexical effects became smaller in listeners’ slower responses, even when the listeners were put under pressure to respond as fast as possible. Our results challenge models of spoken word recognition in which feedback modulates the prelexical analysis of the component sounds of a word whenever that word is heard.
  • McQueen, J. M., Jesse, A., & Norris, D. (2009). No lexical–prelexical feedback during speech perception or: Is it time to stop playing those Christmas tapes? Journal of Memory and Language, 61, 1-18. doi:10.1016/j.jml.2009.03.002.

    Abstract

    The strongest support for feedback in speech perception comes from evidence of apparent lexical influence on prelexical fricative-stop compensation for coarticulation. Lexical knowledge (e.g., that the ambiguous final fricative of Christma? should be [s]) apparently influences perception of following stops. We argue that all such previous demonstrations can be explained without invoking lexical feedback. In particular, we show that one demonstration [Magnuson, J. S., McMurray, B., Tanenhaus, M. K., & Aslin, R. N. (2003). Lexical effects on compensation for coarticulation: The ghost of Christmash past. Cognitive Science, 27, 285–298] involved experimentally-induced biases (from 16 practice trials) rather than feedback. We found that the direction of the compensation effect depended on whether practice stimuli were words or nonwords. When both were used, there was no lexically-mediated compensation. Across experiments, however, there were lexical effects on fricative identification. This dissociation (lexical involvement in the fricative decisions but not in the following stop decisions made on the same trials) challenges interactive models in which feedback should cause both effects. We conclude that the prelexical level is sensitive to experimentally-induced phoneme-sequence biases, but that there is no feedback during speech perception.
  • Mead, S., Poulter, M., Uphill, J., Beck, J., Whitfield, J., Webb, T. E., Campbell, T., Adamson, G., Deriziotis, P., Tabrizi, S. J., Hummerich, H., Verzilli, C., Alpers, M. P., Whittaker, J. C., & Collinge, J. (2009). Genetic risk factors for variant Creutzfeldt-Jakob disease: A genome-wide association study. Lancet Neurology, 8(1), 57-66. doi:10.1016/S1474-4422(08)70265-5.

    Abstract

    BACKGROUND: Human and animal prion diseases are under genetic control, but apart from PRNP (the gene that encodes the prion protein), we understand little about human susceptibility to bovine spongiform encephalopathy (BSE) prions, the causal agent of variant Creutzfeldt-Jakob disease (vCJD). METHODS: We did a genome-wide association study of the risk of vCJD and tested for replication of our findings in samples from many categories of human prion disease (929 samples) and control samples from the UK and Papua New Guinea (4254 samples), including controls in the UK who were genotyped by the Wellcome Trust Case Control Consortium. We also did follow-up analyses of the genetic control of the clinical phenotype of prion disease and analysed candidate gene expression in a mouse cellular model of prion infection. FINDINGS: The PRNP locus was strongly associated with risk across several markers and all categories of prion disease (best single SNP [single nucleotide polymorphism] association in vCJD p=2.5 x 10^-17; best haplotypic association in vCJD p=1 x 10^-24). Although the main contribution to disease risk was conferred by PRNP polymorphic codon 129, another nearby SNP conferred increased risk of vCJD. In addition to PRNP, one technically validated SNP association upstream of RARB (the gene that encodes retinoic acid receptor beta) had nominal genome-wide significance (p=1.9 x 10^-7). A similar association was found in a small sample of patients with iatrogenic CJD (p=0.030) but not in patients with sporadic CJD (sCJD) or kuru. In cultured cells, retinoic acid regulates the expression of the prion protein. We found an association with acquired prion disease, including vCJD (p=5.6 x 10^-5), kuru incubation time (p=0.017), and resistance to kuru (p=2.5 x 10^-4), in a region upstream of STMN2 (the gene that encodes SCG10). The risk genotype was not associated with sCJD but conferred an earlier age of onset. Furthermore, expression of Stmn2 was reduced 30-fold post-infection in a mouse cellular model of prion disease. INTERPRETATION: The polymorphic codon 129 of PRNP was the main genetic risk factor for vCJD; however, additional candidate loci have been identified, which justifies functional analyses of these biological pathways in prion disease.
  • Meeuwissen, M., Roelofs, A., & Levelt, W. J. M. (2003). Planning levels in naming and reading complex numerals. Memory & Cognition, 31(8), 1238-1249.

    Abstract

    On the basis of evidence from studies of the naming and reading of numerals, Ferrand (1999) argued that the naming of objects is slower than reading their names, due to a greater response uncertainty in naming than in reading, rather than to an obligatory conceptual preparation for naming, but not for reading. We manipulated the need for conceptual preparation, while keeping response uncertainty constant in the naming and reading of complex numerals. In Experiment 1, participants named three-digit Arabic numerals either as house numbers or clock times. House number naming latencies were determined mostly by morphophonological factors, such as morpheme frequency and the number of phonemes, whereas clock time naming latencies revealed an additional conceptual involvement. In Experiment 2, the numerals were presented in alphabetic format and had to be read aloud. Reading latencies were determined mostly by morphophonological factors in both modes. These results suggest that conceptual preparation, rather than response uncertainty, is responsible for the difference between naming and reading latencies.
  • Meeuwissen, M., Roelofs, A., & Levelt, W. J. M. (2004). Naming analog clocks conceptually facilitates naming digital clocks. Brain and Language, 90(1-3), 434-440. doi:10.1016/S0093-934X(03)00454-1.

    Abstract

    This study investigates how speakers of Dutch compute and produce relative time expressions. Naming digital clocks (e.g., 2:45, say “quarter to three”) requires conceptual operations on the minute and hour information for the correct relative time expression. The interplay of these conceptual operations was investigated using a repetition priming paradigm. Participants named analog clocks (the primes) directly before naming digital clocks (the targets). The targets referred to the hour (e.g., 2:00), half past the hour (e.g., 2:30), or the coming hour (e.g., 2:45). The primes differed from the target in one or two hour and in five or ten minutes. Digital clock naming latencies were shorter with a five- than with a ten-min difference between prime and target, but the difference in hour had no effect. Moreover, the distance in minutes had only an effect for half past the hour and the coming hour, but not for the hour. These findings suggest that conceptual facilitation occurs when conceptual transformations are shared between prime and target in telling time.
  • Mehta, G., & Cutler, A. (1988). Detection of target phonemes in spontaneous and read speech. Language and Speech, 31, 135-156.

    Abstract

    Although spontaneous speech occurs more frequently in most listeners’ experience than read speech, laboratory studies of human speech recognition typically use carefully controlled materials read from a script. The phonological and prosodic characteristics of spontaneous and read speech differ considerably, however, which suggests that laboratory results may not generalize to the recognition of spontaneous speech. In the present study, listeners were presented with both spontaneous and read speech materials, and their response time to detect word-initial target phonemes was measured. Responses were, overall, equally fast in each speech mode. However, analysis of effects previously reported in phoneme detection studies revealed significant differences between speech modes. In read speech but not in spontaneous speech, later targets were detected more rapidly than earlier targets, and targets preceded by long words were detected more rapidly than targets preceded by short words. In contrast, in spontaneous speech but not in read speech, targets were detected more rapidly in accented than in unaccented words and in strong than in weak syllables. An explanation for this pattern is offered in terms of characteristic prosodic differences between spontaneous and read speech. The results support claims from previous work that listeners pay great attention to prosodic information in the process of recognizing speech.
  • Melinger, A., & Levelt, W. J. M. (2004). Gesture and the communicative intention of the speaker. Gesture, 4(2), 119-141.

    Abstract

    This paper aims to determine whether iconic tracing gestures produced while speaking constitute part of the speaker’s communicative intention. We used a picture description task in which speakers must communicate the spatial and color information of each picture to an interlocutor. By establishing the necessary minimal content of an intended message, we determined whether speech produced with concurrent gestures is less explicit than speech without gestures. We argue that a gesture must be communicatively intended if it expresses necessary information that was nevertheless omitted from speech. We found that speakers who produced iconic gestures representing spatial relations omitted more required spatial information from their descriptions than speakers who did not gesture. These results provide evidence that speakers intend these gestures to communicate. The results have implications for the cognitive architectures that underlie the production of gesture and speech.
  • Menenti, L., Petersson, K. M., Scheeringa, R., & Hagoort, P. (2009). When elephants fly: Differential sensitivity of right and left inferior frontal gyri to discourse and world knowledge. Journal of Cognitive Neuroscience, 21, 2358-2368. doi:10.1162/jocn.2008.21163.

    Abstract

    Both local discourse and world knowledge are known to influence sentence processing. We investigated how these two sources of information conspire in language comprehension. Two types of critical sentences, correct and world knowledge anomalies, were preceded by either a neutral or a local context. The latter made the world knowledge anomalies more acceptable or plausible. We predicted that the effect of world knowledge anomalies would be weaker for the local context. World knowledge effects have previously been observed in the left inferior frontal region (Brodmann's area 45/47). In the current study, an effect of world knowledge was present in this region in the neutral context. We also observed an effect in the right inferior frontal gyrus, which was more sensitive to the discourse manipulation than the left inferior frontal gyrus. In addition, the left angular gyrus reacted strongly to the degree of discourse coherence between the context and critical sentence. Overall, both world knowledge and the discourse context affect the process of meaning unification, but do so by recruiting partly different sets of brain areas.
  • Menon, S., Rosenberg, K., Graham, S. A., Ward, E. M., Taylor, M. E., Drickamer, K., & Leckband, D. E. (2009). Binding-site geometry and flexibility in DC-SIGN demonstrated with surface force measurements. Proceedings of the National Academy of Sciences of the United States of America, 106, 11524-11529. doi:10.1073/pnas.0901783106.

    Abstract

    The dendritic cell receptor DC-SIGN mediates pathogen recognition by binding to glycans characteristic of pathogen surfaces, including those found on HIV. Clustering of carbohydrate-binding sites in the receptor tetramer is believed to be critical for targeting of pathogen glycans, but the arrangement of these sites remains poorly understood. Surface force measurements between apposed lipid bilayers displaying the extracellular domain of DC-SIGN and a neoglycolipid bearing an oligosaccharide ligand provide evidence that the receptor is in an extended conformation and that glycan docking is associated with a conformational change that repositions the carbohydrate-recognition domains during ligand binding. The results further show that the lateral mobility of membrane-bound ligands enhances the engagement of multiple carbohydrate-recognition domains in the receptor oligomer with appropriately spaced ligands. These studies highlight differences between pathogen targeting by DC-SIGN and receptors in which binding sites at fixed spacing bind to simple molecular patterns.

    Additional information

    Menon_2009_Supporting_Information.pdf
  • Meulenbroek, O., Petersson, K. M., Voermans, N., Weber, B., & Fernández, G. (2004). Age differences in neural correlates of route encoding and route recognition. Neuroimage, 22, 1503-1514. doi:10.1016/j.neuroimage.2004.04.007.

    Abstract

    Spatial memory deficits are core features of aging-related changes in cognitive abilities. The neural correlates of these deficits are largely unknown. In the present study, we investigated the neural underpinnings of age-related differences in spatial memory by functional MRI using a navigational memory task with route encoding and route recognition conditions. We investigated 20 healthy young (18 - 29 years old) and 20 healthy old adults (53 - 78 years old) in a random effects analysis. Old subjects showed slightly poorer performance than young subjects. Compared to the control condition, route encoding and route recognition showed activation of the dorsal and ventral visual processing streams and the frontal eye fields in both groups of subjects. Compared to old adults, young subjects showed during route encoding stronger activations in the dorsal and the ventral visual processing stream (supramarginal gyrus and posterior fusiform/parahippocampal areas). In addition, young subjects showed weaker anterior parahippocampal activity during route recognition compared to the old group. In contrast, old compared to young subjects showed less suppressed activity in the left perisylvian region and the anterior cingulate cortex during route encoding. Our findings suggest that age-related navigational memory deficits might be caused by less effective route encoding based on reduced posterior fusiform/parahippocampal and parietal functionality combined with diminished inhibition of perisylvian and anterior cingulate cortices correlated with less effective suppression of task-irrelevant information. In contrast, age differences in neural correlates of route recognition seem to be rather subtle. Old subjects might show a diminished familiarity signal during route recognition in the anterior parahippocampal region.
