Publications

  • Hulten, A., Laaksonen, H., Vihla, M., Laine, M., & Salmelin, R. (2010). Modulation of brain activity after learning predicts long-term memory for words. Journal of Neuroscience, 30(45), 15160-15164. doi:10.1523/JNEUROSCI.1278-10.2010.

    Abstract

    The acquisition and maintenance of new language information, such as picking up new words, is a critical human ability that is needed throughout the life span. Most likely you learned the word “blog” quite recently as an adult, whereas the word “kipe,” which in the 1970s denoted stealing, now seems unfamiliar. Brain mechanisms underlying the long-term maintenance of new words have remained unknown, albeit they could provide important clues to the considerable individual differences in the ability to remember words. After successful training of a set of novel object names we tracked, over a period of 10 months, the maintenance of this new vocabulary in 10 human participants by repeated behavioral tests and magnetoencephalography measurements of overt picture naming. When naming-related activation in the left frontal and temporal cortex was enhanced 1 week after training, compared with the level at the end of training, the individual retained a good command of the new vocabulary at 10 months; vice versa, individuals with reduced activation at 1 week posttraining were less successful in recalling the names at 10 months. This finding suggests an individual neural marker for memory, in the context of language. Learning is not over when the acquisition phase has been successfully completed: neural events during the access to recently established word representations appear to be important for the long-term outcome of learning.
  • Indefrey, P., & Levelt, W. J. M. (2004). The spatial and temporal signatures of word production components. Cognition, 92(1-2), 101-144. doi:10.1016/j.cognition.2002.06.001.

    Abstract

    This paper presents the results of a comprehensive meta-analysis of the relevant imaging literature on word production (82 experiments). In addition to the spatial overlap of activated regions, we also analyzed the available data on the time course of activations. The analysis specified regions and time windows of activation for the core processes of word production: lexical selection, phonological code retrieval, syllabification, and phonetic/articulatory preparation. A comparison of the word production results with studies on auditory word/non-word perception and reading showed that the time course of activations in word production is, on the whole, compatible with the temporal constraints that perception processes impose on the production processes they affect in picture/word interference paradigms.
  • Indefrey, P., Hellwig, F. M., Herzog, H., Seitz, R. J., & Hagoort, P. (2004). Neural responses to the production and comprehension of syntax in identical utterances. Brain and Language, 89(2), 312-319. doi:10.1016/S0093-934X(03)00352-3.

    Abstract

    Following up on an earlier positron emission tomography (PET) experiment (Indefrey et al., 2001), we used a scene description paradigm to investigate whether a posterior inferior frontal region subserving syntactic encoding for speaking is also involved in syntactic parsing during listening. In the language production part of the experiment, subjects described visually presented scenes using either sentences, sequences of noun phrases, or sequences of syntactically unrelated words. In the language comprehension part of the experiment, subjects were auditorily presented with the same kinds of utterances and judged whether they matched the visual scenes. We were able to replicate the previous finding of a region in caudal Broca's area that is sensitive to the complexity of syntactic encoding in language production. In language comprehension, no hemodynamic activation differences due to syntactic complexity were found. Given that correct performance in the judgment task did not require syntactic processing of the auditory stimuli, the results suggest that the degree to which listeners recruit syntactic processing resources in language comprehension may be a function of the syntactic demands of the task or the stimulus material.
  • Indefrey, P., & Gullberg, M. (2010). Foreword. Language Learning, 60(S2), v. doi:10.1111/j.1467-9922.2010.00596.x.

    Abstract

    The articles in this volume are the result of an invited conference entitled "The Earliest Stages of Language Learning" held at the Max Planck Institute for Psycholinguistics in Nijmegen, The Netherlands, in October 2009.
  • Indefrey, P., & Gullberg, M. (2010). The earliest stages of language learning: Introduction. Language Learning, 60(S2), 1-4. doi:10.1111/j.1467-9922.2010.00597.x.
  • Ingason, A., Giegling, I., Cichon, S., Hansen, T., Rasmussen, H. B., Nielsen, J., Jurgens, G., Muglia, P., Hartmann, A. M., Strengman, E., Vasilescu, C., Muhleisen, T. W., Djurovic, S., Melle, I., Lerer, B., Möller, H.-J., Francks, C., Pietilainen, O. P. H., Lonnqvist, J., Suvisaari, J., Tuulio-Henriksson, A., Walshe, M., Vassos, E., Di Forti, M., Murray, R., Bonetto, C., Tosato, S., Cantor, R. M., Rietschel, M., Craddock, N., Owen, M. J., Andreassen, O. A., Nothen, M. M., Peltonen, L., St. Clair, D., Ophoff, R. A., O’Donovan, M. C., Collier, D. A., Werge, T., & Rujescu, D. (2010). A large replication study and meta-analysis in European samples provides further support for association of AHI1 markers with schizophrenia. Human Molecular Genetics, 19(7), 1379-1386. doi:10.1093/hmg/ddq009.

    Abstract

    The Abelson helper integration site 1 (AHI1) gene locus on chromosome 6q23 is among a group of candidate loci for schizophrenia susceptibility that were initially identified by linkage followed by linkage disequilibrium mapping, and subsequent replication of the association in an independent sample. Here, we present results of a replication study of AHI1 locus markers, previously implicated in schizophrenia, in a large European sample (in total 3907 affected and 7429 controls). Furthermore, we perform a meta-analysis of the implicated markers in 4496 affected and 18,920 controls. Both the replication study of new samples and the meta-analysis show evidence for significant overrepresentation of all tested alleles in patients compared with controls (meta-analysis: P = 8.2 × 10^(-5) to 1.7 × 10^(-3), common OR = 1.09-1.11). The region contains two genes, AHI1 and C6orf217, and both genes, as well as the neighbouring phosphodiesterase 7B (PDE7B), may be considered candidates for involvement in the genetic aetiology of schizophrenia.
  • Ischebeck, A., Indefrey, P., Usui, N., Nose, I., Hellwig, F. M., & Taira, M. (2004). Reading in a regular orthography: An fMRI study investigating the role of visual familiarity. Journal of Cognitive Neuroscience, 16(5), 727-741. doi:10.1162/089892904970708.

    Abstract

    In order to separate the cognitive processes associated with phonological encoding and the use of a visual word form lexicon in reading, it is desirable to compare the processing of words presented in a visually familiar form with words in a visually unfamiliar form. Japanese Kana orthography offers this possibility. Two phonologically equivalent but visually dissimilar syllabaries allow the writing of, for example, foreign loanwords in two ways, only one of which is visually familiar. Familiarly written words, unfamiliarly written words, and pseudowords were presented in both Kana syllabaries (yielding six conditions in total) to participants during an fMRI measurement with a silent articulation task (Experiment 1) and a phonological lexical decision task (Experiment 2) using an event-related design. Consistent over two experimental tasks, the three different stimulus types (familiar, unfamiliar, and pseudoword) were found to activate selectively different brain regions previously associated with phonological encoding and word retrieval or meaning. Compatible with the predictions of the dual-route model for reading, pseudowords and visually unfamiliar words, which have to be read using phonological assembly, caused an increase in brain activity in left inferior frontal regions (BA 44/47), as compared to visually familiar words. Visually familiar and unfamiliar words were found to activate a range of areas associated with lexico-semantic processing more strongly than pseudowords, such as the left and right temporo-parietal region (BA 39/40), a region in the left middle/inferior temporal gyrus (BA 20/21), and the posterior cingulate (BA 31).
  • Jackson, C., & Roberts, L. (2010). Animacy affects the processing of subject–object ambiguities in the second language: Evidence from self-paced reading with German second language learners of Dutch. Applied Psycholinguistics, 31(4), 671-691. doi:10.1017/S0142716410000196.

    Abstract

    The results of a self-paced reading study with German second language (L2) learners of Dutch showed that noun animacy affected the learners' on-line commitments when comprehending relative clauses in their L2. Earlier research has found that German L2 learners of Dutch do not show an on-line preference for subject–object word order in temporarily ambiguous relative clauses when no disambiguating material is available prior to the auxiliary verb. We investigated whether manipulating the animacy of the ambiguous noun phrases would push the learners to make an on-line commitment to either a subject- or object-first analysis. Results showed they performed like Dutch native speakers in that their reading times reflected an interaction between topichood and animacy in the on-line assignment of grammatical roles.
  • Janse, E., & Klitsch, J. (2004). Auditieve perceptie bij gezonde sprekers en bij sprekers met verworven taalstoornissen [Auditory perception in healthy speakers and in speakers with acquired language disorders]. Afasiologie, 26(1), 2-6.
  • Janse, E., De Bree, E., & Brouwer, S. (2010). Decreased sensitivity to phonemic mismatch in spoken word processing in adult developmental dyslexia. Journal of Psycholinguistic Research, 39(6), 523-539. doi:10.1007/s10936-010-9150-2.

    Abstract

    Initial lexical activation in typical populations is a direct reflection of the goodness of fit between the presented stimulus and the intended target. In this study, lexical activation was investigated upon presentation of polysyllabic pseudowords (such as procodile for crocodile) for the atypical population of dyslexic adults to see to what extent mismatching phonemic information affects lexical activation in the face of overwhelming support for one specific lexical candidate. Results of an auditory lexical decision task showed that sensitivity to phonemic mismatch was less in the dyslexic population, compared to the respective control group. However, the dyslexic participants were outperformed by their controls only for word-initial mismatches. It is argued that a subtle speech decoding deficit affects lexical activation levels and makes spoken word processing less robust against distortion.
  • Janse, E. (2010). Spoken word processing and the effect of phonemic mismatch in aphasia. Aphasiology, 24(1), 3-27. doi:10.1080/02687030802339997.

    Abstract

    Background: There is evidence that, unlike in typical populations, initial lexical activation upon hearing spoken words in aphasic patients is not a direct reflection of the goodness of fit between the presented stimulus and the intended target. Earlier studies have mainly used short monosyllabic target words. Short words are relatively difficult to recognise because they are not highly redundant: changing one phoneme will often result in a (similar-sounding) different word. Aims: The present study aimed to investigate sensitivity of the lexical recognition system in aphasia. The focus was on longer words that contain more redundancy, to investigate whether aphasic adults might be impaired in deactivation of strongly activated lexical candidates. This was done by studying lexical activation upon presentation of spoken polysyllabic pseudowords (such as procodile) to see to what extent mismatching phonemic information leads to deactivation in the face of overwhelming support for one specific lexical candidate. Methods & Procedures: Speeded auditory lexical decision was used to investigate response time and accuracy to pseudowords with a word-initial or word-final phonemic mismatch in 21 aphasic patients and in an age-matched control group. Outcomes & Results: Results of an auditory lexical decision task showed that aphasic participants were less sensitive to phonemic mismatch if there was strong evidence for one particular lexical candidate, compared to the control group. Classifications of patients as Broca's vs Wernicke's or as fluent vs non-fluent did not reveal differences in sensitivity to mismatch between aphasia types. There was no reliable relationship between measures of auditory verbal short-term memory and lexical decision performance. Conclusions: It is argued that the aphasic results can best be viewed as lexical “overactivation” and that a verbal short-term memory account is less appropriate.
  • Janse, E. (2004). Word perception in fast speech: Artificially time-compressed vs. naturally produced fast speech. Speech Communication, 42, 155-173. doi:10.1016/j.specom.2003.07.001.

    Abstract

    Natural fast speech differs from normal-rate speech with respect to its temporal pattern. Previous results showed that word intelligibility of heavily artificially time-compressed speech could not be improved by making its temporal pattern more similar to that of natural fast speech. This might have been due to the extrapolation of timing rules for natural fast speech to rates that are much faster than can be attained by human speakers. The present study investigates whether, at a speech rate that human speakers can attain, artificially time-compressed speech is easier to process if its timing pattern is similar to that of naturally produced fast speech. Our first experiment suggests, however, that word processing speed was slowed down, relative to linear compression. In a second experiment, word processing of artificially time-compressed speech was compared with processing of naturally produced fast speech. Even when naturally produced fast speech is perfectly intelligible, its less careful articulation, combined with the changed timing pattern, slows down processing, relative to linearly time-compressed speech. Furthermore, listeners preferred artificially time-compressed speech over naturally produced fast speech. These results suggest that linearly time-compressed speech has both a temporal and a segmental advantage over natural fast speech.
  • Janse, E., Nooteboom, S. G., & Quené, H. (2003). Word-level intelligibility of time-compressed speech: Prosodic and segmental factors. Speech Communication, 41, 287-301. doi:10.1016/S0167-6393(02)00130-9.

    Abstract

    In this study we investigate whether speakers, in line with the predictions of the Hyper- and Hypospeech theory, speed up most during the least informative parts and less during the more informative parts, when they are asked to speak faster. We expected listeners to benefit from these changes in timing, and our main goal was to find out whether making the temporal organisation of artificially time-compressed speech more like that of natural fast speech would improve intelligibility over linear time compression. Our production study showed that speakers reduce unstressed syllables more than stressed syllables, thereby making the prosodic pattern more pronounced. We extrapolated fast speech timing to even faster rates because we expected that the more salient prosodic pattern could be exploited in difficult listening situations. However, at very fast speech rates, applying fast speech timing worsens intelligibility. We argue that the non-uniform way of speeding up may not be due to an underlying communicative principle, but may result from speakers’ inability to speed up otherwise. As both prosodic and segmental information contribute to word recognition, we conclude that extrapolating fast speech timing to extremely fast rates distorts this balance between prosodic and segmental information.
  • Jansma, B. M., & Schiller, N. O. (2004). Monitoring syllable boundaries during speech production. Brain and Language, 90(1-3), 311-317. doi:10.1016/S0093-934X(03)00443-7.

    Abstract

    This study investigated the encoding of syllable boundary information during speech production in Dutch. Based on Levelt's model of phonological encoding, we hypothesized segments and syllable boundaries to be encoded in an incremental way. In a self-monitoring experiment, decisions about the syllable affiliation (first or second syllable) of a pre-specified consonant, which was the third phoneme in a word, were required (e.g., ka.No 'canoe' vs. kaN.sel 'pulpit'; capital letters indicate pivotal consonants, dots mark syllable boundaries). First syllable responses were faster than second syllable responses, indicating the incremental nature of segmental encoding and syllabification during speech production planning. The results of the experiment are discussed in the context of Levelt's model of phonological encoding.
  • Janssen, D. P., Roelofs, A., & Levelt, W. J. M. (2004). Stem complexity and inflectional encoding in language production. Journal of Psycholinguistic Research, 33(5), 365-381. doi:10.1023/B:JOPR.0000039546.60121.a8.

    Abstract

    Three experiments are reported that examined whether stem complexity plays a role in inflecting polymorphemic words in language production. Experiment 1 showed that preparation effects for words with polymorphemic stems are larger when they are produced among words with constant inflectional structures compared to words with variable inflectional structures and simple stems. This replicates earlier findings for words with monomorphemic stems (Janssen et al., 2002). Experiments 2 and 3 showed that when inflectional structure is held constant, the preparation effects are equally large with simple and compound stems, and with compound and complex adjectival stems. These results indicate that inflectional encoding is blind to the complexity of the stem, which suggests that specific inflectional rather than generic morphological frames guide the generation of inflected forms in speaking words.
  • Janssen, C., Segers, E., McQueen, J. M., & Verhoeven, L. (2015). Lexical specificity training effects in second language learners. Language Learning, 65(2), 358-389. doi:10.1111/lang.12102.

    Abstract

    Children who start formal education in a second language may experience slower vocabulary growth in that language and subsequently experience disadvantages in literacy acquisition. The current study asked whether lexical specificity training can stimulate bilingual children's phonological awareness, which is considered to be a precursor to literacy. Therefore, Dutch monolingual and Turkish-Dutch bilingual children were taught new Dutch words with only minimal acoustic-phonetic differences. As a result of this training, the monolingual and the bilingual children improved on phoneme blending, which can be seen as an early aspect of phonological awareness. During training, the bilingual children caught up with the monolingual children on words with phonological overlap between their first language Turkish and their second language Dutch. It is concluded that learning minimal pair words fosters phoneme awareness, in both first and second language preliterate children, and that for second language learners phonological overlap between the two languages positively affects training outcomes, likely due to linguistic transfer.
  • Janzen, G., & Van Turennout, M. (2004). Selective neural representation of objects relevant for navigation. Nature Neuroscience, 7(6), 673-677. doi:10.1038/nn1257.

    Abstract

    As people find their way through their environment, objects at navigationally relevant locations can serve as crucial landmarks. The parahippocampal gyrus has previously been shown to be involved in object and scene recognition. In the present study, we investigated the neural representation of navigationally relevant locations. Healthy human adults viewed a route through a virtual museum with objects placed at intersections (decision points) or at simple turns (non-decision points). Event-related functional magnetic resonance imaging (fMRI) data were acquired during subsequent recognition of the objects in isolation. Neural activity in the parahippocampal gyrus reflected the navigational relevance of an object's location in the museum. Parahippocampal responses were selectively increased for objects that occurred at decision points, independent of attentional demands. This increase occurred for forgotten as well as remembered objects, showing implicit retrieval of navigational information. The automatic storage of relevant object location in the parahippocampal gyrus provides a part of the neural mechanism underlying successful navigation.
  • Järvikivi, J., Vainio, M., & Aalto, D. (2010). Real-time correlates of phonological quantity reveal unity of tonal and non-tonal languages. PLoS One, 5(9), e12603. doi:10.1371/journal.pone.0012603.

    Abstract

    Discrete phonological phenomena form our conscious experience of language: continuous changes in pitch appear as distinct tones to the speakers of tone languages, whereas the speakers of quantity languages experience duration categorically. The categorical nature of our linguistic experience is directly reflected in the traditionally clear-cut linguistic classification of languages into tonal or non-tonal. However, some evidence suggests that duration and pitch are fundamentally interconnected and co-vary in signaling word meaning in non-tonal languages as well. We show that pitch information affects real-time language processing in a (non-tonal) quantity language. The results suggest that there is no unidirectional causal link from a genetically-based perceptual sensitivity towards pitch information to the appearance of a tone language. They further suggest that the contrastive categories tone and quantity may be based on simultaneously co-varying properties of the speech signal and the processing system, even though the conscious experience of the speakers may highlight only one discrete variable at a time.
  • Jescheniak, J. D., Levelt, W. J. M., & Meyer, A. S. (2003). Specific word frequency is not all that counts in speech production: Comments on Caramazza, Costa, et al. (2001) and new experimental data. Journal of Experimental Psychology: Learning, Memory, & Cognition, 29(3), 432-438. doi:10.1037/0278-7393.29.3.432.

    Abstract

    A. Caramazza, A. Costa, M. Miozzo, and Y. Bi (2001) reported a series of experiments demonstrating that the ease of producing a word depends only on the frequency of that specific word but not on the frequency of a homophone twin. A. Caramazza, A. Costa, et al. concluded that homophones have separate word form representations and that the absence of frequency-inheritance effects for homophones undermines an important argument in support of 2-stage models of lexical access, which assume that syntactic (lemma) representations mediate between conceptual and phonological representations. The authors of this article evaluate the empirical basis of this conclusion, report 2 experiments demonstrating a frequency-inheritance effect, and discuss other recent evidence. It is concluded that homophones share a common word form and that the distinction between lemmas and word forms should be upheld.
  • Jesse, A., & Massaro, D. W. (2010). Seeing a singer helps comprehension of the song's lyrics. Psychonomic Bulletin & Review, 17, 323-328.

    Abstract

    When listening to speech, we often benefit when also seeing the speaker talk. If this benefit is not domain-specific for speech, then the recognition of sung lyrics should likewise benefit from seeing the singer. Nevertheless, previous research failed to obtain a substantial improvement in that domain. Our study shows that this failure was not due to inherent differences between singing and speaking but rather to less informative visual presentations. By presenting a professional singer, we found a substantial audiovisual benefit of about 35% improvement for lyrics recognition. This benefit was furthermore robust across participants, phrases, and repetitions of the test materials. Our results provide the first evidence that lyrics recognition, just like speech and music perception, is a multimodal process.
  • Jesse, A., & Massaro, D. W. (2010). The temporal distribution of information in audiovisual spoken-word identification. Attention, Perception & Psychophysics, 72(1), 209-225. doi:10.3758/APP.72.1.209.

    Abstract

    In the present study, we examined the distribution and processing of information over time in auditory and visual speech as it is used in unimodal and bimodal word recognition. English consonant-vowel-consonant words representing all possible initial consonants were presented as auditory, visual, or audiovisual speech in a gating task. The distribution of information over time varied across and within features. Visual speech information was generally fully available early during the phoneme, whereas auditory information was still accumulated. An audiovisual benefit was therefore already found early during the phoneme. The nature of the audiovisual recognition benefit changed, however, as more of the phoneme was presented. More features benefited at short gates rather than at longer ones. Visual speech information plays, therefore, a more important role early during the phoneme rather than later. The results of the study showed the complex interplay of information across modalities and time, since this is essential in determining the time course of audiovisual spoken-word recognition.
  • Jiang, J., Chen, C., Dai, B., Shi, G., Liu, L., & Lu, C. (2015). Leader emergence through interpersonal neural synchronization. Proceedings of the National Academy of Sciences of the United States of America, 112(14), 4274-4279. doi:10.1073/pnas.1422930112.

    Abstract

    The neural mechanism of leader emergence is not well understood. This study investigated (i) whether interpersonal neural synchronization (INS) plays an important role in leader emergence, and (ii) whether INS and leader emergence are associated with the frequency or the quality of communications. Eleven three-member groups were asked to perform a leaderless group discussion (LGD) task, and their brain activities were recorded via functional near infrared spectroscopy (fNIRS)-based hyperscanning. Video recordings of the discussions were coded for leadership and communication. Results showed that the INS for the leader–follower (LF) pairs was higher than that for the follower–follower (FF) pairs in the left temporo-parietal junction (TPJ), an area important for social mentalizing. Although communication frequency was higher for the LF pairs than for the FF pairs, the frequency of leader-initiated and follower-initiated communication did not differ significantly. Moreover, INS for the LF pairs was significantly higher during leader-initiated communication than during follower-initiated communications. In addition, INS for the LF pairs during leader-initiated communication was significantly correlated with the leaders’ communication skills and competence, but not their communication frequency. Finally, leadership could be successfully predicted based on INS as well as communication frequency early during the LGD (before half a minute into the task). In sum, this study found that leader emergence was characterized by high-level neural synchronization between the leader and followers and that the quality, rather than the frequency, of communications was associated with synchronization. These results suggest that leaders emerge because they are able to say the right things at the right time.
  • Johnson, E. K., Jusczyk, P. W., Cutler, A., & Norris, D. (2003). Lexical viability constraints on speech segmentation by infants. Cognitive Psychology, 46(1), 65-97. doi:10.1016/S0010-0285(02)00507-8.

    Abstract

    The Possible Word Constraint limits the number of lexical candidates considered in speech recognition by stipulating that input should be parsed into a string of lexically viable chunks. For instance, an isolated single consonant is not a feasible word candidate. Any segmentation containing such a chunk is disfavored. Five experiments using the head-turn preference procedure investigated whether, like adults, 12-month-olds observe this constraint in word recognition. In Experiments 1 and 2, infants were familiarized with target words (e.g., rush), then tested on lists of nonsense items containing these words in “possible” (e.g., “niprush” [nip + rush]) or “impossible” positions (e.g., “prush” [p + rush]). The infants listened significantly longer to targets in “possible” versus “impossible” contexts when targets occurred at the end of nonsense items (rush in “prush”), but not when they occurred at the beginning (tan in “tance”). In Experiments 3 and 4, 12-month-olds were similarly familiarized with target words, but test items were real words in sentential contexts (win in “wind” versus “window”). The infants listened significantly longer to words in the “possible” condition regardless of target location. Experiment 5 with targets at the beginning of isolated real words (e.g., win in “wind”) replicated Experiment 2 in showing no evidence of viability effects in beginning position. Taken together, the findings suggest that, in situations in which 12-month-olds are required to rely on their word segmentation abilities, they give evidence of observing lexical viability constraints in the way that they parse fluent speech.
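    As a purely illustrative sketch (toy parses of our own, not the study's stimuli or procedure), the Possible Word Constraint can be stated computationally: a candidate segmentation is disfavored when any residue chunk could not itself be a word, for instance when it lacks a vowel, as in the parse of "prush" as p + rush above. A minimal Python version:

      VOWELS = set("aeiou")

      def pwc_ok(segmentation):
          """True if every chunk in a proposed parse is lexically viable;
          'contains a vowel' is a toy stand-in for viability here."""
          return all(any(ch in VOWELS for ch in chunk) for chunk in segmentation)

      print(pwc_ok(["nip", "rush"]))  # True: both chunks could be words
      print(pwc_ok(["p", "rush"]))    # False: the residue "p" cannot be a word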
  • Johnson, E. K., & Tyler, M. (2010). Testing the limits of statistical learning for word segmentation. Developmental Science, 13, 339-345. doi:10.1111/j.1467-7687.2009.00886.x.

    Abstract

    Past research has demonstrated that infants can rapidly extract syllable distribution information from an artificial language and use this knowledge to infer likely word boundaries in speech. However, artificial languages are extremely simplified with respect to natural language. In this study, we ask whether infants’ ability to track transitional probabilities between syllables in an artificial language can scale up to the challenge of natural language. We do so by testing both 5.5- and 8-month-olds’ ability to segment an artificial language containing four words of uniform length (all CVCV) or four words of varying length (two CVCV, two CVCVCV). The transitional probability cues to word boundaries were held equal across the two languages. Both age groups segmented the language containing words of uniform length, demonstrating that even 5.5-month-olds are extremely sensitive to the conditional probabilities in their environment. However, neither age group succeeded in segmenting the language containing words of varying length, despite the fact that the transitional probability cues defining word boundaries were equally strong in the two languages. We conclude that infants’ statistical learning abilities may not be as robust as earlier studies have suggested.
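    For illustration only (the syllables below are invented, not the study's materials): the transitional-probability statistic that such artificial languages manipulate is TP(x -> y) = count(xy) / count(x), and a dip in TP relative to its neighbors marks a likely word boundary. A minimal Python sketch:

      import random
      from collections import Counter

      def transitional_probabilities(syllables):
          """TP(x -> y) = count(xy) / count(x), computed over a syllable stream."""
          pair_counts = Counter(zip(syllables, syllables[1:]))
          first_counts = Counter(syllables[:-1])
          return {pair: n / first_counts[pair[0]] for pair, n in pair_counts.items()}

      # Two invented trisyllabic "words" concatenated in random order:
      # within-word TPs come out at 1.0, across-boundary TPs near 0.5.
      words = [["go", "la", "tu"], ["pa", "bi", "ku"]]
      random.seed(0)
      stream = [syl for _ in range(200) for syl in random.choice(words)]
      for (a, b), tp in sorted(transitional_probabilities(stream).items()):
          print(f"{a} -> {b}: {tp:.2f}")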
  • Jongman, S. R., Roelofs, A., & Meyer, A. S. (2015). Sustained attention in language production: An individual differences investigation. Quarterly Journal of Experimental Psychology, 68, 710-730. doi:10.1080/17470218.2014.964736.

    Abstract

    Whereas it has long been assumed that most linguistic processes underlying language production happen automatically, accumulating evidence suggests that some form of attention is required. Here, we investigated the contribution of sustained attention, which is the ability to maintain alertness over time. First, the sustained attention ability of participants was measured using auditory and visual continuous performance tasks. Next, the participants described pictures using simple noun phrases while their response times (RTs) and gaze durations were measured. Earlier research has suggested that gaze duration reflects language planning processes up to and including phonological encoding. Individual differences in sustained attention ability correlated with individual differences in the magnitude of the tail of the RT distribution, reflecting the proportion of very slow responses, but not with individual differences in gaze duration. These results suggest that language production requires sustained attention, especially after phonological encoding.
  • Jongman, S. R., Meyer, A. S., & Roelofs, A. (2015). The role of sustained attention in the production of conjoined noun phrases: An individual differences study. PLoS One, 10(9): e0137557. doi:10.1371/journal.pone.0137557.

    Abstract

    It has previously been shown that language production, performed simultaneously with a nonlinguistic task, involves sustained attention. Sustained attention concerns the ability to maintain alertness over time. Here, we aimed to replicate the previous finding by showing that individuals call upon sustained attention when they plan single noun phrases (e.g., "the carrot") and perform a manual arrow categorization task. In addition, we investigated whether speakers also recruit sustained attention when they produce conjoined noun phrases (e.g., "the carrot and the bucket") describing two pictures, that is, when both the first and second task are linguistic. We found that sustained attention correlated with the proportion of abnormally slow phrase-production responses. Individuals with poor sustained attention displayed a greater number of very slow responses than individuals with better sustained attention. Importantly, this relationship was obtained both for the production of single phrases while performing a nonlinguistic manual task, and the production of noun phrase conjunctions in referring to two spatially separated objects. Inhibition and updating abilities were also measured. These scores did not correlate with our measure of sustained attention, suggesting that sustained attention and executive control are distinct. Overall, the results suggest that planning conjoined noun phrases involves sustained attention, and that language production happens less automatically than has often been assumed.
  • Jordan, F., & Dunn, M. (2010). Kin term diversity is the result of multilevel, historical processes [Comment on Doug Jones]. Behavioral and Brain Sciences, 33, 388. doi:10.1017/S0140525X10001962.

    Abstract

    Explanations in the domain of kinship can be sought on several different levels: Jones addresses online processing, as well as issues of origins and innateness. We argue that his framework can more usefully be applied at the levels of developmental and historical change, the latter especially. A phylogenetic approach to the diversity of kinship terminologies is most urgently required.
  • Jordens, P. (2004). Systematiek en dynamiek bij de verwerving van Finietheid [Systematicity and dynamics in the acquisition of finiteness]. Toegepaste Taalwetenschap in Artikelen, 71, 9-22.

    Abstract

    In early Dutch learner varieties, there is no evidence of finiteness being a functional category. There is no V2nd: no correlation between inflectional morphology and movement. Initially, learners express the illocutive function of finiteness through the use of illocutive markers, with the non-use of an illocutive marker expressing the default illocutive function of assertion. Illocutive markers are functioning as adjuncts with scope over the predicate. Illocutive markers become re-analysed as functional elements. The driving force is the acquisition of the auxiliary verbs that occur with past participles. It leads to a reanalysis of illocutive markers as two separate elements: an auxiliary verb and a scope adverb. The (modal) auxiliary carries illocutive function. Lexical verb-argument structure (including the external argument) occurs within the domain of the auxiliary verb. The predicate as the focus constituent occurs within the domain of a scope adverb. This reanalysis establishes a position for the external argument within the domain of AUX. The acquisition of AUX causes the acquisition of a (hierarchical) structure with a complement as a constituent which represents an underlying verb-argument structure, a predicate as the domain of elements that are in focus, and an external (specifier) position as a landing site for elements with topic function.
  • Karlebach, G., & Francks, C. (2015). Lateralization of gene expression in human language cortex. Cortex, 67, 30-36. doi:10.1016/j.cortex.2015.03.003.

    Abstract

    Lateralization is an important aspect of the functional brain architecture for language and other cognitive faculties. The molecular genetic basis of human brain lateralization is unknown, and recent studies have suggested that gene expression in the cerebral cortex is bilaterally symmetrical. Here we have re-analyzed two transcriptomic datasets derived from post mortem human cerebral cortex, with a specific focus on superior temporal and auditory language cortex in adults. We applied an empirical Bayes approach to model differential left-right expression, together with gene ontology analysis and meta-analysis. There was robust and reproducible lateralization of individual genes and gene ontology groups that are likely to fine-tune the electrophysiological and neurotransmission properties of cortical circuits, most notably synaptic transmission, nervous system development and glutamate receptor activity. Our findings anchor the cerebral biology of language to the molecular genetic level. Future research in model systems may determine how these molecular signatures of neurophysiological lateralization effect fine-tuning of cerebral cortical function, differently in the two hemispheres.
  • Kartushina, N., Hervais-Adelman, A., Frauenfelder, U. H., & Golestani, N. (2015). The effect of phonetic production training with visual feedback on the perception and production of foreign speech sounds. The Journal of the Acoustical Society of America, 138(2), 817-832. doi:10.1121/1.4926561.

    Abstract

    Second-language learners often experience major difficulties in producing non-native speech sounds. This paper introduces a training method that uses a real-time analysis of the acoustic properties of vowels produced by non-native speakers to provide them with immediate, trial-by-trial visual feedback about their articulation alongside that of the same vowels produced by native speakers. The Mahalanobis acoustic distance between non-native productions and target native acoustic spaces was used to assess L2 production accuracy. The experiment shows that 1 h of training per vowel improves the production of four non-native Danish vowels: the learners' productions were closer to the corresponding Danish target vowels after training. The production performance of a control group remained unchanged. Comparisons of pre- and post-training vowel discrimination performance in the experimental group showed improvements in perception. Correlational analyses of training-related changes in production and perception revealed no relationship. These results suggest, first, that this training method is effective in improving non-native vowel production. Second, training purely on production improves perception. Finally, it appears that improvements in production and perception do not systematically progress at equal rates within individuals.
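    The acoustic distance named in this abstract is the standard Mahalanobis distance: for a learner token x and native productions with mean mu and covariance Sigma, d(x) = sqrt((x - mu)^T Sigma^(-1) (x - mu)). A minimal sketch with invented F1/F2 formant values (not the study's data):

      import numpy as np

      def mahalanobis(token, native_tokens):
          """Distance of one (F1, F2) token from the cloud of native tokens."""
          mu = native_tokens.mean(axis=0)
          cov_inv = np.linalg.inv(np.cov(native_tokens, rowvar=False))
          d = token - mu
          return float(np.sqrt(d @ cov_inv @ d))

      # Hypothetical native productions of one target vowel, (F1, F2) in Hz.
      rng = np.random.default_rng(0)
      native = rng.normal(loc=[400.0, 1800.0], scale=[40.0, 120.0], size=(50, 2))
      learner_token = np.array([480.0, 1500.0])
      print(f"Distance to native vowel space: {mahalanobis(learner_token, native):.2f}")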
  • Kelly, B. F., Kidd, E., & Wigglesworth, G. (2015). Indigenous children's language: Acquisition, preservation and evolution of language in minority contexts. First Language, 35(4-5), 279-285. doi:10.1177/0142723715618056.

    Abstract

    A comprehensive theory of language acquisition must explain how human infants can learn any one of the world’s 7000 or so languages. As such, an important part of understanding how languages are learned is to investigate acquisition across a range of diverse languages and sociocultural contexts. To this end, cross-linguistic and cross-cultural language research has been pervasive in the field of first language acquisition since the early 1980s. In groundbreaking work, Slobin (1985) noted that the study of acquisition in cross-linguistic perspective can be used to reveal both developmental universals and language-specific acquisition patterns. Since this observation there have been several waves of cross-linguistic first language acquisition research, and more recently we have seen a rise in research investigating lesser-known languages. This special issue brings together work on several such languages, spoken in minority contexts. It is the first collection of language development research dedicated to the acquisition of under-studied or little-known languages and by extension, different cultures.

    Why lesser-known languages, and why minority contexts? First and foremost, acquisition theories need data from different languages, language families and cultural groups across the broadest typological array possible, and yet many theories of acquisition have been developed through analyses of English and other major world languages. Thus they are likely to be skewed by sampling bias. Languages of European origin constitute a small percentage of the total number of languages spoken worldwide. The Ethnologue (2015) lists 7102 languages spoken across the world. Of these, only 286 languages are languages of European origin, a mere 4% of the total number of languages spoken across the planet, and representing approximately only 26% of the total number of language speakers alive today. Compare this to the languages of the Pacific. The Ethnologue lists 1313 languages spoken in the Pacific, constituting 18.5% of the world’s languages. Of these, very few have been described, and even fewer have child language data available. Lieven and Stoll (2010) note that only around 70–80 languages have been the focus of acquisition studies (around 1% of the world’s languages). This somewhat alarming statistic suggests that the time is now ripe for researchers working on lesser-known languages to contribute to the field’s knowledge about how children learn a range of very different languages across differing cultures, and in doing so, for this research to make a contribution to language acquisition theory.

    The potential benefits are many. First, decades of descriptive work in linguistic typology have culminated in strong challenges to the existence of a Universal Grammar (Evans & Levinson, 2009), a long-held axiom of formal language acquisition theory. To be sure, cross-linguistic work in acquisition has long fuelled this debate (e.g. MacWhinney & Bates, 1989), but only as we collect a greater number of data points will we move closer toward a better understanding of the initial state of the human capacity for language and the types of social and cultural contexts in which language is successfully transmitted. A focus on linguistic diversity enables the investigation and postulation of universals in language acquisition, if and in whatever form they exist. In doing so, we can determine the sorts of things that are evident in child-directed speech, in children’s language production and in adult language, teasing out the threads at the intersection of language, culture and cognition. The study and dissemination of research into lesser-known, under-described languages with small communities significantly contributes to this aim because it not only reflects the diversity of languages present in the world, but provides a better representation of the social and economic conditions under which the majority of the world’s population acquire language (Henrich, Heine, & Norenzayan, 2010).

    Related to this point, the study of smaller languages has taken on intense urgency in the past few decades due to the rapid extinction of these languages (Evans, 2010). The Language Documentation movement has toiled tirelessly in the pursuit of documenting languages before they disappear, an effort to which child language researchers have much to offer. Many children acquire smaller and minority languages in rich multilingual environments, where the influence of dominant languages affects acquisition (e.g., Stoll, Zakharko, Moran, Schikowski, & Bickel, 2015). Understanding the acquisition process where systems compete and may be in flux due to language contact, while no small task, will help us understand the social and economic conditions which favour successful preservation of minority languages, which could ultimately equip communities with the tools to stem the flow of language loss. With these points in mind we now turn to the articles in this special issue.
  • Kelly, S., Healey, M., Ozyurek, A., & Holler, J. (2015). The processing of speech, gesture and action during language comprehension. Psychonomic Bulletin & Review, 22, 517-523. doi:10.3758/s13423-014-0681-7.

    Abstract

    Hand gestures and speech form a single integrated system of meaning during language comprehension, but is gesture processed with speech in a unique fashion? We had subjects watch multimodal videos that presented auditory (words) and visual (gestures and actions on objects) information. Half of the subjects related the audio information to a written prime presented before the video, and the other half related the visual information to the written prime. For half of the multimodal video stimuli, the audio and visual information contents were congruent, and for the other half, they were incongruent. For all subjects, stimuli in which the gestures and actions were incongruent with the speech produced more errors and longer response times than did stimuli that were congruent, but this effect was less prominent for speech-action stimuli than for speech-gesture stimuli. However, subjects focusing on visual targets were more accurate when processing actions than gestures. These results suggest that although actions may be easier to process than gestures, gestures may be more tightly tied to the processing of accompanying speech.
  • Kelly, S. D., Ozyurek, A., & Maris, E. (2010). Two sides of the same coin: Speech and gesture mutually interact to enhance comprehension. Psychological Science, 21, 260-267. doi:10.1177/0956797609357327.

    Abstract

    Gesture and speech are assumed to form an integrated system during language production. Based on this view, we propose the integrated‐systems hypothesis, which explains two ways in which gesture and speech are integrated—through mutual and obligatory interactions—in language comprehension. Experiment 1 presented participants with action primes (e.g., someone chopping vegetables) and bimodal speech and gesture targets. Participants related primes to targets more quickly and accurately when they contained congruent information (speech: “chop”; gesture: chop) than when they contained incongruent information (speech: “chop”; gesture: twist). Moreover, the strength of the incongruence affected processing, with fewer errors for weak incongruities (speech: “chop”; gesture: cut) than for strong incongruities (speech: “chop”; gesture: twist). Crucial for the integrated‐systems hypothesis, this influence was bidirectional. Experiment 2 demonstrated that gesture’s influence on speech was obligatory. The results confirm the integrated‐systems hypothesis and demonstrate that gesture and speech form an integrated system in language comprehension.
  • Kempen, G. (1973). [Review of the book Psycholinguïstiek by B. Tervoort et al.]. Nederlands Tijdschrift voor de Psychologie, 28, 172-174.
  • Kempen, G., & Harbusch, K. (2003). An artificial opposition between grammaticality and frequency: Comment on Bornkessel, Schlesewsky & Friederici (2002). Cognition, 90(2), 205-210 [Rectification on p. 215]. doi:10.1016/S0010-0277(03)00145-8.

    Abstract

    In a recent Cognition paper (Cognition 85 (2002) B21), Bornkessel, Schlesewsky, and Friederici report ERP data that they claim “show that online processing difficulties induced by word order variations in German cannot be attributed to the relative infrequency of the constructions in question, but rather appear to reflect the application of grammatical principles during parsing” (p. B21). In this commentary we demonstrate that the posited contrast between grammatical principles and construction (in)frequency as sources of parsing problems is artificial because it is based on factually incorrect assumptions about the grammar of German and on inaccurate corpus frequency data concerning the German constructions involved.
  • Kempen, G. (1996). Lezen, leren lezen, dyslexie: De auditieve basis van visuele woordherkenning [Reading, learning to read, dyslexia: The auditory basis of visual word recognition]. Nederlands Tijdschrift voor de Psychologie, 51, 91-100.
  • Kempen, G. (1985). Psychologie 2000. Toegepaste psychologie in de informatiemaatschappij [Psychology 2000: Applied psychology in the information society]. Computers in de psychologie, 13-21.
  • Kempen, G. (1996). Wetenschap op internet: Een voorstel voor de Nederlandse Psychonomie [Science on the internet: A proposal for Dutch psychonomics]. Nieuwsbrief Nederlandse Vereniging voor Psychonomie, 3, 5-8.
  • Kemps, R. J. J. K., Ernestus, M., Schreuder, R., & Baayen, R. H. (2004). Processing reduced word forms: The suffix restoration effect. Brain and Language, 90(1-3), 117-127. doi:10.1016/S0093-934X(03)00425-5.

    Abstract

    Listeners cannot recognize highly reduced word forms in isolation, but they can do so when these forms are presented in context (Ernestus, Baayen, & Schreuder, 2002). This suggests that not all possible surface forms of words have equal status in the mental lexicon. The present study shows that the reduced forms are linked to the canonical representations in the mental lexicon, and that these latter representations induce reconstruction processes. Listeners restore suffixes that are partly or completely missing in reduced word forms. A series of phoneme-monitoring experiments reveals the nature of this restoration: the basis for suffix restoration is mainly phonological in nature, but orthography has an influence as well.
  • Kendrick, K. H. (2015). Other-initiated repair in English. Open Linguistics, 1, 164-190. doi:10.2478/opli-2014-0009.

    Abstract

    The practices of other-initiation of repair provide speakers with a set of solutions to one of the most basic problems in conversation: troubles of speaking, hearing, and understanding. Based on a collection of 227 cases systematically identified in a corpus of English conversation, this article describes the formats and practices of other-initiations of repair attested in the corpus and reports their quantitative distribution. In addition to straight other-initiations of repair, the identification of all possible cases also yielded a substantial proportion in which speakers use other-initiations to perform other actions, including non-serious actions, such as jokes and teases, preliminaries to dispreferred responses, and displays of surprise and disbelief. A distinction is made between other-initiations that perform additional actions concurrently and those that formally resemble straight other-initiations but analyzably do not initiate repair as an action.
  • Kendrick, K. H. (2015). The intersection of turn-taking and repair: The timing of other-initiations of repair in conversation. Frontiers in Psychology, 6: 250. doi:10.3389/fpsyg.2015.00250.

    Abstract

    The transitions between turns at talk in conversation tend to occur quickly, with only a slight gap of approximately 100 to 300 ms between them. This estimate of central tendency, however, hides a wealth of complex variation, as a number of factors, such as the type of turns involved, have been shown to influence the timing of turn transitions. This article considers one specific type of turn that does not conform to the statistical trend, namely turns that deal with troubles of speaking, hearing, and understanding, known as other-initiations of repair. The results of a quantitative analysis of 169 other-initiations of repair in face-to-face conversation reveal that the most frequent cases occur after gaps of approximately 700 ms. Furthermore, other-initiations of repair that locate a source of trouble in a prior turn specifically tend to occur after shorter gaps than those that do not, and those that correct errors in a prior turn, while rare, tend to occur without delay. An analysis of the transitions before other-initiations of repair, using methods of conversation analysis, suggests that speakers use the extra time (i) to search for a late recognition of the problematic turn, (ii) to provide an opportunity for the speaker of the problematic turn to resolve the trouble independently, and (iii) to produce visual signals, such as facial gestures. In light of these results, it is argued that other-initiations of repair take priority over other turns at talk in conversation and therefore are not subject to the same rules and constraints that motivate fast turn transitions in general.
  • Kendrick, K. H., & Torreira, F. (2015). The timing and construction of preference: A quantitative study. Discourse Processes, 52(4), 255-289. doi:10.1080/0163853X.2014.955997.

    Abstract

    Conversation-analytic research has argued that the timing and construction of preferred responding actions (e.g., acceptances) differ from that of dispreferred responding actions (e.g., rejections), potentially enabling early response prediction by recipients. We examined 195 preferred and dispreferred responding actions in telephone corpora and found that the timing of the most frequent cases of each type did not differ systematically. Only for turn transitions of 700 ms or more was the proportion of dispreferred responding actions clearly greater than that of preferreds. In contrast, an analysis of the timing that included turn formats (i.e., those with or without qualification) revealed clearer differences. Small departures from a normal gap duration decrease the likelihood of a preferred action in a preferred turn format (e.g., a simple “yes”). We propose that the timing of a response is best understood as a turn-constructional feature, the first virtual component of a preferred or dispreferred turn format.
  • Kidd, E., Chan, A., & Chiu, J. (2015). Cross-linguistic influence in simultaneous Cantonese–English bilingual children's comprehension of relative clauses. Bilingualism: Language and Cognition, 18(3), 438-452. doi:10.1017/S1366728914000649.

    Abstract

    The current study investigated the role of cross-linguistic influence in Cantonese–English bilingual children's comprehension of subject- and object-extracted relative clauses (RCs). Twenty simultaneous Cantonese–English bilingual children (Mage = 8;11, SD = 2;6) and 20 vocabulary-matched Cantonese monolingual children (Mage = 6;4, SD = 1;3) completed a test of Cantonese RC comprehension. The bilingual children also completed a test of English RC comprehension. The results showed that, whereas the monolingual children were equally competent on subject and object RCs, the bilingual children performed significantly better on subject RCs. Error analyses suggested that the bilingual children were most often correctly assigning thematic roles in object RCs, but were incorrectly choosing the RC subject as the head referent. This pervasive error was interpreted to be due to the fact that both Cantonese and English have canonical SVO word order, which creates competition with structures that compete with an object RC analysis.
  • Kidd, E. (2015). Incorporating learning into theories of parsing. Linguistic Approaches to Bilingualism, 5(4), 487-493. doi:10.1075/lab.5.4.08kid.
  • Kidd, E. (2004). Grammars, parsers, and language acquisition. Journal of Child Language, 31(2), 480-483. doi:10.1017/S0305000904006117.

    Abstract

    Drozd's critique of Crain & Thornton's (C&T) (1998) book Investigations in Universal Grammar (IUG) raises many issues concerning theory and experimental design within generative approaches to language acquisition. I focus here on one of the strongest theoretical claims of the Modularity Matching Model (MMM): continuity of processing. For reasons different to Drozd, I argue that the assumption is tenuous. Furthermore, I argue that the focus of the MMM and the methodological prescriptions contained in IUG are too narrow to capture language acquisition.
  • Kidd, E., Lieven, E., & Tomasello, M. (2010). Lexical frequency and exemplar-based learning effects in language acquisition: evidence from sentential complements. Language Sciences, 32(1), 132-142. doi:10.1016/j.langsci.2009.05.002.

    Abstract

    Usage-based approaches to language acquisition argue that children acquire the grammar of their target language using general-cognitive learning principles. The current paper reports on an experiment that tested a central assumption of the usage-based approach: argument structure patterns are connected to high frequency verbs that facilitate acquisition. Sixty children (N = 60) aged 4 and 6 years participated in a sentence recall/lexical priming experiment that manipulated the frequency with which the target verbs occurred in the finite sentential complement construction in English. The results showed that the children performed better on sentences that contained high frequency verbs. Furthermore, the children’s performance suggested that their knowledge of finite sentential complements relies most heavily on one particular verb – think, supporting arguments made by Goldberg [Goldberg, A.E., 2006. Constructions at Work: The Nature of Generalization in Language. Oxford University Press, Oxford], who argued that skewed input facilitates language learning.
  • Kidd, E. (2003). Relative clause comprehension revisited: Commentary on Eisenberg (2002). Journal of Child Language, 30(3), 671-679. doi:10.1017/S0305000903005683.

    Abstract

    Eisenberg (2002) presents data from an experiment investigating three- and four-year-old children's comprehension of restrictive relative clauses (RC). From the results she argues, contrary to Hamburger & Crain (1982), that children do not have discourse knowledge of the felicity conditions of RCs before acquiring the syntax of relativization. This note evaluates this conclusion on the basis of the methodology used, and proposes that an account of syntactic development needs to be sensitive to the real-time processing requirements acquisition places on the learner.
  • Kidd, E., Tennant, E., & Nitschke, S. (2015). Shared abstract representation of linguistic structure in bilingual sentence comprehension. Psychonomic Bulletin & Review, 22(4), 1062-1067. doi:10.3758/s13423-014-0775-2.

    Abstract

    Although there is strong evidence for shared abstract grammatical structure in bilingual speakers from studies of sentence production, comparable evidence from studies of comprehension is lacking. Twenty-seven (N = 27) English-German bilingual adults participated in a structural priming study where unambiguous English subject and object relative clause (RC) structures were used to prime corresponding subject and object RC interpretations of structurally ambiguous German RCs. The results showed that English object RCs primed significantly more object RC interpretations in German in comparison to baseline and subject RC prime conditions, but that English subject RC primes did not change the participants’ baseline preferences. This is the first study to report abstract crosslinguistic priming in comprehension. The results specifically suggest that word order overlap supports the integration of syntactic structures from different languages in bilingual speakers, and that these shared representations are used in comprehension as well as production.
  • Kidd, E., Rogers, P., & Rogers, C. (2010). The personality correlates of adults who had imaginary companions in childhood. Psychological Reports, 107(1), 163-172. doi:10.2466/02.04.10.pr0.107.4.163-172.

    Abstract

    Two studies showed that adults who reported having an imaginary companion as a child differed from adults who did not on certain personality dimensions. The first yielded a higher mean on the Gough Creative Personality Scale for the group who had imaginary companions. Study 2 showed that such adults scored higher on the Achievement and Absorption subscales of Tellegen's Multidimensional Personality Questionnaire. The results suggest that some differences reported in the developmental literature may be observed in adults.
  • Kircher, T. T. J., Brammer, M. J., Levelt, W. J. M., Bartels, M., & McGuire, P. K. (2004). Pausing for thought: Engagement of left temporal cortex during pauses in speech. NeuroImage, 21(1), 84-90. doi:10.1016/j.neuroimage.2003.09.041.

    Abstract

    Pauses during continuous speech, particularly those that occur within clauses, are thought to reflect the planning of forthcoming verbal output. We used functional Magnetic Resonance Imaging (fMRI) to examine their neural correlates. Six volunteers were scanned while describing seven Rorschach inkblots, producing 3 min of speech per inkblot. In an event-related design, the level of blood oxygenation level dependent (BOLD) contrast during brief speech pauses (mean duration 1.3 s, SD 0.3 s) during overt speech was contrasted with that during intervening periods of articulation. We then examined activity associated with pauses that occurred within clauses and pauses that occurred at grammatical junctions. Relative to articulation during speech, pauses were associated with activation in the banks of the left superior temporal sulcus (BA 39/22), at the temporoparietal junction. Continuous speech was associated with greater activation bilaterally in the inferior frontal (BA 44/45), middle frontal (BA 8) and anterior cingulate (BA 24) gyri, the middle temporal sulcus (BA 21/22), the occipital cortex and the cerebellum. Left temporal activation was evident during pauses that occurred within clauses but not during pauses at grammatical junctions. In summary, articulation during continuous speech involved frontal, temporal and cerebellar areas, while pausing was associated with activity in the left temporal cortex, especially when this occurred within a clause. The latter finding is consistent with evidence that within-clause pauses are a correlate of speech planning and in particular lexical retrieval.
  • Kita, S., & Ozyurek, A. (2003). What does cross-linguistic variation in semantic coordination of speech and gesture reveal? Evidence for an interface representation of spatial thinking and speaking. Journal of Memory and Language, 48(1), 16-32. doi:10.1016/S0749-596X(02)00505-3.

    Abstract

    Gestures that spontaneously accompany speech convey information coordinated with the concurrent speech. There has been considerable theoretical disagreement about the process by which this informational coordination is achieved. Some theories predict that the information encoded in gesture is not influenced by how information is verbally expressed. However, others predict that gestures encode only what is encoded in speech. This paper investigates this issue by comparing informational coordination between speech and gesture across different languages. Narratives in Turkish, Japanese, and English were elicited using an animated cartoon as the stimulus. It was found that gestures used to express the same motion events were influenced simultaneously by (1) how features of motion events were expressed in each language, and (2) spatial information in the stimulus that was never verbalized. From this, it is concluded that gestures are generated from spatio-motoric processes that interact on-line with the speech production process. Through the interaction, spatio-motoric information to be expressed is packaged into chunks that are verbalizable within a processing unit for speech formulation. In addition, we propose a model of speech and gesture production as one of a class of frameworks that are compatible with the data.
  • Klein, W. (Ed.). (2004). Philologie auf neuen Wegen [Special Issue]. Zeitschrift für Literaturwissenschaft und Linguistik, 136.
  • Klein, W. (Ed.). (2004). Universitas [Special Issue]. Zeitschrift für Literaturwissenschaft und Linguistik (LiLi), 134.
  • Klein, W. (2004). Vom Wörterbuch zum digitalen lexikalischen System. Zeitschrift für Literaturwissenschaft und Linguistik, 136, 10-55.
  • Klein, W. (2003). Wozu braucht man eigentlich Flexionsmorphologie? Zeitschrift für Literaturwissenschaft und Linguistik, 131, 23-54.
  • Klein, M., Van der Vloet, M., Harich, B., Van Hulzen, K. J., Onnink, A. M. H., Hoogman, M., Guadalupe, T., Zwiers, M., Groothuismink, J. M., Verberkt, A., Nijhof, B., Castells-Nobau, A., Faraone, S. V., Buitelaar, J. K., Schenck, A., Arias-Vasquez, A., Franke, B., & Psychiatric Genomics Consortium ADHD Working Group (2015). Converging evidence does not support GIT1 as an ADHD risk gene. American Journal of Medical Genetics Part B: Neuropsychiatric Genetics, 168, 492-507. doi:10.1002/ajmg.b.32327.

    Abstract

    Attention-Deficit/Hyperactivity Disorder (ADHD) is a common neuropsychiatric disorder with a complex genetic background. The G protein-coupled receptor kinase interacting ArfGAP 1 (GIT1) gene was previously associated with ADHD. We aimed at replicating the association of GIT1 with ADHD and investigated its role in cognitive and brain phenotypes. Gene-wide and single variant association analyses for GIT1 were performed for three cohorts: (1) the ADHD meta-analysis data set of the Psychiatric Genomics Consortium (PGC, N=19,210), (2) the Dutch cohort of the International Multicentre persistent ADHD CollaboraTion (IMpACT-NL, N=225), and (3) the Brain Imaging Genetics cohort (BIG, N=1,300). Furthermore, functionality of the rs550818 variant as an expression quantitative trait locus (eQTL) for GIT1 was assessed in human blood samples. By using Drosophila melanogaster as a biological model system, we manipulated Git expression according to the outcome of the expression result and studied the effect of Git knockdown on neuronal morphology and locomotor activity. Association of rs550818 with ADHD was not confirmed, nor did a combination of variants in GIT1 show association with ADHD or any related measures in either of the investigated cohorts. However, the rs550818 risk-genotype did reduce GIT1 expression level. Git knockdown in Drosophila caused abnormal synapse and dendrite morphology, but did not affect locomotor activity. In summary, we could not confirm GIT1 as an ADHD candidate gene, while rs550818 was found to be an eQTL for GIT1. Despite GIT1's regulation of neuronal morphology, alterations in gene expression do not appear to have ADHD-related behavioral consequences.
  • Klein, W., & Schlieben-Lange, B. (1996). Das Ich und die Sprache. Zeitschrift für Literaturwissenschaft und Linguistik, 101, 1-5.
  • Klein, W. (1973). Eine Analyse der Kerne in Schillers "Räuber". Cahiers de linguistique théorique et appliquée, 10, 195-200.
  • Klein, W. (1985). Einleitung. Zeitschrift für Literaturwissenschaft und Linguistik, 15(59), 7-8.
  • Klein, W., & Winkler, S. (2010). Einleitung. Zeitschrift für Literaturwissenschaft und Linguistik, 158, 5-7.
  • Klein, W., & Winkler, S. (Eds.). (2010). Ambiguität [Special Issue]. Zeitschrift für Literaturwissenschaft und Linguistik, 40(158).
  • Klein, W., & Franceschini, R. (Eds.). (2003). Einfache Sprache [Special Issue]. Zeitschrift für Literaturwissenschaft und Linguistik, 131.
  • Klein, W. (2004). Auf der Suche nach den Prinzipien, oder: Warum die Geisteswissenschaften auf dem Rückzug sind. Zeitschrift für Literaturwissenschaft und Linguistik, 134, 19-44.
  • Klein, W. (2004). Im Lauf der Jahre. Linguistische Berichte, 200, 397-407.
  • Klein, W. (1985). Gesprochene Sprache - geschriebene Sprache. Zeitschrift für Literaturwissenschaft und Linguistik, 59, 9-35.
  • Klein, W. (2010). On times and arguments. Linguistics, 48, 1221-1253. doi:10.1515/LING.2010.040.

    Abstract

    Verbs are traditionally assumed to have an “argument structure”, which imposes various constraints on form and meaning of the noun phrases that go with the verb, and an “event structure”, which defines certain temporal characteristics of the “event” to which the verb relates. In this paper, I argue that these two structures should be brought together. The verb assigns descriptive properties to one or more arguments at one or more temporal intervals, hence verbs have an “argument-time structure”. This argument-time structure as well as the descriptive properties connected to it can be modified by various morphological and syntactic operations. This approach allows a relatively simple analysis of familiar but not well-defined temporal notions such as tense, aspect and Aktionsart. This will be illustrated for English. It will be shown that a few simple morphosyntactic operations on the argument-time structure might account for form and meaning of the perfect, the progressive, the passive and related constructions.
  • Klein, W., & Schlieben-Lange, B. (Eds.). (1996). Sprache und Subjektivität I [Special Issue]. Zeitschrift für Literaturwissenschaft und Linguistik, (101).
  • Klein, W., & Schlieben-Lange, B. (Eds.). (1996). Sprache und Subjektivität II [Special Issue]. Zeitschrift für Literaturwissenschaft und Linguistik, (102).
  • Klein, W. (Ed.). (1985). Schriftlichkeit [Special Issue]. Zeitschrift für Literaturwissenschaft und Linguistik, (59).
  • Klein, W. (2004). Was die Geisteswissenschaften leider noch von den Naturwissenschaften unterscheidet. Gegenworte, 13, 79-84.
  • Klein, W. (Ed.). (1996). Zweitspracherwerb [Special Issue]. Zeitschrift für Literaturwissenschaft und Linguistik, (104).
  • Knudsen, B., Fischer, M., & Aschersleben, G. (2015). The development of Arabic digit knowledge in 4-to-7-year-old children. Journal of Numerical Cognition, 1(1), 21-37. doi:10.5964/jnc.v1i1.4.

    Abstract

    Recent studies indicate that Arabic digit knowledge rather than non-symbolic number knowledge is a key foundation for arithmetic proficiency at the start of a child’s mathematical career. We document the developmental trajectory of 4- to 7-year-olds’ proficiency in accessing magnitude information from Arabic digits in five tasks differing in magnitude manipulation requirements. Results showed that children from 5 years onwards accessed magnitude information implicitly and explicitly, but that 5-year-olds failed to access magnitude information explicitly when numerical magnitude was contrasted with physical magnitude. Performance across tasks revealed a clear developmental trajectory: children traverse from first knowing the cardinal values of number words to recognizing Arabic digits to knowing their cardinal values and, concurrently, their ordinal position. Correlational analyses showed a strong within-child consistency, demonstrating that this pattern is not only reflected in group differences but also in individual performance.
  • Kong, X., Liu, Z., Huang, L., Wang, X., Yang, Z., Zhou, G., Zhen, Z., & Liu, J. (2015). Mapping Individual Brain Networks Using Statistical Similarity in Regional Morphology from MRI. PLoS One, 10(11): e0141840. doi:10.1371/journal.pone.0141840.

    Abstract

    Representing brain morphology as a network has the advantage that the regional morphology of ‘isolated’ structures can be described statistically based on graph theory. However, very few studies have investigated brain morphology from the holistic perspective of complex networks, particularly in individual brains. We proposed a new network framework for individual brain morphology. Technically, in the new network, nodes are defined as regions based on a brain atlas, and edges are estimated using our newly-developed inter-regional relation measure based on regional morphological distributions. This implementation allows nodes in the brain network to be functionally/anatomically homogeneous but different with respect to shape and size. We first demonstrated the new network framework in a healthy sample. Thereafter, we studied the graph-theoretical properties of the networks obtained and compared the results with previous morphological, anatomical, and functional networks. The robustness of the method was assessed via measurement of the reliability of the network metrics using a test-retest dataset. Finally, to illustrate potential applications, the networks were used to measure age-related changes in commonly used network metrics. Results suggest that the proposed method could provide a concise description of brain organization at a network level and be used to investigate interindividual variability in brain morphology from the perspective of complex networks. Furthermore, the method could open a new window into modeling the complexly distributed brain and facilitate the emerging field of human connectomics.

    Additional information

    https://www.nitrc.org/
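    To make the described network construction concrete, the following is a minimal Python sketch in the same spirit: nodes are atlas regions, and edge weights are statistical similarities between regional distributions of a morphological measure. It is an illustration under stated assumptions, not the authors' implementation; the number of regions, the per-region data, and the use of a Kolmogorov-Smirnov-based similarity are all hypothetical choices.

      # Sketch: an individual brain network from regional morphological distributions.
      # Not the method of Kong et al. (2015); inputs and similarity measure are illustrative.
      import numpy as np
      import networkx as nx
      from scipy.stats import ks_2samp

      rng = np.random.default_rng(0)
      # Hypothetical input: for each of 10 atlas regions, a vector of voxelwise
      # gray-matter values sampled within that region for one participant.
      regions = {f"region_{i}": rng.normal(rng.uniform(0, 1), 1.0, size=500)
                 for i in range(10)}

      G = nx.Graph()
      G.add_nodes_from(regions)
      names = list(regions)
      for i, a in enumerate(names):
          for b in names[i + 1:]:
              ks_stat, _ = ks_2samp(regions[a], regions[b])
              # Smaller KS distance = more similar distributions = stronger edge.
              G.add_edge(a, b, weight=1.0 - ks_stat)

      # A graph-theoretical summary of the kind compared across network studies.
      print(nx.average_clustering(G, weight="weight"))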
  • Konopka, A. E., & Kuchinsky, S. E. (2015). How message similarity shapes the timecourse of sentence formulation. Journal of Memory and Language, 84, 1-23. doi:10.1016/j.jml.2015.04.003.
  • Kos, M., Vosse, T. G., Van den Brink, D., & Hagoort, P. (2010). About edible restaurants: Conflicts between syntax and semantics as revealed by ERPs. Frontiers in Psychology, 1, E222. doi:10.3389/fpsyg.2010.00222.

    Abstract

    In order to investigate conflicts between semantics and syntax, we recorded ERPs, while participants read Dutch sentences. Sentences containing conflicts between syntax and semantics (Fred eats in a sandwich…/ Fred eats a restaurant…) elicited an N400. These results show that conflicts between syntax and semantics do not necessarily lead to P600 effects and are in line with the processing competition account. According to this parallel account the syntactic and semantic processing streams are fully interactive and information from one level can influence the processing at another level. The relative strength of the cues of the processing streams determines which level is affected most strongly by the conflict. The processing competition account maintains the distinction between the N400 as index for semantic processing and the P600 as index for structural processing.
  • Krott, A., Hagoort, P., & Baayen, R. H. (2004). Sublexical units and supralexical combinatories in the processing of interfixed Dutch compounds. Language and Cognitive Processes, 19(3), 453-471. doi:10.1080/769813936.

    Abstract

    This study addresses the supralexical inferential processes underlying wellformedness judgements and latencies for a specific sublexical unit that appears in Dutch compounds, the interfix. Production studies have shown that the selection of interfixes in novel Dutch compounds and the speed of this selection is primarily determined by the distribution of interfixes in existing compounds that share the left constituent with the target compound, i.e. the "left constituent family". In this paper, we consider the question whether constituent families also affect wellformedness decisions of novel as well as existing Dutch compounds in comprehension. We visually presented compounds containing interfixes that were either in line with the bias of the left constituent family or not. In the case of existing compounds, we also presented variants with replaced interfixes. As in production, the bias of the left constituent family emerged as a crucial predictor for both acceptance rates and response latencies. This result supports the hypothesis that, as in production, constituent families are (co-)activated in comprehension. We argue that this co-activation is part of a supralexical inferential process, and we discuss how our data might be interpreted within sublexical and supralexical theories of morphological processing.
  • Krott, A., Libben, G., Jarema, G., Dressler, W., Schreuder, R., & Baayen, R. H. (2004). Probability in the grammar of German and Dutch: Interfixation in triconstituent compounds. Language and Speech, 47(1), 83-106.

    Abstract

    This study addresses the possibility that interfixes in multiconstituent nominal compounds in German and Dutch are functional as markers of immediate constituent structure. We report a lexical statistical survey of interfixation in the lexicons of German and Dutch which shows that all interfixes of German and one interfix of Dutch are significantly more likely to appear at the major constituent boundary than expected under chance conditions. A series of experiments provides evidence that speakers of German and Dutch are sensitive to the probabilistic cues to constituent structure provided by the interfixes. Thus, our data provide evidence that probability is part and parcel of grammatical competence.
  • Kunert, R., & Slevc, L. R. (2015). A commentary on: “Neural overlap in processing music and speech”. Frontiers in Human Neuroscience, 9: 330. doi:10.3389/fnhum.2015.00330.
  • Kunert, R., Willems, R. M., Casasanto, D., Patel, A. D., & Hagoort, P. (2015). Music and language syntax interact in Broca’s Area: An fMRI study. PLoS One, 10(11): e0141069. doi:10.1371/journal.pone.0141069.

    Abstract

    Instrumental music and language are both syntactic systems, employing complex, hierarchically-structured sequences built using implicit structural norms. This organization allows listeners to understand the role of individual words or tones in the context of an unfolding sentence or melody. Previous studies suggest that the brain mechanisms of syntactic processing may be partly shared between music and language. However, functional neuroimaging evidence for anatomical overlap of brain activity involved in linguistic and musical syntactic processing has been lacking. In the present study we used functional magnetic resonance imaging (fMRI) in conjunction with an interference paradigm based on sung sentences. We show that the processing demands of musical syntax (harmony) and language syntax interact in Broca’s area in the left inferior frontal gyrus (without leading to music and language main effects). A language main effect in Broca’s area only emerged in the complex music harmony condition, suggesting that (with our stimuli and tasks) a language effect only becomes visible under conditions of increased demands on shared neural resources. In contrast to previous studies, our design allows us to rule out that the observed neural interaction is due to: (1) general attention mechanisms, as a psychoacoustic auditory anomaly behaved unlike the harmonic manipulation, (2) error processing, as the language and the music stimuli contained no structural errors. The current results thus suggest that two different cognitive domains—music and language—might draw on the same high level syntactic integration resources in Broca’s area.
  • Ladd, D. R., Roberts, S. G., & Dediu, D. (2015). Correlational studies in typological and historical linguistics. Annual Review of Linguistics, 1, 221-241. doi:10.1146/annurev-linguist-030514-124819.

    Abstract

    We review a number of recent studies that have identified either correlations between different linguistic features (e.g., implicational universals) or correlations between linguistic features and nonlinguistic properties of speakers or their environment (e.g., effects of geography on vocabulary). We compare large-scale quantitative studies with more traditional theoretical and historical linguistic research and identify divergent assumptions and methods that have led linguists to be skeptical of correlational work. We also attempt to demystify statistical techniques and point out the importance of informed critiques of the validity of statistical approaches. Finally, we describe various methods used in recent correlational studies to deal with the fact that, because of contact and historical relatedness, individual languages in a sample rarely represent independent data points, and we show how these methods may allow us to explore linguistic prehistory to a greater time depth than is possible with orthodox comparative reconstruction.
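    One family of methods the review covers handles the non-independence of related languages by adding grouping structure to the statistical model. As a hedged illustration (not taken from the review; the data, variable names, and the choice of language family as the grouping factor are assumptions), a mixed-effects model in Python might look like this:

      # Sketch: testing a feature-feature correlation while treating language
      # family as a random effect, one common way to respect non-independence.
      # Toy data invented for illustration; real analyses use many more languages.
      import pandas as pd
      import statsmodels.formula.api as smf

      df = pd.DataFrame({
          "feature_x": [0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0],
          "feature_y": [0.2, 0.9, 0.8, 0.1, 0.7, 0.3, 0.9, 0.6, 0.2, 0.4, 0.8, 0.1],
          "family": ["IE", "IE", "IE", "IE", "ST", "ST", "ST", "ST", "AA", "AA", "AA", "AA"],
      })

      # Random intercepts per family absorb shared inheritance, so the fixed
      # effect of feature_x is not driven by a few large families.
      model = smf.mixedlm("feature_y ~ feature_x", data=df, groups=df["family"])
      print(model.fit().summary())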
  • Ladd, D. R., & Dediu, D. (2010). Reply to Järvikivi et al. (2010) [Web log message]. PLoS One. Retrieved from http://www.plosone.org/article/comments/info%3Adoi%2F10.1371%2Fjournal.pone.0012603.
  • Lai, V. T., & Curran, T. (2015). Erratum to “ERP evidence for conceptual mappings and comparison processes during the comprehension of conventional and novel metaphors” [Brain Lang. 127 (3) (2013) 484–496]. Brain and Language, 149, 148-150. doi:10.1016/j.bandl.2014.11.001.
  • Lai, V. T., van Dam, W., Conant, L. L., Binder, J. R., & Desai, R. H. (2015). Familiarity differentially affects right hemisphere contributions to processing metaphors and literals. Frontiers in Human Neuroscience, 9: 44. doi:10.3389/fnhum.2015.00044.

    Abstract

    The role of the two hemispheres in processing metaphoric language is controversial. While some studies have reported a special role of the right hemisphere (RH) in processing metaphors, others indicate no difference in laterality relative to literal language. Some studies have found a role of the RH for novel/unfamiliar metaphors, but not conventional/familiar metaphors. It is not clear, however, whether the role of the RH is specific to metaphor novelty, or whether it reflects processing, reinterpretation or reanalysis of novel/unfamiliar language in general. Here we used functional magnetic resonance imaging (fMRI) to examine the effects of familiarity in both metaphoric and non-metaphoric sentences. A left lateralized network containing the middle and inferior frontal gyri, posterior temporal regions in the left hemisphere (LH), and inferior frontal regions in the RH, was engaged across both metaphoric and non-metaphoric sentences; engagement of this network decreased as familiarity decreased. No region was engaged selectively for greater metaphoric unfamiliarity. An analysis of laterality, however, showed that the contribution of the RH relative to that of the LH does increase in a metaphor-specific manner as familiarity decreases. These results show that RH regions, taken by themselves, including commonly reported regions such as the right inferior frontal gyrus (IFG), are responsive to increased cognitive demands of processing unfamiliar stimuli, rather than being metaphor-selective. The division of labor between the two hemispheres, however, does shift towards the right for metaphoric processing. The shift results not because the RH contributes more to metaphoric processing, but because, relative to its contribution to processing literals, the LH contributes less.
  • Lai, V. T., Willems, R. M., & Hagoort, P. (2015). Feel between the Lines: Implied emotion from combinatorial semantics. Journal of Cognitive Neuroscience, 27(8), 1528-1541. doi:10.1162/jocn_a_00798.

    Abstract

    This study investigated the brain regions for the comprehension of implied emotion in sentences. Participants read negative sentences without negative words, for example, “The boy fell asleep and never woke up again,” and their neutral counterparts “The boy stood up and grabbed his bag.” This kind of negative sentence allows us to examine implied emotion derived at the sentence level, without associative emotion coming from word retrieval. We found that implied emotion in sentences, relative to neutral sentences, led to activation in some emotion-related areas, including the medial prefrontal cortex, the amygdala, and the insula, as well as certain language-related areas, including the inferior frontal gyrus, which has been implicated in combinatorial processing. These results suggest that the emotional network involved in implied emotion is intricately related to the network for combinatorial processing in language, supporting the view that sentence meaning is more than simply concatenating the meanings of its lexical building blocks.
  • Lai, C. S. L., Gerrelli, D., Monaco, A. P., Fisher, S. E., & Copp, A. J. (2003). FOXP2 expression during brain development coincides with adult sites of pathology in a severe speech and language disorder. Brain, 126(11), 2455-2462. doi:10.1093/brain/awg247.

    Abstract

    Disruption of FOXP2, a gene encoding a forkhead-domain transcription factor, causes a severe developmental disorder of verbal communication, involving profound articulation deficits, accompanied by linguistic and grammatical impairments. Investigation of the neural basis of this disorder has been limited previously to neuroimaging of affected children and adults. The discovery of the gene responsible, FOXP2, offers a unique opportunity to explore the relevant neural mechanisms from a molecular perspective. In the present study, we have determined the detailed spatial and temporal expression pattern of FOXP2 mRNA in the developing brain of mouse and human. We find expression in several structures including the cortical plate, basal ganglia, thalamus, inferior olives and cerebellum. These data support a role for FOXP2 in the development of corticostriatal and olivocerebellar circuits involved in motor control. We find intriguing concordance between regions of early expression and later sites of pathology suggested by neuroimaging. Moreover, the homologous pattern of FOXP2/Foxp2 expression in human and mouse argues for a role for this gene in development of motor-related circuits throughout mammalian species. Overall, this study provides support for the hypothesis that impairments in sequencing of movement and procedural learning might be central to the FOXP2-related speech and language disorder.
  • Lam, K. J. Y., Dijkstra, T., & Rueschemeyer, S.-A. (2015). Feature activation during word recognition: action, visual, and associative-semantic priming effects. Frontiers in Psychology, 6: 659. doi:10.3389/fpsyg.2015.00659.

    Abstract

    Embodied theories of language postulate that language meaning is stored in modality-specific brain areas generally involved in perception and action in the real world. However, the temporal dynamics of the interaction between modality-specific information and lexical-semantic processing remain unclear. We investigated the relative timing at which two types of modality-specific information (action-based and visual-form information) contribute to lexical-semantic comprehension. To this end, we applied a behavioral priming paradigm in which prime and target words were related with respect to (1) action features, (2) visual features, or (3) semantically associative information. Using a Go/No-Go lexical decision task, priming effects were measured across four different inter-stimulus intervals (ISI = 100, 250, 400, and 1000 ms) to determine the relative time course of the different features. Notably, action priming effects were found in ISIs of 100, 250, and 1000 ms whereas a visual priming effect was seen only in the ISI of 1000 ms. Importantly, our data suggest that features follow different time courses of activation during word recognition. In this regard, feature activation is dynamic, measurable in specific time windows but not in others. Thus the current study (1) demonstrates how multiple ISIs can be used within an experiment to help chart the time course of feature activation and (2) provides new evidence for embodied theories of language.
  • Lam, K. J. Y., & Dijkstra, T. (2010). Word repetition, masked orthographic priming, and language switching: Bilingual studies and BIA+ simulations. International Journal of Bilingual Education and Bilingualism, 13, 487-503. doi:10.1080/13670050.2010.488283.

    Abstract

    Daily conversations contain many repetitions of identical and similar word forms. For bilinguals, the words can even come from the same or different languages. How do such repetitions affect the human word recognition system? The Bilingual Interactive Activation Plus (BIA+) model provides a theoretical and computational framework for understanding word recognition and word repetition in bilinguals. The model assumes that both phenomena involve a language non-selective process that is sensitive to the task context. By means of computer simulations, the model can specify both qualitatively and quantitatively how bilingual lexical processing in one language is affected by the other language. Our review discusses how BIA+ handles cross-linguistic repetition and masked orthographic priming data from two key empirical studies. We show that BIA+ can account for repetition priming effects within- and between-languages through the manipulation of resting-level activations of targets and neighbors (words sharing all but one letter with the target). The model also predicts cross-linguistic performance on within- and between-trial orthographic priming without appealing to conscious strategies or task schema competition as an explanation. At the end of the paper, we briefly evaluate the model and indicate future developments.
  • Lammertink, I., Casillas, M., Benders, T., Post, B., & Fikkert, P. (2015). Dutch and English toddlers' use of linguistic cues in predicting upcoming turn transitions. Frontiers in Psychology, 6: 495. doi:10.3389/fpsyg.2015.00495.
  • De Lange, F. P., Kalkman, J. S., Bleijenberg, G., Hagoort, P., Van der Werf, S. P., Van der Meer, J. W. M., & Toni, I. (2004). Neural correlates of the chronic fatigue syndrome: An fMRI study. Brain, 127(9), 1948-1957. doi:10.1093/brain/awh225.

    Abstract

    Chronic fatigue syndrome (CFS) is characterized by a debilitating fatigue of unknown aetiology. Patients who suffer from CFS report a variety of physical complaints as well as neuropsychological complaints. Therefore, it is conceivable that the CNS plays a role in the pathophysiology of CFS. The purpose of this study was to investigate neural correlates of CFS, and specifically whether there exists a linkage between disturbances in the motor system and CFS. We measured behavioural performance and cerebral activity using rapid event-related functional MRI in 16 CFS patients and 16 matched healthy controls while they were engaged in a motor imagery task and a control visual imagery task. CFS patients were considerably slower on performance of both tasks, but the increase in reaction time with increasing task load was similar between the groups. Both groups used largely overlapping neural resources. However, during the motor imagery task, CFS patients evoked stronger responses in visually related structures. Furthermore, there was a marked between-groups difference during erroneous performance. In both groups, dorsal anterior cingulate cortex was specifically activated during error trials. Conversely, ventral anterior cingulate cortex was active when healthy controls made an error, but remained inactive when CFS patients made an error. Our results support the notion that CFS may be associated with dysfunctional motor planning. Furthermore, the between-groups differences observed during erroneous performance point to motivational disturbances as a crucial component of CFS.
  • Lartseva, A., Dijkstra, T., & Buitelaar, J. (2015). Emotional language processing in Autism Spectrum Disorders: A systematic review. Frontiers in Human Neuroscience, 8: 991. doi:10.3389/fnhum.2014.00991.

    Abstract

    In his first description of Autism Spectrum Disorders (ASD), Kanner emphasized emotional impairments by characterizing children with ASD as indifferent to other people, self-absorbed, emotionally cold, distanced, and retracted. Thereafter, emotional impairments became regarded as part of the social impairments of ASD, and research mostly focused on understanding how individuals with ASD recognize visual expressions of emotions from faces and body postures. However, it still remains unclear how emotions are processed outside of the visual domain. This systematic review aims to fill this gap by focusing on impairments of emotional language processing in ASD.
    We systematically searched PubMed for papers published between 1990 and 2013 using standardized search terms. Studies show that people with ASD are able to correctly classify emotional language stimuli as emotionally positive or negative. However, processing of emotional language stimuli in ASD is associated with atypical patterns of attention and memory performance, as well as abnormal physiological and neural activity. Particularly, younger children with ASD have difficulties in acquiring and developing emotional concepts, and avoid using these in discourse. These emotional language impairments were not consistently associated with age, IQ, or level of development of language skills.
    We discuss how emotional language impairments fit with existing cognitive theories of ASD, such as central coherence, executive dysfunction, and weak Theory of Mind. We conclude that emotional impairments in ASD may be broader than just a mere consequence of social impairments, and should receive more attention in future research.
  • Lausberg, H., Cruz, R. F., Kita, S., Zaidel, E., & Ptito, A. (2003). Pantomime to visual presentation of objects: Left hand dyspraxia in patients with complete callosotomy. Brain, 126(2), 343-360. doi:10.1093/brain/awg042.

    Abstract

    Investigations of left hand praxis in imitation and object use in patients with callosal disconnection have yielded divergent results, inducing a debate between two theoretical positions. Whereas Liepmann suggested that the left hemisphere is motor dominant, others maintain that both hemispheres have equal motor competences and propose that left hand apraxia in patients with callosal disconnection is secondary to left hemispheric specialization for language or other task modalities. The present study aims to gain further insight into the motor competence of the right hemisphere by investigating pantomime of object use in split-brain patients. Three patients with complete callosotomy and, as control groups, five patients with partial callosotomy and nine healthy subjects were examined for their ability to pantomime object use to visual object presentation and demonstrate object manipulation. In each condition, 11 objects were presented to the subjects who pantomimed or demonstrated the object use with either hand. In addition, six object pairs were presented to test bimanual coordination. Two independent raters evaluated the videotaped movement demonstrations. While object use demonstrations were perfect in all three groups, the split-brain patients displayed apraxic errors only with their left hands in the pantomime condition. The movement analysis of concept and execution errors included the examination of ipsilateral versus contralateral motor control. As the right hand/left hemisphere performances demonstrated retrieval of the correct movement concepts, concept errors by the left hand were taken as evidence for right hemisphere control. Several types of execution errors reflected a lack of distal motor control indicating the use of ipsilateral pathways. While one split-brain patient controlled his left hand predominantly by ipsilateral pathways in the pantomime condition, the error profile in the other two split-brain patients suggested that the right hemisphere controlled their left hands. In the object use condition, in all three split-brain patients fine-graded distal movements in the left hand indicated right hemispheric control. Our data show that left hand apraxia in split-brain patients is not limited to verbal commands, but also occurs in pantomime to visual presentation of objects. As the demonstration with object in hand was unimpaired in either hand, both hemispheres must contain movement concepts for object use. However, the disconnected right hemisphere is impaired in retrieving the movement concept in response to visual object presentation, presumably because of a deficit in associating perceptual object representation with the movement concepts.
  • Lausberg, H., Kita, S., Zaidel, E., & Ptito, A. (2003). Split-brain patients neglect left personal space during right-handed gestures. Neuropsychologia, 41(10), 1317-1329. doi:10.1016/S0028-3932(03)00047-2.

    Abstract

    Since some patients with right hemisphere damage or with spontaneous callosal disconnection neglect the left half of space, it has been suggested that the left cerebral hemisphere predominantly attends to the right half of space. However, clinical investigations of patients having undergone surgical callosal section have not shown neglect when the hemispheres are tested separately. These observations question the validity of theoretical models that propose a left hemispheric specialisation for attending to the right half of space. The present study aims to investigate neglect and the use of space by either hand in gestural demonstrations in three split-brain patients as compared to five patients with partial callosotomy and 11 healthy subjects. Subjects were asked to demonstrate with precise gestures and without speaking the content of animated scenes with two moving objects. The results show that in the absence of primary perceptual or representational neglect, split-brain patients neglect left personal space in right-handed gestural demonstrations. Since this neglect of left personal space cannot be explained by directional or spatial akinesia, it is suggested that it originates at the conceptual level, where the spatial coordinates for right-hand gestures are planned. The present findings are at odds with the position that the separate left hemisphere possesses adequate mechanisms for acting in both halves of space and neglect results from right hemisphere suppression of this potential. Rather, the results provide support for theoretical models that consider the left hemisphere as specialised for processing the right half of space during the execution of descriptive gestures.
  • Lausberg, H., & Kita, S. (2003). The content of the message influences the hand choice in co-speech gestures and in gesturing without speaking. Brain and Language, 86(1), 57-69. doi:10.1016/S0093-934X(02)00534-5.

    Abstract

    The present study investigates the hand choice in iconic gestures that accompany speech. In 10 right-handed subjects gestures were elicited by verbal narration and by silent gestural demonstrations of animations with two moving objects. In both conditions, the left hand was used as often as the right hand to display iconic gestures. The choice of the right or left hand was determined by semantic aspects of the message. The influence of hemispheric language lateralization on the hand choice in co-speech gestures appeared to be minor. Instead, speaking seemed to induce a sequential organization of the iconic gestures.
  • Lecumberri, M. L. G., Cooke, M., & Cutler, A. (Eds.). (2010). Non-native speech perception in adverse conditions [Special Issue]. Speech Communication, 52(11/12).
  • Lecumberri, M. L. G., Cooke, M., & Cutler, A. (2010). Non-native speech perception in adverse conditions: A review. Speech Communication, 52, 864-886. doi:10.1016/j.specom.2010.08.014.

    Abstract

    If listening in adverse conditions is hard, then listening in a foreign language is doubly so: non-native listeners have to cope with both imperfect signals and imperfect knowledge. Comparison of native and non-native listener performance in speech-in-noise tasks helps to clarify the role of prior linguistic experience in speech perception, and, more directly, contributes to an understanding of the problems faced by language learners in everyday listening situations. This article reviews experimental studies on non-native listening in adverse conditions, organised around three principal contributory factors: the task facing listeners, the effect of adverse conditions on speech, and the differences among listener populations. Based on a comprehensive tabulation of key studies, we identify robust findings, research trends and gaps in current knowledge.
  • Lee, S. A., Ferrari, A., Vallortigara, G., & Sovrano, V. A. (2015). Boundary primacy in spatial mapping: Evidence from zebrafish (Danio rerio). Behavioural Processes, 119, 116-122. doi:10.1016/j.beproc.2015.07.012.

    Abstract

    The ability to map locations in the surrounding environment is crucial for any navigating animal. Decades of research on mammalian spatial representations suggest that environmental boundaries play a major role in both navigation behavior and hippocampal place coding. Although the capacity for spatial mapping is shared among vertebrates, including birds and fish, it is not yet clear whether such similarities in competence reflect common underlying mechanisms. The present study tests cue specificity in spatial mapping in zebrafish, by probing their use of various visual cues to encode the location of a nearby conspecific. The results suggest that untrained zebrafish, like other vertebrates tested so far, rely primarily on environmental boundaries to compute spatial relationships and, at the same time, use other visible features such as surface markings and freestanding objects as local cues to goal locations. We propose that the pattern of specificity in spontaneous spatial mapping behavior across vertebrates reveals cross-species commonalities in its underlying neural representations.
  • Lev-Ari, S. (2015). Comprehending non-native speakers: Theory and evidence for adjustment in manner of processing. Frontiers in Psychology, 5: 1546. doi:10.3389/fpsyg.2014.01546.

    Abstract

    Non-native speakers have lower linguistic competence than native speakers, which renders their language less reliable in conveying their intentions. We suggest that expectations of lower competence lead listeners to adapt their manner of processing when they listen to non-native speakers. We propose that listeners use cognitive resources to adjust by increasing their reliance on top-down processes and extracting less information from the language of the non-native speaker. An eye-tracking study supports our proposal by showing that when following instructions by a non-native speaker, listeners make more contextually-induced interpretations. Those with relatively high working memory also increase their reliance on context to anticipate the speaker’s upcoming reference, and are less likely to notice lexical errors in the non-native speech, indicating that they take less information from the speaker’s language. These results contribute to our understanding of the flexibility in language processing and have implications for interactions between native and non-native speakers.

    Additional information

    Data Sheet 1.docx
  • Lev-Ari, S., & Keysar, B. (2010). Why don't we believe non-native speakers? The influence of accent on credibility. Journal of Experimental Social Psychology, 46(6), 1093-1096. doi:10.1016/j.jesp.2010.05.025.

    Abstract

    Non-native speech is harder to understand than native speech. We demonstrate that this “processing difficulty” causes non-native speakers to sound less credible. People judged trivia statements such as “Ants don't sleep” as less true when spoken by a non-native than a native speaker. When people were made aware of the source of their difficulty, they were able to correct when the accent was mild but not when it was heavy. This effect was not due to stereotypes of prejudice against foreigners because it occurred even though speakers were merely reciting statements provided by a native speaker. Such reduction of credibility may have an insidious impact on millions of people, who routinely communicate in a language which is not their native tongue.
  • Levelt, W. J. M., Meyer, A. S., & Roelofs, A. (2004). Relations of lexical access to neural implementation and syntactic encoding [author's response]. Behavioral and Brain Sciences, 27, 299-301. doi:10.1017/S0140525X04270078.

    Abstract

    How can one conceive of the neuronal implementation of the processing model we proposed in our target article? In his commentary (Pulvermüller 1999, reprinted here in this issue), Pulvermüller makes various proposals concerning the underlying neural mechanisms and their potential localizations in the brain. These proposals demonstrate the compatibility of our processing model and current neuroscience. We add further evidence on details of localization based on a recent meta-analysis of neuroimaging studies of word production (Indefrey & Levelt 2000). We also express some minor disagreements with respect to Pulvermüller’s interpretation of the “lemma” notion, and concerning his neural modeling of phonological code retrieval. Branigan & Pickering discuss important aspects of syntactic encoding, which was not the topic of the target article. We discuss their well-taken proposal that multiple syntactic frames for a single verb lemma are represented as independent nodes, which can be shared with other verbs, thus accounting for syntactic priming in speech production. We also discuss how, in principle, the alternative multiple-frame-multiple-lemma account can be tested empirically. The available evidence does not seem to support that account.
  • Levelt, W. J. M. (2004). Speech, gesture and the origins of language. European Review, 12(4), 543-549. doi:10.1017/S1062798704000468.

    Abstract

    During the second half of the 19th century, the psychology of language was invented as a discipline for the sole purpose of explaining the evolution of spoken language. These efforts culminated in Wilhelm Wundt’s monumental Die Sprache of 1900, which outlined the psychological mechanisms involved in producing utterances and considered how these mechanisms could have evolved. Wundt assumes that articulatory movements were originally rather arbitrary concomitants of larger, meaningful expressive bodily gestures. The sounds such articulations happened to produce slowly acquired the meaning of the gesture as a whole, ultimately making the gesture superfluous. Over a century later, gestural theories of language origins still abound. I argue that such theories are unlikely and wasteful, given the biological, neurological and genetic evidence.
