Publications

  • Klein, W. (2000). Was uns die Sprache des Rechts über die Sprache sagt. Zeitschrift für Literaturwissenschaft und Linguistik, (118), 115-149.
  • Klein, W. (2015). Von den Werken der Sprache. Stuttgart: Verlag J.B. Metzler.
  • Klein, W. (1998). Von der einfältigen Wißbegierde. Zeitschrift für Literaturwissenschaft und Linguistik, 112, 6-13.
  • Knudsen, B., Fischer, M., & Aschersleben, G. (2015). The development of Arabic digit knowledge in 4- to 7-year-old children. Journal of Numerical Cognition, 1(1), 21-37. doi:10.5964/jnc.v1i1.4.

    Abstract

    Recent studies indicate that Arabic digit knowledge rather than non-symbolic number knowledge is a key foundation for arithmetic proficiency at the start of a child’s mathematical career. We document the developmental trajectory of 4- to 7-year-olds’ proficiency in accessing magnitude information from Arabic digits in five tasks differing in magnitude manipulation requirements. Results showed that children from 5 years onwards accessed magnitude information implicitly and explicitly, but that 5-year-olds failed to access magnitude information explicitly when numerical magnitude was contrasted with physical magnitude. Performance across tasks revealed a clear developmental trajectory: children traverse from first knowing the cardinal values of number words to recognizing Arabic digits to knowing their cardinal values and, concurrently, their ordinal position. Correlational analyses showed a strong within-child consistency, demonstrating that this pattern is not only reflected in group differences but also in individual performance.
  • Koch, X., & Janse, E. (2015). Effects of age and hearing loss on articulatory precision for sibilants. In M. Wolters, J. Livingstone, B. Beattie, R. Smith, M. MacMahon, J. Stuart-Smith, & J. Scobbie (Eds.), Proceedings of the 18th International Congress of Phonetic Sciences (ICPhS 2015). London: International Phonetic Association.

    Abstract

    This study investigates the effects of adult age and speaker abilities on articulatory precision for sibilant productions. Normal-hearing young adults with better sibilant discrimination have been shown to produce greater spectral sibilant contrasts. As reduced auditory feedback may gradually impact on feedforward commands, we investigate whether articulatory precision as indexed by spectral mean for [s] and [ʃ] decreases with age, and more particularly with age-related hearing loss. Younger, middle-aged and older adults read aloud words starting with the sibilants [s] or [ʃ]. Possible effects of cognitive, perceptual, linguistic and sociolinguistic background variables on the sibilants’ acoustics were also investigated. Sibilant contrasts were less pronounced for male than female speakers. Most importantly, for the fricative [s], the spectral mean was modulated by individual high-frequency hearing loss, but not age. These results underscore that even mild hearing loss already affects articulatory precision.
  • Kong, X., Liu, Z., Huang, L., Wang, X., Yang, Z., Zhou, G., Zhen, Z., & Liu, J. (2015). Mapping Individual Brain Networks Using Statistical Similarity in Regional Morphology from MRI. PLoS One, 10(11): e0141840. doi:10.1371/journal.pone.0141840.

    Abstract

    Representing brain morphology as a network has the advantage that the regional morphology of ‘isolated’ structures can be described statistically based on graph theory. However, very few studies have investigated brain morphology from the holistic perspective of complex networks, particularly in individual brains. We proposed a new network framework for individual brain morphology. Technically, in the new network, nodes are defined as regions based on a brain atlas, and edges are estimated using our newly-developed inter-regional relation measure based on regional morphological distributions. This implementation allows nodes in the brain network to be functionally/anatomically homogeneous but different with respect to shape and size. We first demonstrated the new network framework in a healthy sample. Thereafter, we studied the graph-theoretical properties of the networks obtained and compared the results with previous morphological, anatomical, and functional networks. The robustness of the method was assessed via measurement of the reliability of the network metrics using a test-retest dataset. Finally, to illustrate potential applications, the networks were used to measure age-related changes in commonly used network metrics. Results suggest that the proposed method could provide a concise description of brain organization at a network level and be used to investigate interindividual variability in brain morphology from the perspective of complex networks. Furthermore, the method could open a new window into modeling the complexly distributed brain and facilitate the emerging field of human connectomics.

    Additional information

    https://www.nitrc.org/
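
    Illustrative code sketch

    The network framework summarized in the Kong et al. abstract above can be illustrated with a small sketch. This is not the authors' implementation: nodes are atlas regions and edges carry a statistical similarity between regional morphological distributions, but the similarity measure used here (one minus the Kolmogorov-Smirnov statistic), the edge threshold, and the toy data are stand-ins chosen for illustration; the paper defines its own inter-regional relation measure.

```python
# Sketch (assumed, not the authors' code): an "individual morphological brain
# network" in the spirit of Kong et al. (2015). Nodes are atlas regions; edges
# carry a statistical similarity between the regions' morphological value
# distributions. The KS-based similarity below is an illustrative stand-in.
import numpy as np
import networkx as nx
from scipy.stats import ks_2samp

def regional_similarity(values_a, values_b):
    """Similarity in (0, 1] between two regions' voxel-wise morphological values."""
    d, _ = ks_2samp(values_a, values_b)   # distance between the two distributions
    return 1.0 - d                        # higher = more similar regional morphology

def build_individual_network(region_values, threshold=0.5):
    """region_values: dict {region_name: 1-D array of voxel values in that region}."""
    g = nx.Graph()
    g.add_nodes_from(region_values)
    names = list(region_values)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            w = regional_similarity(region_values[a], region_values[b])
            if w >= threshold:            # keep only sufficiently similar region pairs
                g.add_edge(a, b, weight=w)
    return g

# Toy data: three hypothetical atlas regions with random "morphological" values.
rng = np.random.default_rng(0)
regions = {name: rng.normal(loc, 1.0, 500) for name, loc in
           [("precentral_L", 0.0), ("precentral_R", 0.1), ("occipital_L", 2.0)]}
net = build_individual_network(regions)
# Graph-theoretical summaries of the individual network:
print(dict(nx.degree(net)), nx.average_clustering(net, weight="weight"))
```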
  • Konopka, A. E., & Kuchinsky, S. E. (2015). How message similarity shapes the timecourse of sentence formulation. Journal of Memory and Language, 84, 1-23. doi:10.1016/j.jml.2015.04.003.
  • Köster, O., Hess, M. M., Schiller, N. O., & Künzel, H. J. (1998). The correlation between auditory speech sensitivity and speaker recognition ability. Forensic Linguistics: The International Journal of Speech, Language and the Law, 5, 22-32.

    Abstract

    In various applications of forensic phonetics the question arises as to how far aural-perceptual speaker recognition performance is reliable. Therefore, it is necessary to examine the relationship between speaker recognition results and human perception/production abilities like musicality or speech sensitivity. In this study, performance in a speaker recognition experiment and a speech sensitivity test are correlated. The results show a moderately significant positive correlation between the two tasks. Generally, performance in the speaker recognition task was better than in the speech sensitivity test. Professionals in speech and singing yielded a more homogeneous correlation than non-experts. Training in speech as well as choir-singing seems to have a positive effect on performance in speaker recognition. It may be concluded, firstly, that in cases where the reliability of voice line-up results or the credibility of a testimony have to be considered, the speech sensitivity test could be a useful indicator. Secondly, the speech sensitivity test might be integrated into the canon of possible procedures for the accreditation of forensic phoneticians. Both tests may also be used in combination.
  • Krämer, I. (1998). Children's interpretations of indefinite object noun phrases. Linguistics in the Netherlands, 1998, 163-174. doi:10.1075/avt.15.15kra.
  • Krämer, I. (2000). Interpreting indefinites: An experimental study of children's language comprehension. PhD Thesis, University of Utrecht, Utrecht. doi:10.17617/2.2057626.
  • Kruspe, N., Burenhult, N., & Wnuk, E. (2015). Northern Aslian. In P. Sidwell, & M. Jenny (Eds.), Handbook of Austroasiatic Languages (pp. 419-474). Leiden: Brill.
  • Kuijpers, C. T., Coolen, R., Houston, D., & Cutler, A. (1998). Using the head-turning technique to explore cross-linguistic performance differences. In C. Rovee-Collier, L. Lipsitt, & H. Hayne (Eds.), Advances in infancy research: Vol. 12 (pp. 205-220). Stamford: Ablex.
  • Kunert, R., & Slevc, L. R. (2015). A commentary on: “Neural overlap in processing music and speech”. Frontiers in Human Neuroscience, 9: 330. doi:10.3389/fnhum.2015.00330.
  • Kunert, R., Willems, R. M., Casasanto, D., Patel, A. D., & Hagoort, P. (2015). Music and language syntax interact in Broca’s Area: An fMRI study. PLoS One, 10(11): e0141069. doi:10.1371/journal.pone.0141069.

    Abstract

    Instrumental music and language are both syntactic systems, employing complex, hierarchically-structured sequences built using implicit structural norms. This organization allows listeners to understand the role of individual words or tones in the context of an unfolding sentence or melody. Previous studies suggest that the brain mechanisms of syntactic processing may be partly shared between music and language. However, functional neuroimaging evidence for anatomical overlap of brain activity involved in linguistic and musical syntactic processing has been lacking. In the present study we used functional magnetic resonance imaging (fMRI) in conjunction with an interference paradigm based on sung sentences. We show that the processing demands of musical syntax (harmony) and language syntax interact in Broca’s area in the left inferior frontal gyrus (without leading to music and language main effects). A language main effect in Broca’s area only emerged in the complex music harmony condition, suggesting that (with our stimuli and tasks) a language effect only becomes visible under conditions of increased demands on shared neural resources. In contrast to previous studies, our design allows us to rule out that the observed neural interaction is due to: (1) general attention mechanisms, as a psychoacoustic auditory anomaly behaved unlike the harmonic manipulation, (2) error processing, as the language and the music stimuli contained no structural errors. The current results thus suggest that two different cognitive domains—music and language—might draw on the same high level syntactic integration resources in Broca’s area.
  • Ladd, D. R., Roberts, S. G., & Dediu, D. (2015). Correlational studies in typological and historical linguistics. Annual Review of Linguistics, 1, 221-241. doi:10.1146/annurev-linguist-030514-124819.

    Abstract

    We review a number of recent studies that have identified either correlations between different linguistic features (e.g., implicational universals) or correlations between linguistic features and nonlinguistic properties of speakers or their environment (e.g., effects of geography on vocabulary). We compare large-scale quantitative studies with more traditional theoretical and historical linguistic research and identify divergent assumptions and methods that have led linguists to be skeptical of correlational work. We also attempt to demystify statistical techniques and point out the importance of informed critiques of the validity of statistical approaches. Finally, we describe various methods used in recent correlational studies to deal with the fact that, because of contact and historical relatedness, individual languages in a sample rarely represent independent data points, and we show how these methods may allow us to explore linguistic prehistory to a greater time depth than is possible with orthodox comparative reconstruction.
  • Lai, V. T., & Curran, T. (2015). Erratum to “ERP evidence for conceptual mappings and comparison processes during the comprehension of conventional and novel metaphors” [Brain Lang. 127 (3) (2013) 484–496]. Brain and Language, 149, 148-150. doi:10.1016/j.bandl.2014.11.001.
  • Lai, V. T., van Dam, W., Conant, L. L., Binder, J. R., & Desai, R. H. (2015). Familiarity differentially affects right hemisphere contributions to processing metaphors and literals. Frontiers in Human Neuroscience, 9: 44. doi:10.3389/fnhum.2015.00044.

    Abstract

    The role of the two hemispheres in processing metaphoric language is controversial. While some studies have reported a special role of the right hemisphere (RH) in processing metaphors, others indicate no difference in laterality relative to literal language. Some studies have found a role of the RH for novel/unfamiliar metaphors, but not conventional/familiar metaphors. It is not clear, however, whether the role of the RH is specific to metaphor novelty, or whether it reflects processing, reinterpretation or reanalysis of novel/unfamiliar language in general. Here we used functional magnetic resonance imaging (fMRI) to examine the effects of familiarity in both metaphoric and non-metaphoric sentences. A left-lateralized network containing the middle and inferior frontal gyri, posterior temporal regions in the left hemisphere (LH), and inferior frontal regions in the RH was engaged across both metaphoric and non-metaphoric sentences; engagement of this network decreased as familiarity decreased. No region was engaged selectively for greater metaphoric unfamiliarity. An analysis of laterality, however, showed that the contribution of the RH relative to that of the LH does increase in a metaphor-specific manner as familiarity decreases. These results show that RH regions taken by themselves, including commonly reported regions such as the right inferior frontal gyrus (IFG), are responsive to increased cognitive demands of processing unfamiliar stimuli, rather than being metaphor-selective. The division of labor between the two hemispheres, however, does shift towards the right for metaphoric processing. The shift results not because the RH contributes more to metaphoric processing, but because, relative to its contribution for processing literals, the LH contributes less.
  • Lai, V. T., Willems, R. M., & Hagoort, P. (2015). Feel between the Lines: Implied emotion from combinatorial semantics. Journal of Cognitive Neuroscience, 27(8), 1528-1541. doi:10.1162/jocn_a_00798.

    Abstract

    This study investigated the brain regions for the comprehension of implied emotion in sentences. Participants read negative sentences without negative words, for example, “The boy fell asleep and never woke up again,” and their neutral counterparts “The boy stood up and grabbed his bag.” This kind of negative sentence allows us to examine implied emotion derived at the sentence level, without associative emotion coming from word retrieval. We found that implied emotion in sentences, relative to neutral sentences, led to activation in some emotion-related areas, including the medial prefrontal cortex, the amygdala, and the insula, as well as certain language-related areas, including the inferior frontal gyrus, which has been implicated in combinatorial processing. These results suggest that the emotional network involved in implied emotion is intricately related to the network for combinatorial processing in language, supporting the view that sentence meaning is more than simply concatenating the meanings of its lexical building blocks.
  • Lai, C. S. L., Fisher, S. E., Hurst, J. A., Levy, E. R., Hodgson, S., Fox, M., Jeremiah, S., Povey, S., Jamison, D. C., Green, E. D., Vargha-Khadem, F., & Monaco, A. P. (2000). The SPCH1 region on human 7q31: Genomic characterization of the critical interval and localization of translocations associated with speech and language disorder. American Journal of Human Genetics, 67(2), 357-368. doi:10.1086/303011.

    Abstract

    The KE family is a large three-generation pedigree in which half the members are affected with a severe speech and language disorder that is transmitted as an autosomal dominant monogenic trait. In previously published work, we localized the gene responsible (SPCH1) to a 5.6-cM region of 7q31 between D7S2459 and D7S643. In the present study, we have employed bioinformatic analyses to assemble a detailed BAC-/PAC-based sequence map of this interval, containing 152 sequence tagged sites (STSs), 20 known genes, and >7.75 Mb of completed genomic sequence. We screened the affected chromosome 7 from the KE family with 120 of these STSs (average spacing <100 kb), but we did not detect any evidence of a microdeletion. Novel polymorphic markers were generated from the sequence and were used to further localize critical recombination breakpoints in the KE family. This allowed refinement of the SPCH1 interval to a region between new markers 013A and 330B, containing ∼6.1 Mb of completed sequence. In addition, we have studied two unrelated patients with a similar speech and language disorder, who have de novo translocations involving 7q31. Fluorescence in situ hybridization analyses with BACs/PACs from the sequence map localized the t(5;7)(q22;q31.2) breakpoint in the first patient (CS) to a single clone within the newly refined SPCH1 interval. This clone contains the CAGH44 gene, which encodes a brain-expressed protein containing a large polyglutamine stretch. However, we found that the t(2;7)(p23;q31.3) breakpoint in the second patient (BRD) resides within a BAC clone mapping >3.7 Mb distal to this, outside the current SPCH1 critical interval. Finally, we investigated the CAGH44 gene in affected individuals of the KE family, but we found no mutations in the currently known coding sequence. These studies represent further steps toward the isolation of the first gene to be implicated in the development of speech and language.
  • Lai, V. T., & Narasimhan, B. (2015). Verb representation and thinking-for-speaking effects in Spanish-English bilinguals. In R. G. De Almeida, & C. Manouilidou (Eds.), Cognitive science perspectives on verb representation and processing (pp. 235-256). Cham: Springer.

    Abstract

    Speakers of English habitually encode motion events using manner-of-motion verbs (e.g., spin, roll, slide) whereas Spanish speakers rely on path-of-motion verbs (e.g., enter, exit, approach). Here, we ask whether the language-specific verb representations used in encoding motion events induce different modes of “thinking-for-speaking” in Spanish–English bilinguals. That is, assuming that the verb encodes the most salient information in the clause, do bilinguals find the path of motion to be more salient than manner of motion if they had previously described the motion event using Spanish versus English? In our study, Spanish–English bilinguals described a set of target motion events in either English or Spanish and then participated in a nonlinguistic similarity judgment task in which they viewed the target motion events individually (e.g., a ball rolling into a cave) followed by two variants (a “same-path” variant such as a ball sliding into a cave or a “same-manner” variant such as a ball rolling away from a cave). Participants had to select one of the two variants that they judged to be more similar to the target event: the event that shared the same path of motion as the target versus the one that shared the same manner of motion. Our findings show that bilingual speakers were more likely to classify two motion events as being similar if they shared the same path of motion and if they had previously described the target motion events in Spanish versus in English. Our study provides further evidence for the “thinking-for-speaking” hypothesis by demonstrating that bilingual speakers can flexibly shift between language-specific construals of the same event “on-the-fly.”
  • Lam, K. J. Y., Dijkstra, T., & Rueschemeyer, S.-A. (2015). Feature activation during word recognition: action, visual, and associative-semantic priming effects. Frontiers in Psychology, 6: 659. doi:10.3389/fpsyg.2015.00659.

    Abstract

    Embodied theories of language postulate that language meaning is stored in modality-specific brain areas generally involved in perception and action in the real world. However, the temporal dynamics of the interaction between modality-specific information and lexical-semantic processing remain unclear. We investigated the relative timing at which two types of modality-specific information (action-based and visual-form information) contribute to lexical-semantic comprehension. To this end, we applied a behavioral priming paradigm in which prime and target words were related with respect to (1) action features, (2) visual features, or (3) semantically associative information. Using a Go/No-Go lexical decision task, priming effects were measured across four different inter-stimulus intervals (ISI = 100, 250, 400, and 1000 ms) to determine the relative time course of the different features. Notably, action priming effects were found in ISIs of 100, 250, and 1000 ms whereas a visual priming effect was seen only in the ISI of 1000 ms. Importantly, our data suggest that features follow different time courses of activation during word recognition. In this regard, feature activation is dynamic, measurable in specific time windows but not in others. Thus the current study (1) demonstrates how multiple ISIs can be used within an experiment to help chart the time course of feature activation and (2) provides new evidence for embodied theories of language.
  • Lammertink, I., Casillas, M., Benders, T., Post, B., & Fikkert, P. (2015). Dutch and English toddlers' use of linguistic cues in predicting upcoming turn transitions. Frontiers in Psychology, 6: 495. doi:10.3389/fpsyg.2015.00495.
  • Lansner, A., Sandberg, A., Petersson, K. M., & Ingvar, M. (2000). On forgetful attractor network memories. In H. Malmgren, M. Borga, & L. Niklasson (Eds.), Artificial neural networks in medicine and biology: Proceedings of the ANNIMAB-1 Conference, Göteborg, Sweden, 13-16 May 2000 (pp. 54-62). Heidelberg: Springer Verlag.

    Abstract

    A recurrently connected attractor neural network with a Hebbian learning rule is currently our best ANN analogy for a piece of cortex. Functionally, biological memory operates on a spectrum of time scales with regard to induction and retention, and it is modulated in complex ways by sub-cortical neuromodulatory systems. Moreover, biological memory networks are commonly believed to be highly distributed and to engage many co-operating cortical areas. Here we focus on the temporal aspects of induction and retention of memory in a connectionist-type attractor memory model of a piece of cortex. A continuous-time, forgetful Bayesian-Hebbian learning rule is described and compared to the characteristics of LTP and LTD seen experimentally. More generally, an attractor network implementing this learning rule can operate as a long-term, intermediate-term, or short-term memory. Modulation of the print-now signal of the learning rule replicates some experimental memory phenomena, such as the von Restorff effect.
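
    Illustrative code sketch

    As a rough illustration of the "forgetful" learning idea in the abstract above, the sketch below implements an exponentially decaying Hebbian outer-product update for a binary attractor network. It is a simplified stand-in, not the Bayesian-Hebbian (BCPNN) rule of the paper; the time constant tau, the print_now gain, and the toy patterns are assumptions for illustration. Changing tau makes the same network behave as a short-, intermediate-, or long-term memory, and print_now gates how strongly the current pattern is imprinted.

```python
# Minimal sketch (assumed, simplified): a forgetful Hebbian rule for a binary
# attractor network. Old correlations decay with time constant tau; the
# print-now signal scales how strongly the current pattern is written in.
import numpy as np

def forgetful_hebbian_step(w, pattern, tau=50.0, print_now=1.0):
    """Decay the old weights, then imprint the new pattern's co-activations."""
    hebb = np.outer(pattern, pattern)     # Hebbian co-activation term
    np.fill_diagonal(hebb, 0.0)           # no self-connections
    decay = np.exp(-1.0 / tau)            # per-step forgetting factor
    return decay * w + print_now * (1.0 - decay) * hebb

def recall(w, cue, steps=10):
    """Iterate sign(W x) from a noisy cue to retrieve the nearest attractor."""
    x = cue.copy()
    for _ in range(steps):
        x = np.sign(w @ x)
        x[x == 0] = 1
    return x

rng = np.random.default_rng(1)
patterns = rng.choice([-1.0, 1.0], size=(5, 64))   # five random binary patterns
w = np.zeros((64, 64))
for p in patterns:                                  # store them one after another
    w = forgetful_hebbian_step(w, p, tau=50.0)

cue = patterns[-1].copy()
cue[:8] *= -1                                       # corrupt part of the newest pattern
print(np.mean(recall(w, cue) == patterns[-1]))      # recent memories are recalled best
```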
  • Lartseva, A., Dijkstra, T., & Buitelaar, J. (2015). Emotional language processing in Autism Spectrum Disorders: A systematic review. Frontiers in Human Neuroscience, 8: 991. doi:10.3389/fnhum.2014.00991.

    Abstract

    In his first description of Autism Spectrum Disorders (ASD), Kanner emphasized emotional impairments by characterizing children with ASD as indifferent to other people, self-absorbed, emotionally cold, distanced, and retracted. Thereafter, emotional impairments became regarded as part of the social impairments of ASD, and research mostly focused on understanding how individuals with ASD recognize visual expressions of emotions from faces and body postures. However, it still remains unclear how emotions are processed outside of the visual domain. This systematic review aims to fill this gap by focusing on impairments of emotional language processing in ASD.
    We systematically searched PubMed for papers published between 1990 and 2013 using standardized search terms. Studies show that people with ASD are able to correctly classify emotional language stimuli as emotionally positive or negative. However, processing of emotional language stimuli in ASD is associated with atypical patterns of attention and memory performance, as well as abnormal physiological and neural activity. Particularly, younger children with ASD have difficulties in acquiring and developing emotional concepts, and avoid using these in discourse. These emotional language impairments were not consistently associated with age, IQ, or level of development of language skills.
    We discuss how emotional language impairments fit with existing cognitive theories of ASD, such as central coherence, executive dysfunction, and weak Theory of Mind. We conclude that emotional impairments in ASD may be broader than just a mere consequence of social impairments, and should receive more attention in future research.
  • Lee, S. A., Ferrari, A., Vallortigara, G., & Sovrano, V. A. (2015). Boundary primacy in spatial mapping: Evidence from zebrafish (Danio rerio). Behavioural Processes, 119, 116-122. doi:10.1016/j.beproc.2015.07.012.

    Abstract

    The ability to map locations in the surrounding environment is crucial for any navigating animal. Decades of research on mammalian spatial representations suggest that environmental boundaries play a major role in both navigation behavior and hippocampal place coding. Although the capacity for spatial mapping is shared among vertebrates, including birds and fish, it is not yet clear whether such similarities in competence reflect common underlying mechanisms. The present study tests cue specificity in spatial mapping in zebrafish, by probing their use of various visual cues to encode the location of a nearby conspecific. The results suggest that untrained zebrafish, like other vertebrates tested so far, rely primarily on environmental boundaries to compute spatial relationships and, at the same time, use other visible features such as surface markings and freestanding objects as local cues to goal locations. We propose that the pattern of specificity in spontaneous spatial mapping behavior across vertebrates reveals cross-species commonalities in its underlying neural representations.
  • Lehecka, T. (2015). Collocation and colligation. In J.-O. Östman, & J. Verschueren (Eds.), Handbook of Pragmatics Online. Amsterdam: Benjamins. doi:10.1075/hop.19.col2.
  • Lev-Ari, S. (2015). Adjusting the manner of language processing to the social context: Attention allocation during interactions with non-native speakers. In R. K. Mishra, N. Srinivasan, & F. Huettig (Eds.), Attention and Vision in Language Processing (pp. 185-195). New York: Springer. doi:10.1007/978-81-322-2443-3_11.
  • Lev-Ari, S. (2015). Comprehending non-native speakers: Theory and evidence for adjustment in manner of processing. Frontiers in Psychology, 5: 1546. doi:10.3389/fpsyg.2014.01546.

    Abstract

    Non-native speakers have lower linguistic competence than native speakers, which renders their language less reliable in conveying their intentions. We suggest that expectations of lower competence lead listeners to adapt their manner of processing when they listen to non-native speakers. We propose that listeners use cognitive resources to adjust by increasing their reliance on top-down processes and extracting less information from the language of the non-native speaker. An eye-tracking study supports our proposal by showing that when following instructions by a non-native speaker, listeners make more contextually-induced interpretations. Those with relatively high working memory also increase their reliance on context to anticipate the speaker’s upcoming reference, and are less likely to notice lexical errors in the non-native speech, indicating that they take less information from the speaker’s language. These results contribute to our understanding of the flexibility in language processing and have implications for interactions between native and non-native speakers.

    Additional information

    Data Sheet 1.docx
  • Levelt, W. J. M. (2000). Uit talloos veel miljoenen. Natuur & Techniek, 68(11), 90.
  • Levelt, W. J. M. (2000). Dyslexie. Natuur & Techniek, 68(4), 64.
  • Levelt, W. J. M., Praamstra, P., Meyer, A. S., Helenius, P., & Salmelin, R. (1998). An MEG study of picture naming. Journal of Cognitive Neuroscience, 10(5), 553-567. doi:10.1162/089892998562960.

    Abstract

    The purpose of this study was to relate a psycholinguistic processing model of picture naming to the dynamics of cortical activation during picture naming. The activation was recorded from eight Dutch subjects with a whole-head neuromagnetometer. The processing model, based on extensive naming latency studies, is a stage model. In preparing a picture's name, the speaker performs a chain of specific operations. They are, in this order, computing the visual percept, activating an appropriate lexical concept, selecting the target word from the mental lexicon, phonological encoding, phonetic encoding, and initiation of articulation. The time windows for each of these operations are reasonably well known and could be related to the peak activity of dipole sources in the individual magnetic response patterns. The analyses showed a clear progression over these time windows from early occipital activation, via parietal and temporal to frontal activation. The major specific findings were that (1) a region in the left posterior temporal lobe, agreeing with the location of Wernicke's area, showed prominent activation starting about 200 msec after picture onset and peaking at about 350 msec, (i.e., within the stage of phonological encoding), and (2) a consistent activation was found in the right parietal cortex, peaking at about 230 msec after picture onset, thus preceding and partly overlapping with the left temporal response. An interpretation in terms of the management of visual attention is proposed.
  • Levelt, W. J. M. (2000). Met twee woorden spreken [Simon Dik Lezing 2000]. Amsterdam: Vossiuspers AUP.
  • Levelt, W. J. M. (1962). Motion breaking and the perception of causality. In A. Michotte (Ed.), Causalité, permanence et réalité phénoménales: Etudes de psychologie expérimentale (pp. 244-258). Louvain: Publications Universitaires.
  • Levelt, W. J. M., & Plomp, R. (1962). Musical consonance and critical bandwidth. In Proceedings of the 4th International Congress on Acoustics (pp. 55-55).
  • Levelt, W. J. M. (2015). Levensbericht George Armitage Miller 1920 - 2012. In KNAW levensberichten en herdenkingen 2014 (pp. 38-42). Amsterdam: KNAW.
  • Levelt, W. J. M. (2000). Links en rechts: Waarom hebben we zo vaak problemen met die woorden? Natuur & Techniek, 68(7/8), 90.
  • Levelt, W. J. M. (2000). Introduction Section VII: Language. In M. S. Gazzaniga (Ed.), The new cognitive neurosciences; 2nd ed. (pp. 843-844). Cambridge: MIT Press.
  • Levelt, W. J. M., & Schiller, N. O. (1998). Is the syllable frame stored? [Commentary on the BBS target article 'The frame/content theory of evolution of speech production' by Peter F. MacNeilage]. Behavioral and Brain Sciences, 21, 520.

    Abstract

    This commentary discusses whether abstract metrical frames are stored. For stress-assigning languages (e.g., Dutch and English), which have a dominant stress pattern, metrical frames are stored only for words that deviate from the default stress pattern. The majority of the words in these languages are produced without retrieving any independent syllabic or metrical frame.
  • Levelt, W. J. M. (1967). Note on the distribution of dominance times in binocular rivalry. British Journal of Psychology, 58, 143-145.
  • Levelt, W. J. M. (1967). Over het waarnemen van zinnen [Inaugural lecture]. Groningen: Wolters.
  • Levelt, W. J. M. (2000). Psychology of language. In K. Pawlik, & M. R. Rosenzweig (Eds.), International handbook of psychology (pp. 151-167). London: SAGE publications.
  • Levelt, C. C., Schiller, N. O., & Levelt, W. J. M. (2000). The acquisition of syllable types. Language Acquisition, 8(3), 237-263. doi:10.1207/S15327817LA0803_2.

    Abstract

    In this article, we present an account of developmental data regarding the acquisition of syllable types. The data come from a longitudinal corpus of phonetically transcribed speech of 12 children acquiring Dutch as their first language. A developmental order of acquisition of syllable types was deduced by aligning the syllabified data on a Guttman scale. This order could be analyzed as following from an initial ranking and subsequent rerankings in the grammar of the structural constraints ONSET, NO-CODA, *COMPLEX-O, and *COMPLEX-C; some local conjunctions of these constraints; and a faithfulness constraint FAITH. The syllable type frequencies in the speech surrounding the language learner are also considered. An interesting correlation is found between the frequencies and the order of development of the different syllable types.
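
    Illustrative code sketch

    To make the Guttman-scale step described in the abstract above concrete, the sketch below orders syllable types by how many (hypothetical) children have acquired them and computes a coefficient of reproducibility. The acquisition matrix, the syllable-type labels, and the error criterion are illustrative assumptions, not the study's corpus or analysis code.

```python
# Sketch (toy data): deducing an order of acquisition by aligning acquisition
# data on a Guttman scale. Rows are children, columns are syllable types,
# 1 = the child produces that syllable type.
import numpy as np

acquired = np.array([
    # CV  CVC   V   VC  CCVC
    [1,   1,   1,   1,   1],
    [1,   1,   1,   1,   0],
    [1,   1,   1,   0,   0],
    [1,   1,   0,   0,   0],
    [1,   0,   0,   0,   0],
])
types = np.array(["CV", "CVC", "V", "VC", "CCVC"])

order = np.argsort(-acquired.sum(axis=0))        # most widely acquired type first
scaled = acquired[:, order]
print("inferred order of acquisition:", list(types[order]))

# Coefficient of reproducibility: 1 - errors / responses, where an "error" is a
# deviation from the ideal step pattern implied by each child's total score.
ideal = np.array([[1] * s + [0] * (scaled.shape[1] - s) for s in scaled.sum(axis=1)])
errors = np.sum(scaled != ideal)
print("coefficient of reproducibility:", 1 - errors / scaled.size)
```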
  • Levelt, W. J. M. (2000). The brain does not serve linguistic theory so easily [Commentary on target article by Grodzinsky]. Behavioral and Brain Sciences, 23(1), 40-41.
  • Levelt, W. J. M. (2015). Sleeping Beauties. In I. Toivonen, P. Csúrii, & E. Van der Zee (Eds.), Structures in the Mind: Essays on Language, Music, and Cognition in Honor of Ray Jackendoff (pp. 235-255). Cambridge, MA: MIT Press.
  • Levelt, W. J. M. (2000). Speech production. In A. E. Kazdin (Ed.), Encyclopedia of psychology (pp. 432-433). Oxford University Press.
  • Levelt, W. J. M., & Indefrey, P. (2000). The speaking mind/brain: Where do spoken words come from? In A. Marantz, Y. Miyashita, & W. O'Neil (Eds.), Image, language, brain: Papers from the First Mind Articulation Project Symposium (pp. 77-94). Cambridge, Mass.: MIT Press.
  • Levelt, W. J. M. (1998). The genetic perspective in psycholinguistics, or: Where do spoken words come from? Journal of Psycholinguistic Research, 27(2), 167-180. doi:10.1023/A:1023245931630.

    Abstract

    The core issue in the 19th-century sources of psycholinguistics was the question, "Where does language come from?" This genetic perspective unified the study of the ontogenesis, the phylogenesis, the microgenesis, and to some extent the neurogenesis of language. This paper makes the point that this original perspective is still a valid and attractive one. It is exemplified by a discussion of the genesis of spoken words.
  • Levelt, W. J. M., & Meyer, A. S. (2000). Word for word: Multiple lexical access in speech production. European Journal of Cognitive Psychology, 12(4), 433-452. doi:10.1080/095414400750050178.

    Abstract

    It is quite normal for us to produce one or two million word tokens every year. Speaking is a dear occupation and producing words is at the core of it. Still, producing even a single word is a highly complex affair. Recently, Levelt, Roelofs, and Meyer (1999) reviewed their theory of lexical access in speech production, which dissects the word-producing mechanism as a staged application of various dedicated operations. The present paper begins by presenting a bird's-eye view of this mechanism. We then square the complexity by asking how speakers control multiple access in generating simple utterances such as a table and a chair. In particular, we address two issues. The first one concerns dependency: Do temporally contiguous access procedures interact in any way, or do they run in modular fashion? The second issue concerns temporal alignment: How much temporal overlap of processing does the system tolerate in accessing multiple content words, such as table and chair? Results from picture-word interference and eye tracking experiments provide evidence for restricted cases of dependency as well as for constraints on the temporal alignment of access procedures.
  • Levinson, S. C. (1998). Deixis. In J. L. Mey (Ed.), Concise encyclopedia of pragmatics (pp. 200-204). Amsterdam: Elsevier.
  • Levinson, S. C. (1998). Minimization and conversational inference. In A. Kasher (Ed.), Pragmatics: Vol. 4 Presupposition, implicature and indirect speech acts (pp. 545-612). London: Routledge.
  • Levinson, S. C. (2000). Language as nature and language as art. In J. Mittelstrass, & W. Singer (Eds.), Proceedings of the Symposium on ‘Changing concepts of nature and the turn of the Millennium’ (pp. 257-287). Vatican City: Pontificae Academiae Scientiarium Scripta Varia.
  • Levinson, S. C. (2000). H.P. Grice on location on Rossel Island. In S. S. Chang, L. Liaw, & J. Ruppenhofer (Eds.), Proceedings of the 25th Annual Meeting of the Berkeley Linguistics Society (pp. 210-224). Berkeley: Berkeley Linguistics Society.
  • Levinson, S. C. (2015). John Joseph Gumperz (1922–2013) [Obituary]. American Anthropologist, 117(1), 212-224. doi:10.1111/aman.12185.
  • Levinson, S. C. (2015). Other-initiated repair in Yélî Dnye: Seeing eye-to-eye in the language of Rossel Island. Open Linguistics, 1(1), 386-410. doi:10.1515/opli-2015-0009.

    Abstract

    Other-initiated repair (OIR) is the fundamental back-up system that ensures the effectiveness of human communication in its primordial niche, conversation. This article describes the interactional and linguistic patterns involved in other-initiated repair in Yélî Dnye, the Papuan language of Rossel Island, Papua New Guinea. The structure of the article is based on the conceptual set of distinctions described in Chapters 1 and 2 of the special issue, and describes the major properties of the Rossel Island system, and the ways in which OIR in this language both conforms to familiar European patterns and deviates from those patterns. Rossel Island specialities include lack of a Wh-word open class repair initiator, and a heavy reliance on visual signals that makes it possible both to initiate repair and confirm it non-verbally. But the overall system conforms to universal expectations.
  • Levinson, S. C. (2000). Presumptive meanings: The theory of generalized conversational implicature. Cambridge: MIT press.
  • Levinson, S. C. (1998). Studying spatial conceptualization across cultures: Anthropology and cognitive science. Ethos, 26(1), 7-24. doi:10.1525/eth.1998.26.1.7.

    Abstract

    Philosophers, psychologists, and linguists have argued that spatial conception is pivotal to cognition in general, providing a general, egocentric, and universal framework for cognition as well as metaphors for conceptualizing many other domains. But in an aboriginal community in Northern Queensland, a system of cardinal directions informs not only language, but also memory for arbitrary spatial arrays and directions. This work suggests that fundamental cognitive parameters, like the system of coding spatial locations, can vary cross-culturally, in line with the language spoken by a community. This opens up the prospect of a fruitful dialogue between anthropology and the cognitive sciences on the complex interaction between cultural and universal factors in the constitution of mind.
  • Levinson, S. C., & Torreira, F. (2015). Timing in turn-taking and its implications for processing models of language. Frontiers in Psychology, 6: 731. doi:10.3389/fpsyg.2015.00731.

    Abstract

    The core niche for language use is in verbal interaction, involving the rapid exchange of turns at talking. This paper reviews the extensive literature about this system, adding new statistical analyses of behavioural data where they have been missing, demonstrating that turn-taking has the systematic properties originally noted by Sacks, Schegloff and Jefferson (1974; hereafter SSJ). This system poses some significant puzzles for current theories of language processing: the gaps between turns are short (of the order of 200 ms), but the latencies involved in language production are much longer (over 600 ms). This seems to imply that participants in conversation must predict (or ‘project’ as SSJ have it) the end of the current speaker’s turn in order to prepare their response in advance. This in turn implies some overlap between production and comprehension despite their use of common processing resources. Collecting together what is known behaviourally and experimentally about the system, the space for systematic explanations of language processing for conversation can be significantly narrowed, and we sketch a first model of the mental processes involved for the participant preparing to speak next.
  • Levinson, S. C. (2000). Yélî Dnye and the theory of basic color terms. Journal of Linguistic Anthropology, 10(1), 3-55. doi:10.1525/jlin.2000.10.1.3.

    Abstract

    The theory of basic color terms was a crucial factor in the demise of linguistic relativity. The theory is now once again under scrutiny and fundamental revision. This article details a case study that undermines one of the central claims of the classical theory, namely that languages universally treat color as a unitary domain, to be exhaustively named. Taken together with other cases, the study suggests that a number of languages have only an incipient color terminology, raising doubts about the linguistic universality of such terminology.
  • Lewis, A. G., & Bastiaansen, M. C. M. (2015). A predictive coding framework for rapid neural dynamics during sentence-level language comprehension. Cortex, 68, 155-168. doi:10.1016/j.cortex.2015.02.014.

    Abstract

    There is a growing literature investigating the relationship between oscillatory neural dynamics measured using EEG and/or MEG, and sentence-level language comprehension. Recent proposals have suggested a strong link between predictive coding accounts of the hierarchical flow of information in the brain, and oscillatory neural dynamics in the beta and gamma frequency ranges. We propose that findings relating beta and gamma oscillations to sentence-level language comprehension might be unified under such a predictive coding account. Our suggestion is that oscillatory activity in the beta frequency range may reflect both the active maintenance of the current network configuration responsible for representing the sentence-level meaning under construction, and the top-down propagation of predictions to hierarchically lower processing levels based on that representation. In addition, we suggest that oscillatory activity in the low and middle gamma range reflects the matching of top-down predictions with bottom-up linguistic input, while evoked high gamma might reflect the propagation of bottom-up prediction errors to higher levels of the processing hierarchy. We also discuss some of the implications of this predictive coding framework, and we outline ideas for how these might be tested experimentally.
  • Lewis, A. G., Wang, L., & Bastiaansen, M. C. M. (2015). Fast oscillatory dynamics during language comprehension: Unification versus maintenance and prediction? Brain and Language, 148, 51-63. doi:10.1016/j.bandl.2015.01.003.

    Abstract

    The role of neuronal oscillations during language comprehension is not yet well understood. In this paper we review and reinterpret the functional roles of beta- and gamma-band oscillatory activity during language comprehension at the sentence and discourse level. We discuss the evidence in favor of a role for beta and gamma in unification (the unification hypothesis), and in light of mounting evidence that cannot be accounted for under this hypothesis, we explore an alternative proposal linking beta and gamma oscillations to maintenance and prediction (respectively) during language comprehension. Our maintenance/prediction hypothesis is able to account for most of the findings that are currently available relating beta and gamma oscillations to language comprehension, and is in good agreement with other proposals about the roles of beta and gamma in domain-general cognitive processing. In conclusion we discuss proposals for further testing and comparing the prediction and unification hypotheses.
  • Lima, C. F., Lavan, N., Evans, S., Agnew, Z., Halpern, A. R., Shanmugalingam, P., Meekings, S., Boebinger, D., Ostarek, M., McGettigan, C., Warren, J. E., & Scott, S. K. (2015). Feel the Noise: Relating individual differences in auditory imagery to the structure and function of sensorimotor systems. Cerebral Cortex, 25, 4638-4650. doi:10.1093/cercor/bhv134.

    Abstract

    Humans can generate mental auditory images of voices or songs, sometimes perceiving them almost as vividly as perceptual experiences. The functional networks supporting auditory imagery have been described, but less is known about the systems associated with interindividual differences in auditory imagery. Combining voxel-based morphometry and fMRI, we examined the structural basis of interindividual differences in how auditory images are subjectively perceived, and explored associations between auditory imagery, sensory-based processing, and visual imagery. Vividness of auditory imagery correlated with gray matter volume in the supplementary motor area (SMA), parietal cortex, medial superior frontal gyrus, and middle frontal gyrus. An analysis of functional responses to different types of human vocalizations revealed that the SMA and parietal sites that predict imagery are also modulated by sound type. Using representational similarity analysis, we found that higher representational specificity of heard sounds in SMA predicts vividness of imagery, indicating a mechanistic link between sensory- and imagery-based processing in sensorimotor cortex. Vividness of imagery in the visual domain also correlated with SMA structure, and with auditory imagery scores. Altogether, these findings provide evidence for a signature of imagery in brain structure, and highlight a common role of perceptual–motor interactions for processing heard and internally generated auditory information.
  • Liszkowski, U. (2000). A belief about theory of mind: The relation between children's inhibitory control and their common sense psychological knowledge. Master Thesis, University of Essex.
  • Liszkowski, U., & Ramenzoni, V. C. (2015). Pointing to nothing? Empty places prime infants' attention to absent objects. Infancy, 20, 433-444. doi:10.1111/infa.12080.

    Abstract

    People routinely point to empty space when referring to absent entities. These points to "nothing" are meaningful because they direct attention to places that stand in for specific entities. Typically, the meaning of places in terms of absent referents is established through preceding discourse and accompanying language. However, it is unknown whether nonlinguistic actions can establish locations as meaningful places, and whether infants have the capacity to represent a place as standing in for an object. In a novel eye-tracking paradigm, 18-month-olds watched objects being placed in specific locations. Then, the objects disappeared and a point directed infants' attention to an emptied place. The point to the empty place primed infants in a subsequent scene (in which the objects appeared at novel locations) to look more to the object belonging to the indicated place than to a distracter referent. The place-object expectations were strong enough to interfere when reversing the place-object associations. Findings show that infants comprehend nonlinguistic reference to absent entities, which reveals an ontogenetically early, nonverbal understanding of places as representations of absent objects.
  • Little, H., Eryılmaz, K., & de Boer, B. (2015). A new artificial sign-space proxy for investigating the emergence of structure and categories in speech. In The Scottish Consortium for ICPhS 2015 (Ed.), Proceedings of the 18th International Congress of Phonetic Sciences (ICPhS 2015).
  • Little, H., Eryılmaz, K., & de Boer, B. (2015). Linguistic modality affects the creation of structure and iconicity in signals. In D. C. Noelle, R. Dale, A. S. Warlaumont, J. Yoshimi, T. Matlock, C. Jennings, & P. Maglio (Eds.), The 37th annual meeting of the Cognitive Science Society (CogSci 2015) (pp. 1392-1398). Austin, TX: Cognitive Science Society.

    Abstract

    Different linguistic modalities (speech or sign) offer different levels at which signals can iconically represent the world. One hypothesis argues that this iconicity has an effect on how linguistic structure emerges. However, exactly how and why these effects might come about is in need of empirical investigation. In this contribution, we present a signal creation experiment in which both the signalling space and the meaning space are manipulated so that different levels and types of iconicity are available between the signals and meanings. Signals are produced using an infrared sensor that detects the hand position of participants to generate auditory feedback. We find evidence that iconicity may be maladaptive for the discrimination of created signals. Further, we implemented Hidden Markov Models to characterise the structure within signals, which was also used to inform a metric for iconicity.
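
    Illustrative code sketch

    The abstract above mentions implementing Hidden Markov Models to characterise the structure within the created signals. The sketch below shows, under assumptions, what such an analysis could look like using the hmmlearn library: the number of hidden states, the one-dimensional toy trajectories standing in for the sensor output, and the summary statistics are illustrative choices, not the authors' implementation.

```python
# Sketch (assumed): fit a Gaussian HMM to continuous signal trajectories and use
# the decoded state sequence as a summary of each signal's internal structure.
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(2)

# Two toy "signals": 1-D trajectories over time (e.g., normalised hand height).
signal_a = np.concatenate([rng.normal(0.2, 0.05, 40), rng.normal(0.8, 0.05, 40)])
signal_b = rng.normal(0.5, 0.05, 80)
X = np.concatenate([signal_a, signal_b]).reshape(-1, 1)  # hmmlearn expects 2-D input
lengths = [len(signal_a), len(signal_b)]                 # where each signal ends

model = hmm.GaussianHMM(n_components=3, covariance_type="diag",
                        n_iter=100, random_state=0)
model.fit(X, lengths)

# The decoded state sequence characterises within-signal structure, e.g. how many
# distinct segments a signal has and how often it switches between them.
states_a = model.predict(signal_a.reshape(-1, 1))
print("states used:", np.unique(states_a),
      "transitions:", np.sum(np.diff(states_a) != 0))
```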
  • Little, H. (Ed.). (2015). Proceedings of the 18th International Congress of Phonetic Sciences (ICPhS 2015) Satellite Event: The Evolution of Phonetic Capabilities: Causes, constraints, consequences. Glasgow: ICPhS.
  • Lockwood, G., & Dingemanse, M. (2015). Iconicity in the lab: A review of behavioural, developmental, and neuroimaging research into sound-symbolism. Frontiers in Psychology, 6: 1246. doi:10.3389/fpsyg.2015.01246.

    Abstract

    This review covers experimental approaches to sound-symbolism—from infants to adults, and from Sapir’s foundational studies to twenty-first century product naming. It synthesizes recent behavioral, developmental, and neuroimaging work into a systematic overview of the cross-modal correspondences that underpin iconic links between form and meaning. It also identifies open questions and opportunities, showing how the future course of experimental iconicity research can benefit from an integrated interdisciplinary perspective. Combining insights from psychology and neuroscience with evidence from natural languages provides us with opportunities for the experimental investigation of the role of sound-symbolism in language learning, language processing, and communication. The review finishes by describing how hypothesis-testing and model-building will help contribute to a cumulative science of sound-symbolism in human language.
  • Lockwood, G., & Tuomainen, J. (2015). Ideophones in Japanese modulate the P2 and late positive complex responses. Frontiers in Psychology, 6: 933. doi:10.3389/fpsyg.2015.00933.

    Abstract

    Sound-symbolism, or the direct link between sound and meaning, is typologically and behaviorally attested across languages. However, neuroimaging research has mostly focused on artificial non-words or individual segments, which do not represent sound-symbolism in natural language. We used EEG to compare Japanese ideophones, which are phonologically distinctive sound-symbolic lexical words, and arbitrary adverbs during a sentence reading task. Ideophones elicit a larger visual P2 response and a sustained late positive complex in comparison to arbitrary adverbs. These results and previous literature suggest that the larger P2 may indicate the integration of sound and sensory information by association in response to the distinctive phonology of ideophones. The late positive complex may reflect the facilitated lexical retrieval of ideophones in comparison to arbitrary words. This account provides new evidence that ideophones exhibit similar cross-modal correspondences to those which have been proposed for non-words and individual sounds, and that these effects are detectable in natural language.
  • Love, B. C., Kopeć, Ł., & Guest, O. (2015). Optimism bias in fans and sports reporters. PLoS One, 10(9): e0137685. doi:10.1371/journal.pone.0137685.

    Abstract

    People are optimistic about their prospects relative to others. However, existing studies can be difficult to interpret because outcomes are not zero-sum. For example, one person avoiding cancer does not necessitate that another person develops cancer. Ideally, optimism bias would be evaluated within a closed formal system to establish with certainty the extent of the bias and the associated environmental factors, such that optimism bias is demonstrated when a population is internally inconsistent. Accordingly, we asked NFL fans to predict how many games teams they liked and disliked would win in the 2015 season. Fans, like ESPN reporters assigned to cover a team, were overly optimistic about their team’s prospects. The opposite pattern was found for teams that fans disliked. Optimism may flourish because year-to-year team results are marked by auto-correlation and regression to the group mean (i.e., good teams stay good, but bad teams improve).

    Additional information

    raw data
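
    Worked example

    The "closed formal system" logic in the abstract above can be made concrete with a little arithmetic. In a 16-game NFL regular season with 32 teams, every game produces exactly one win, so the teams' win totals must sum to 32 × 16 / 2 = 256; if per-team predictions aggregated across fans sum to more than that, the fan population is internally inconsistent. The numbers below are hypothetical, not the study's data.

```python
# Toy consistency check (hypothetical numbers, not the study's data).
total_possible_wins = 32 * 16 // 2            # every game yields exactly one win: 256

# Hypothetical mean prediction by each team's own fans (one value per team).
fan_predictions = [9.5] * 32                  # every fan base expects about 9.5 wins

predicted_total = sum(fan_predictions)
print(predicted_total, "predicted vs.", total_possible_wins, "possible")
print("internally inconsistent (optimism bias):", predicted_total > total_possible_wins)
```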
  • Lozano, R., Vino, A., Lozano, C., Fisher, S. E., & Deriziotis, P. (2015). A de novo FOXP1 variant in a patient with autism, intellectual disability and severe speech and language impairment. European Journal of Human Genetics, 23, 1702-1707. doi:10.1038/ejhg.2015.66.

    Abstract

    FOXP1 (forkhead box protein P1) is a transcription factor involved in the development of several tissues, including the brain. An emerging phenotype of patients with protein-disrupting FOXP1 variants includes global developmental delay, intellectual disability and mild to severe speech/language deficits. We report on a female child with a history of severe hypotonia, autism spectrum disorder and mild intellectual disability with severe speech/language impairment. Clinical exome sequencing identified a heterozygous de novo FOXP1 variant c.1267_1268delGT (p.V423Hfs*37). Functional analyses using cellular models show that the variant disrupts multiple aspects of FOXP1 activity, including subcellular localization and transcriptional repression properties. Our findings highlight the importance of performing functional characterization to help uncover the biological significance of variants identified by genomics approaches, thereby providing insight into pathways underlying complex neurodevelopmental disorders. Moreover, our data support the hypothesis that de novo variants represent significant causal factors in severe sporadic disorders and extend the phenotype seen in individuals with FOXP1 haploinsufficiency.
  • Magyari, L. (2015). Timing turns in conversation: A temporal preparation account. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Majid, A., & Van Staden, M. (2015). Can nomenclature for the body be explained by embodiment theories? Topics in Cognitive Science, 7(4), 570-594. doi:10.1111/tops.12159.

    Abstract

    According to widespread opinion, the meaning of body part terms is determined by salient discontinuities in the visual image, such that hands, feet, arms, and legs are natural parts. If so, one would expect these parts to have distinct names which correspond in meaning across languages. To test this proposal, we compared three unrelated languages—Dutch, Japanese, and Indonesian—and found that both the naming systems and the boundaries of even basic body part terms display variation across languages. Bottom-up cues alone cannot explain natural language semantic systems; there simply is not a one-to-one mapping of the body semantic system to the body structural description. Although body parts are flexibly construed across languages, body part semantics are, nevertheless, constrained by non-linguistic representations in the body structural description, suggesting these are necessary, although not sufficient, in accounting for aspects of the body lexicon.
  • Majid, A. (2015). Comparing lexicons cross-linguistically. In J. R. Taylor (Ed.), The Oxford Handbook of the Word (pp. 364-379). Oxford: Oxford University Press. doi:10.1093/oxfordhb/9780199641604.013.020.

    Abstract

    The lexicon is central to the concerns of disparate disciplines and has correspondingly elicited conflicting proposals about some of its foundational properties. Some suppose that word meanings and their associated concepts are largely universal, while others note that local cultural interests infiltrate every category in the lexicon. This chapter reviews research in two semantic domains—perception and the body—in order to illustrate crosslinguistic similarities and differences in semantic fields. Data is considered from a wide array of languages, especially those from small-scale indigenous communities which are often overlooked. In every lexical field we find considerable variation across cultures, raising the question of where this variation comes from. Is it the result of different ecological or environmental niches, cultural practices, or accidents of historical pasts? Current evidence suggests that diverse pressures differentially shape lexical fields.
  • Majid, A. (2015). Cultural factors shape olfactory language. Trends in Cognitive Sciences, 19(11), 629-630. doi:10.1016/j.tics.2015.06.009.
  • Majid, A., Jordan, F., & Dunn, M. (Eds.). (2015). Semantic systems in closely related languages [Special Issue]. Language Sciences, 49.
  • Majid, A., Jordan, F., & Dunn, M. (2015). Semantic systems in closely related languages. Language Sciences, 49, 1-18. doi:10.1016/j.langsci.2014.11.002.

    Abstract

    In each semantic domain studied to date, there is considerable variation in how meanings are expressed across languages. But are some semantic domains more likely to show variation than others? Is the domain of space more or less variable in its expression than other semantic domains, such as containers, body parts, or colours? According to many linguists, the meanings expressed in grammaticised expressions, such as (spatial) adpositions, are more likely to be similar across languages than meanings expressed in open class lexical items. On the other hand, some psychologists predict there ought to be more variation across languages in the meanings of adpositions, than in the meanings of nouns. This is because relational categories, such as those expressed as adpositions, are said to be constructed by language; whereas object categories expressed as nouns are predicted to be “given by the world”. We tested these hypotheses by comparing the semantic systems of closely related languages. Previous cross-linguistic studies emphasise the importance of studying diverse languages, but we argue that a focus on closely related languages is advantageous because domains can be compared in a culturally- and historically-informed manner. Thus we collected data from 12 Germanic languages. Naming data were collected from at least 20 speakers of each language for containers, body-parts, colours, and spatial relations. We found the semantic domains of colour and body-parts were the most similar across languages. Containers showed some variation, but spatial relations expressed in adpositions showed the most variation. The results are inconsistent with the view expressed by most linguists. Instead, we find meanings expressed in grammaticised meanings are more variable than meanings in open class lexical items.
  • Malt, B. C., Gennari, S., Imai, M., Ameel, E., Saji, N., & Majid, A. (2015). Where are the concepts? What words can and can’t reveal. In E. Margolis, & S. Laurence (Eds.), The conceptual Mind: New directions in the study of concepts (pp. 291-326). Cambridge, MA: MIT Press.

    Abstract

    Concepts are so fundamental to human cognition that Fodor declared the heart of a cognitive science to be its theory of concepts. To study concepts, though, cognitive scientists need to be able to identify some. The prevailing assumption has been that they are revealed by words such as triangle, table, and robin. But languages vary dramatically in how they carve up the world with names. Either ordinary concepts must be heavily language dependent, or names cannot be a direct route to concepts. We asked speakers of English, Dutch, Spanish, and Japanese to name a set of 36 video clips of human locomotion and to judge the similarities among them. We investigated what name inventories, name extensions, scaling solutions on name similarity, and scaling solutions on nonlinguistic similarity from the groups, individually and together, suggest about the underlying concepts. Aggregated naming data and similarity solutions converged on results distinct from individual languages.
  • Manrique, E., & Enfield, N. J. (2015). Suspending the next turn as a form of repair initiation: Evidence from Argentine Sign Language. Frontiers in Psychology, 6: 1326. doi:10.3389/fpsyg.2015.01326.

    Abstract

    Practices of other-initiated repair deal with problems of hearing or understanding what another person has said in the fast-moving turn-by-turn flow of conversation. As such, other-initiated repair plays a fundamental role in the maintenance of intersubjectivity in social interaction. This study finds and analyses a special type of other-initiated repair that is used in turn-by-turn conversation in a sign language: Argentine Sign Language (Lengua de Señas Argentina or LSA). We describe a type of response termed a "freeze-look", which occurs when a person has just been asked a direct question: instead of answering the question in the next turn position, the person holds still while looking directly at the questioner. In these cases it is clear that the person is aware of having just been addressed and is not otherwise accounting for their delay in responding (e.g., by displaying a "thinking" face or hesitation, etc.). We find that this behavior functions as a way for an addressee to initiate repair by the person who asked the question. The "freeze-look" results in the questioner "re-doing" their action of asking a question, for example by repeating or rephrasing it. Thus, we argue that the "freeze-look" is a practice for other-initiation of repair. In addition, we argue that it is an "off-record" practice, thus contrasting with known "on-record" practices such as saying "Huh?" or equivalents. The findings aim to contribute to research on human understanding in everyday turn-by-turn conversation by looking at an understudied sign language, with possible implications for our understanding of visual bodily communication in spoken languages as well.

    Additional information

    Manrique_Enfield_2015_supp.pdf
  • Martin, J.-R., Kösem, A., & van Wassenhove, V. (2015). Hysteresis in Audiovisual Synchrony Perception. PLoS One, 10(3): e0119365. doi:10.1371/journal.pone.0119365.

    Abstract

    The effect of stimulation history on the perception of a current event can yield two opposite effects, namely: adaptation or hysteresis. The perception of the current event thus goes in the opposite or in the same direction as prior stimulation, respectively. In audiovisual (AV) synchrony perception, adaptation effects have primarily been reported. Here, we tested if perceptual hysteresis could also be observed over adaptation in AV timing perception by varying different experimental conditions. Participants were asked to judge the synchrony of the last (test) stimulus of an AV sequence with either constant or gradually changing AV intervals (constant and dynamic condition, respectively). The onset timing of the test stimulus could be cued or not (prospective vs. retrospective condition, respectively). We observed hysteretic effects for AV synchrony judgments in the retrospective condition that were independent of the constant or dynamic nature of the adapted stimuli; these effects disappeared in the prospective condition. The present findings suggest that knowing when to estimate a stimulus property has a crucial impact on perceptual simultaneity judgments. Our results extend beyond AV timing perception, and have strong implications regarding the comparative study of hysteresis and adaptation phenomena.
  • Martin, R. C., & Tan, Y. (2015). Sentence comprehension deficits: Independence and interaction of syntax, semantics, and working memory. In A. E. Hillis (Ed.), Handbook of adult language disorders (2nd ed., pp. 303-327). Boca Raton: CRC Press.
  • Matić, D. (2015). Information structure in linguistics. In J. D. Wright (Ed.), The International Encyclopedia of Social and Behavioral Sciences (2nd ed.) Vol. 12 (pp. 95-99). Amsterdam: Elsevier. doi:10.1016/B978-0-08-097086-8.53013-X.

    Abstract

    Information structure is a subfield of linguistic research dealing with the ways speakers encode instructions to the hearer on how to process the message relative to their temporary mental states. To this end, sentences are segmented into parts conveying known and yet-unknown information, usually labeled ‘topic’ and ‘focus.’ Many languages have developed specialized grammatical and lexical means of indicating this segmentation.
  • Matić, D., & Odé, C. (2015). On prosodic signalling of focus in Tundra Yukaghir. Acta Linguistica Petropolitana, 11(2), 627-644.
  • McDonough, L., Choi, S., Bowerman, M., & Mandler, J. M. (1998). The use of preferential looking as a measure of semantic development. In C. Rovee-Collier, L. P. Lipsitt, & H. Hayne (Eds.), Advances in Infancy Research. Volume 12. (pp. 336-354). Stamford, CT: Ablex Publishing.
  • McQueen, J. M., & Cutler, A. (1998). Morphology in word recognition. In A. M. Zwicky, & A. Spencer (Eds.), The handbook of morphology (pp. 406-427). Oxford: Blackwell.
  • McQueen, J. M., & Cutler, A. (1998). Spotting (different kinds of) words in (different kinds of) context. In R. Mannell, & J. Robert-Ribes (Eds.), Proceedings of the Fifth International Conference on Spoken Language Processing: Vol. 6 (pp. 2791-2794). Sydney: ICSLP.

    Abstract

    The results of a word-spotting experiment are presented in which Dutch listeners tried to spot different types of bisyllabic Dutch words embedded in different types of nonsense contexts. Embedded verbs were not reliably harder to spot than embedded nouns; this suggests that nouns and verbs are recognised via the same basic processes. Iambic words were no harder to spot than trochaic words, suggesting that trochaic words are not in principle easier to recognise than iambic words. Words were harder to spot in consonantal contexts (i.e., contexts which themselves could not be words) than in longer contexts which contained at least one vowel (i.e., contexts which, though not words, were possible words of Dutch). A control experiment showed that this difference was not due to acoustic differences between the words in each context. The results support the claim that spoken-word recognition is sensitive to the viability of sound sequences as possible words.
  • McQueen, J. M., Cutler, A., & Norris, D. (2000). Positive and negative influences of the lexicon on phonemic decision-making. In B. Yuan, T. Huang, & X. Tang (Eds.), Proceedings of the Sixth International Conference on Spoken Language Processing: Vol. 3 (pp. 778-781). Beijing: China Military Friendship Publish.

    Abstract

    Lexical knowledge influences how human listeners make decisions about speech sounds. Positive lexical effects (faster responses to target sounds in words than in nonwords) are robust across several laboratory tasks, while negative effects (slower responses to targets in more word-like nonwords than in less word-like nonwords) have been found in phonetic decision tasks but not phoneme monitoring tasks. The present experiments tested whether negative lexical effects are therefore a task-specific consequence of the forced choice required in phonetic decision. We compared phoneme monitoring and phonetic decision performance using the same Dutch materials in each task. In both experiments there were positive lexical effects, but no negative lexical effects. We observe that in all studies showing negative lexical effects, the materials were made by cross-splicing, which meant that they contained perceptual evidence supporting the lexically-consistent phonemes. Lexical knowledge seems to influence phonemic decision-making only when there is evidence for the lexically-consistent phoneme in the speech signal.
  • McQueen, J. M., Cutler, A., & Norris, D. (2000). Why Merge really is autonomous and parsimonious. In A. Cutler, J. M. McQueen, & R. Zondervan (Eds.), Proceedings of SWAP (Workshop on Spoken Word Access Processes) (pp. 47-50). Nijmegen: Max-Planck-Institute for Psycholinguistics.

    Abstract

    We briefly describe the Merge model of phonemic decision-making, and, in the light of general arguments about the possible role of feedback in spoken-word recognition, defend Merge's feedforward structure. Merge not only accounts adequately for the data, without invoking feedback connections, but does so in a parsimonious manner.
  • Meekings, S., Boebinger, D., Evans, S., Lima, C. F., Chen, S., Ostarek, M., & Scott, S. K. (2015). Do we know what we’re saying? The roles of attention and sensory information during speech production. Psychological Science, 26(12), 1975-1977. doi:10.1177/0956797614563766.
  • Meira, S., & Drude, S. (2015). A summary reconstruction of Proto-Maweti-Guarani segmental phonology. Boletim do Museu Paraense Emílio Goeldi: Ciências Humanas, 10, 275-296. doi:10.1590/1981-81222015000200005.

    Abstract

    This paper presents a succinct reconstruction of the segmental phonology of Proto-Maweti-Guarani, the hypothetical protolanguage from which modern Mawe, Aweti and the Tupi-Guarani branches of the Tupi linguistic family have evolved. Based on about 300 cognate sets from the authors' field data (for Mawe and Aweti) and from Mello's reconstruction (2000) for Proto-Tupi-Guarani (with additional information from other works, and with a few changes concerning certain doubtful features, such as the status of stem-final lenis consonants ∗r and ∗β, and the distinction of ∗c and ∗č), the consonants and vowels of Proto-Maweti-Guarani were reconstructed with the help of the traditional historical-comparative method. The development of the reconstructed segments is then traced from the protolanguage to each of the modern branches. A comparison with other claims made about Proto-Maweti-Guarani is given in the conclusion.
  • Meyer, A. S., & Levelt, W. J. M. (2000). Merging speech perception and production [Comment on Norris, McQueen and Cutler]. Behavioral and Brain Sciences, 23(3), 339-340. doi:10.1017/S0140525X00373241.

    Abstract

    A comparison of Merge, a model of comprehension, and WEAVER, a model of production, raises five issues: (1) merging models of comprehension and production necessarily creates feedback; (2) neither model is a comprehensive account of word processing; (3) the models are incomplete in different ways; (4) the models differ in their handling of competition; (5) as opposed to WEAVER, Merge is a model of metalinguistic behavior.
  • Meyer, A. S., & Van der Meulen, F. (2000). Phonological priming effects on speech onset latencies and viewing times in object naming. Psychonomic Bulletin & Review, 7, 314-319.
  • Meyer, A. S., Sleiderink, A. M., & Levelt, W. J. M. (1998). Viewing and naming objects: Eye movements during noun phrase production. Cognition, 66(2), B25-B33. doi:10.1016/S0010-0277(98)00009-2.

    Abstract

    Eye movements have been shown to reflect word recognition and language comprehension processes occurring during reading and auditory language comprehension. The present study examines whether the eye movements speakers make during object naming similarly reflect speech planning processes. In Experiment 1, speakers named object pairs saying, for instance, 'scooter and hat'. The objects were presented as ordinary line drawings or with partly deleted contours and had high or low frequency names. Contour type and frequency both significantly affected the mean naming latencies and the mean time spent looking at the objects. The frequency effects disappeared in Experiment 2, in which the participants categorized the objects instead of naming them. This suggests that the frequency effects of Experiment 1 arose during lexical retrieval. We conclude that eye movements during object naming indeed reflect linguistic planning processes and that the speakers' decision to move their eyes from one object to the next is contingent upon the retrieval of the phonological form of the object names.
  • Mielcarek, M., Toczek, M., Smeets, C. J. L. M., Franklin, S. A., Bondulich, M. K., Jolinon, N., Muller, T., Ahmed, M., Dick, J. R. T., Piotrowska, I., Greensmith, L., Smolenski, R. T., & Bates, G. P. (2015). HDAC4-Myogenin Axis As an Important Marker of HD-Related Skeletal Muscle Atrophy. PLoS Genetics, 11(3): e1005021. doi:10.1371/journal.pgen.1005021.

    Abstract

    Skeletal muscle remodelling and contractile dysfunction occur through both acute and chronic disease processes. These include the accumulation of insoluble aggregates of misfolded amyloid proteins that is a pathological feature of Huntington's disease (HD). While HD has been described primarily as a neurological disease, HD patients exhibit pronounced skeletal muscle atrophy. Given that huntingtin is a ubiquitously expressed protein, skeletal muscle fibres may be at risk of a cell-autonomous HD-related dysfunction. However, the mechanism leading to skeletal muscle abnormalities in the clinical and pre-clinical HD settings remains unknown. To unravel this mechanism, we employed the R6/2 transgenic and HdhQ150 knock-in mouse models of HD. We found that symptomatic animals developed a progressive impairment of the contractile characteristics of the hind limb muscles tibialis anterior (TA) and extensor digitorum longus (EDL), accompanied by a significant loss of motor units in the EDL. In symptomatic animals, these pronounced functional changes were accompanied by an aberrant deregulation of contractile protein transcripts and their upstream transcriptional regulators. In addition, HD mouse models develop a significant reduction in muscle force, possibly as a result of a deterioration in energy metabolism and decreased oxidation that is accompanied by the re-expression of the HDAC4-DACH2-myogenin axis. These results show that muscle dysfunction is a key pathological feature of HD.
  • Mishra, R., Srinivasan, N., & Huettig, F. (Eds.). (2015). Attention and vision in language processing. Berlin: Springer. doi:10.1007/978-81-322-2443-3.
  • Moers, C., Janse, E., & Meyer, A. S. (2015). Probabilistic reduction in reading aloud: A comparison of younger and older adults. In M. Wolters, J. Livingstone, B. Beattie, R. Smith, M. MacMahon, J. Stuart-Smith, & J. Scobbie (Eds.), Proceedings of the 18th International Congress of Phonetic Sciences (ICPhS 2015). London: International Phonetic Association.

    Abstract

    Frequent and predictable words are generally pronounced with less effort and are therefore acoustically more reduced than less frequent or unpredictable words. Local predictability can be operationalised by Transitional Probability (TP), which indicates how likely a word is to occur given its immediate context. We investigated whether and how probabilistic reduction effects on word durations change with adult age when reading aloud content words embedded in sentences. The results showed equally large frequency effects on verb and noun durations for both younger (Mage = 20 years) and older (Mage = 68 years) adults. Backward TP also affected word duration for younger and older adults alike. Forward TP, however, had no significant effect on word duration in either age group. Our results resemble earlier findings of more robust backward TP effects compared to forward TP effects. Furthermore, unlike the often-reported decline in predictive processing with aging, probabilistic reduction effects remain stable across adulthood.
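    For readers unfamiliar with the measure, forward and backward transitional probabilities are simple conditional probabilities estimated from bigram and unigram counts. The sketch below is illustrative only and is not taken from the paper; the token list and function names are invented for the example, and normalising by raw unigram counts is a simplification of how TP is estimated from a large corpus.

    ```python
    from collections import Counter

    def transitional_probabilities(words):
        """Estimate forward and backward transitional probabilities (TP)
        from a list of word tokens.

        Forward TP:  P(w2 | w1) = count(w1, w2) / count(w1)
        Backward TP: P(w1 | w2) = count(w1, w2) / count(w2)
        """
        unigrams = Counter(words)
        bigrams = Counter(zip(words, words[1:]))
        forward = {(w1, w2): c / unigrams[w1] for (w1, w2), c in bigrams.items()}
        backward = {(w1, w2): c / unigrams[w2] for (w1, w2), c in bigrams.items()}
        return forward, backward

    # Toy example (hypothetical data): "the" precedes "cake" in half of its occurrences.
    tokens = "you will cut the strawberry cake and eat the cake".split()
    fwd, bwd = transitional_probabilities(tokens)
    print(fwd[("the", "cake")])   # forward TP of "cake" given a preceding "the"
    print(bwd[("the", "cake")])   # backward TP of "the" given a following "cake"
    ```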
  • Moisik, S. R., & Dediu, D. (2015). Anatomical biasing and clicks: Preliminary biomechanical modelling. In H. Little (Ed.), Proceedings of the 18th International Congress of Phonetic Sciences (ICPhS 2015) Satellite Event: The Evolution of Phonetic Capabilities: Causes, constraints, consequences (pp. 8-13). Glasgow: ICPhS.

    Abstract

    It has been observed by several researchers that the Khoisan palate tends to lack a prominent alveolar ridge. A preliminary biomechanical model of click production was created to examine whether these sounds might be subject to an anatomical bias associated with alveolar ridge size. Results suggest the bias is plausible, taking the form of decreased articulatory effort and improved volume change characteristics; however, further modelling and experimental research is required to solidify the claim.
  • Monaghan, P., Mattock, K., Davies, R., & Smith, A. C. (2015). Gavagai is as gavagai does: Learning nouns and verbs from cross-situational statistics. Cognitive Science, 39, 1099-1112. doi:10.1111/cogs.12186.

    Abstract

    Learning to map words onto their referents is difficult, because there are multiple possibilities for forming these mappings. Cross-situational learning studies have shown that word-object mappings can be learned across multiple situations, as can verbs when presented in a syntactic context. However, these previous studies have presented either nouns or verbs in ambiguous contexts and thus bypass much of the complexity of multiple grammatical categories in speech. We show that noun word-learning in adults is robust when objects are moving, and that verbs can also be learned from similar scenes without additional syntactic information. Furthermore, we show that both nouns and verbs can be acquired simultaneously, thus resolving category-level as well as individual word level ambiguity. However, nouns were learned more accurately than verbs, and we discuss this in light of previous studies investigating the noun advantage in word learning.
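    As a rough illustration of the cross-situational statistics described above, the sketch below accumulates word-referent co-occurrence counts over individually ambiguous scenes. It is a generic associative learner written for this summary, not the model used in the paper; the scene data and function names are hypothetical.

    ```python
    from collections import defaultdict
    from itertools import product

    def cross_situational_counts(situations):
        """Accumulate word-referent co-occurrence counts across situations.

        Each situation pairs a set of heard words with a set of visible
        referents; the learner does not know which word maps to which referent.
        """
        counts = defaultdict(int)
        for words, referents in situations:
            for w, r in product(words, referents):
                counts[(w, r)] += 1
        return counts

    def best_referent(counts, word):
        """Return the referent most often co-present with a given word."""
        candidates = {r: c for (w, r), c in counts.items() if w == word}
        return max(candidates, key=candidates.get)

    # Toy scenes: 'gavagai' is ambiguous in any single scene,
    # but only co-occurs with the rabbit across scenes.
    scenes = [
        ({"gavagai", "blicket"}, {"rabbit", "ball"}),
        ({"gavagai", "dax"}, {"rabbit", "cup"}),
        ({"blicket", "dax"}, {"ball", "cup"}),
    ]
    counts = cross_situational_counts(scenes)
    print(best_referent(counts, "gavagai"))  # -> 'rabbit'
    ```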
  • Morano, L., Ernestus, M., & Ten Bosch, L. (2015). Schwa reduction in low-proficiency L2 speakers: Learning and generalization. In Scottish consortium for ICPhS, M. Wolters, J. Livingstone, B. Beattie, R. Smith, M. MacMahon, J. Stuart-Smith, & J. Scobbie (Eds.), Proceedings of the 18th International Congress of Phonetic Sciences (ICPhS 2015). Glasgow: University of Glasgow.

    Abstract

    This paper investigated the learnability and generalizability of French schwa alternation by Dutch low-proficiency second language learners. We trained 40 participants on 24 new schwa words by exposing them equally often to the reduced and full forms of these words. We then assessed participants' accuracy and reaction times to these newly learnt words as well as 24 previously encountered schwa words with an auditory lexical decision task. Our results show learning of the new words in both forms. This suggests that lack of exposure is probably the main cause of learners' difficulties with reduced forms. Nevertheless, the full forms were slightly better recognized than the reduced ones, possibly due to phonetic and phonological properties of the reduced forms. We also observed no generalization to previously encountered words, suggesting that our participants stored both of the learnt word forms and did not create a rule that applies to all schwa words.
  • Moreno, I., De Vega, M., León, I., Bastiaansen, M. C. M., Lewis, A. G., & Magyari, L. (2015). Brain dynamics in the comprehension of action-related language. A time-frequency analysis of mu rhythms. Neuroimage, 109, 50-62. doi:10.1016/j.neuroimage.2015.01.018.

    Abstract

    EEG mu rhythms (8-13 Hz) recorded at fronto-central electrodes are generally considered as markers of motor cortical activity in humans, because they are modulated when participants perform an action, when they observe another’s action or even when they imagine performing an action. In this study, we analyzed the time-frequency (TF) modulation of mu rhythms while participants read action language (“You will cut the strawberry cake”), abstract language (“You will doubt the patient’s argument”), and perceptive language (“You will notice the bright day”). The results indicated that mu suppression at fronto-central sites is associated with action language rather than with abstract or perceptive language. Also, the largest difference between conditions occurred quite late in the sentence, while reading the first noun (contrast Action vs. Abstract), or the second noun following the action verb (contrast Action vs. Perceptive). This suggests that motor activation is associated with the integration of words across the sentence beyond the lexical processing of the action verb. Source reconstruction localized mu suppression associated with action sentences in premotor cortex (BA 6). The present study suggests (1) that the understanding of action language activates motor networks in the human brain, and (2) that this activation occurs online based on semantic integration across multiple words in the sentence.
  • Mulder, K., Dijkstra, T., & Baayen, R. H. (2015). Cross-language activation of morphological relatives in cognates: The role of orthographic overlap and task-related processing. Frontiers in Human Neuroscience, 9: 16. doi:10.3389/fnhum.2015.00016.

    Abstract

    We considered the role of orthography and task-related processing mechanisms in the activation of morphologically related complex words during bilingual word processing. So far, it has only been shown that such morphologically related words (i.e., morphological family members) are activated through the semantic and morphological overlap they share with the target word. In this study, we investigated family size effects in Dutch-English identical cognates (e.g., tent in both languages), non-identical cognates (e.g., pil and pill, in Dutch and English, respectively), and non-cognates (e.g., chicken in English). Because of their cross-linguistic overlap in orthography, reading a cognate can result in activation of family members in both languages. Cognates are therefore well-suited for studying mechanisms underlying bilingual activation of morphologically complex words. We investigated family size effects in an English lexical decision task and a Dutch-English language decision task, both performed by Dutch-English bilinguals. English lexical decision showed a facilitatory effect of English and Dutch family size on the processing of English-Dutch cognates relative to English non-cognates. These family size effects were not dependent on cognate type. In contrast, for language decision, in which a bilingual context is created, Dutch and English family size effects were inhibitory. Here, the combined family size of both languages turned out to better predict reaction time than the separate family size in Dutch or English. Moreover, the combined family size interacted with cognate type: the response to identical cognates was slowed by morphological family members in both languages. We conclude that (1) family size effects are sensitive to the task performed on the lexical items, and (2) depend on both semantic and formal aspects of bilingual word processing. We discuss various mechanisms that can explain the observed family size effects in a spreading activation framework.
