Publications

  • Janse, E., & Ernestus, M. (2009). Recognition of reduced speech and use of phonetic context in listeners with age-related hearing impairment [Abstract]. Journal of the Acoustical Society of America, 125(4), 2535.
  • Janse, E. (2003). Word perception in natural-fast and artificially time-compressed speech. In M. Solé, D. Recasens, & J. Romero (Eds.), Proceedings of the 15th International Congress of Phonetic Sciences (pp. 3001-3004).
  • Janse, E., Nooteboom, S. G., & Quené, H. (2003). Word-level intelligibility of time-compressed speech: Prosodic and segmental factors. Speech Communication, 41, 287-301. doi:10.1016/S0167-6393(02)00130-9.

    Abstract

    In this study we investigate whether speakers, in line with the predictions of the Hyper- and Hypospeech theory, speed up most during the least informative parts and less during the more informative parts, when they are asked to speak faster. We expected listeners to benefit from these changes in timing, and our main goal was to find out whether making the temporal organisation of artificially time-compressed speech more like that of natural fast speech would improve intelligibility over linear time compression. Our production study showed that speakers reduce unstressed syllables more than stressed syllables, thereby making the prosodic pattern more pronounced. We extrapolated fast speech timing to even faster rates because we expected that the more salient prosodic pattern could be exploited in difficult listening situations. However, at very fast speech rates, applying fast speech timing worsens intelligibility. We argue that the non-uniform way of speeding up may not be due to an underlying communicative principle, but may result from speakers’ inability to speed up otherwise. As both prosodic and segmental information contribute to word recognition, we conclude that extrapolating fast speech timing to extremely fast rates distorts this balance between prosodic and segmental information.
  • Janssen, R., Moisik, S. R., & Dediu, D. (2018). Agent model reveals the influence of vocal tract anatomy on speech during ontogeny and glossogeny. In C. Cuskley, M. Flaherty, H. Little, L. McCrohon, A. Ravignani, & T. Verhoef (Eds.), Proceedings of the 12th International Conference on the Evolution of Language (EVOLANG XII) (pp. 171-174). Toruń, Poland: NCU Press. doi:10.12775/3991-1.042.
  • Janssen, R., Moisik, S. R., & Dediu, D. (2018). Modelling human hard palate shape with Bézier curves. PLoS One, 13(2): e0191557. doi:10.1371/journal.pone.0191557.

    Abstract

    People vary at most levels, from the molecular to the cognitive, and the shape of the hard palate (the bony roof of the mouth) is no exception. The patterns of variation in the hard palate are important for the forensic sciences and (palaeo)anthropology, and might also play a role in speech production, both in pathological cases and normal variation. Here we describe a method based on Bézier curves, whose main aim is to generate possible shapes of the hard palate in humans for use in computer simulations of speech production and language evolution. Moreover, our method can also capture existing patterns of variation using few and easy-to-interpret parameters, and fits actual data obtained from MRI traces very well with as little as two or three free parameters. When compared to the widely-used Principal Component Analysis (PCA), our method fits actual data slightly worse for the same number of degrees of freedom. However, it is much better at generating new shapes without requiring a calibration sample, its parameters have clearer interpretations, and their ranges are grounded in geometrical considerations.
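    The shape-generation idea can be sketched in a few lines. The following is a minimal illustration (assuming Python/NumPy; the control-point values are placeholders, not the parameterization or fitted values reported in the paper): a roughly palate-like 2D profile produced by a cubic Bézier curve whose two inner control points act as the free shape parameters.

      import numpy as np

      def cubic_bezier(p0, p1, p2, p3, n=100):
          """Evaluate a cubic Bezier curve at n points; each p_i is an (x, y) array."""
          t = np.linspace(0.0, 1.0, n)[:, None]
          return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
                  + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)

      # Endpoints pinned at the front and back of the palate (arbitrary units);
      # the two inner control points are the free parameters that shape the vault.
      palate = cubic_bezier(np.array([0.0, 0.0]), np.array([0.2, 0.9]),
                            np.array([0.8, 0.9]), np.array([1.0, 0.0]))
      print(palate[:5])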
  • Järvikivi, J., Pyykkönen, P., & Niemi, J. (2009). Exploiting degrees of inflectional ambiguity: Stem form and the time course of morphological processing. Journal of Experimental Psychology: Learning, Memory, and Cognition, 35(1), 221-237. doi:10.1037/a0014355.

    Abstract

    The authors compared sublexical and supralexical approaches to morphological processing with unambiguous and ambiguous inflected words and words with ambiguous stems in 3 masked and unmasked priming experiments in Finnish. Experiment 1 showed equal facilitation for all prime types with a short 60-ms stimulus onset asynchrony (SOA) but significant facilitation for unambiguous words only with a long 300-ms SOA. Experiment 2 showed that all potential readings of ambiguous inflections were activated under a short SOA. Whereas the prime-target form overlap did not affect the results under a short SOA, it significantly modulated the results with a long SOA. Experiment 3 confirmed that the results from masked priming were modulated by the morphological structure of the words but not by the prime-target form overlap alone. The results support approaches in which early prelexical morphological processing is driven by morph-based segmentation and form is used to cue selection between 2 candidates only during later processing.

  • Jescheniak, J. D., Levelt, W. J. M., & Meyer, A. S. (2003). Specific word frequency is not all that counts in speech production: Comments on Caramazza, Costa, et al. (2001) and new experimental data. Journal of Experimental Psychology: Learning, Memory, & Cognition, 29(3), 432-438. doi:10.1037/0278-7393.29.3.432.

    Abstract

    A. Caramazza, A. Costa, M. Miozzo, and Y. Bi (2001) reported a series of experiments demonstrating that the ease of producing a word depends only on the frequency of that specific word but not on the frequency of a homophone twin. A. Caramazza, A. Costa, et al. concluded that homophones have separate word form representations and that the absence of frequency-inheritance effects for homophones undermines an important argument in support of 2-stage models of lexical access, which assume that syntactic (lemma) representations mediate between conceptual and phonological representations. The authors of this article evaluate the empirical basis of this conclusion, report 2 experiments demonstrating a frequency-inheritance effect, and discuss other recent evidence. It is concluded that homophones share a common word form and that the distinction between lemmas and word forms should be upheld.
  • Jesse, A., & Janse, E. (2009). Seeing a speaker's face helps stream segregation for younger and elderly adults [Abstract]. Journal of the Acoustical Society of America, 125(4), 2361.
  • Jesse, A., Vrignaud, N., Cohen, M. M., & Massaro, D. W. (2000). The processing of information from multiple sources in simultaneous interpreting. Interpreting, 5(2), 95-115. doi:10.1075/intp.5.2.04jes.

    Abstract

    Language processing is influenced by multiple sources of information. We examined whether the performance in simultaneous interpreting would be improved when providing two sources of information, the auditory speech as well as corresponding lip-movements, in comparison to presenting the auditory speech alone. Although there was an improvement in sentence recognition when presented with visible speech, there was no difference in performance between these two presentation conditions when bilinguals simultaneously interpreted from English to German or from English to Spanish. The reason why visual speech did not contribute to performance could be the presentation of the auditory signal without noise (Massaro, 1998). This hypothesis should be tested in the future. Furthermore, it should be investigated if an effect of visible speech can be found for other contexts, when visual information could provide cues for emotions, prosody, or syntax.
  • Jesse, A., & Janse, E. (2009). Visual speech information aids elderly adults in stream segregation. In B.-J. Theobald, & R. Harvey (Eds.), Proceedings of the International Conference on Auditory-Visual Speech Processing 2009 (pp. 22-27). Norwich, UK: School of Computing Sciences, University of East Anglia.

    Abstract

    Listening to a speaker while hearing another speaker talk is a challenging task for elderly listeners. We show that elderly listeners over the age of 65 with various degrees of age-related hearing loss benefit in this situation from also seeing the speaker they intend to listen to. In a phoneme monitoring task, listeners monitored the speech of a target speaker for either the phoneme /p/ or /k/ while simultaneously hearing a competing speaker. Critically, on some trials, the target speaker was also visible. Elderly listeners benefited in their response times and accuracy levels from seeing the target speaker when monitoring for the less visible /k/, but more so when monitoring for the highly visible /p/. Visual speech therefore aids elderly listeners not only by providing segmental information about the target phoneme, but also by providing more global information that allows for better performance in this adverse listening situation.
  • Johnson, E. K. (2003). Speaker intent influences infants' segmentation of potentially ambiguous utterances. In Proceedings of the 15th International Congress of Phonetic Sciences (ICPhS 2003) (pp. 1995-1998). Adelaide: Causal Productions.
  • Johnson, E. K., & Seidl, A. (2009). At 11 months, prosody still outranks statistics. Developmental Science, 12, 131-141. doi:10.1111/j.1467-7687.2008.00740.x.

    Abstract

    English-learning 7.5-month-olds are heavily biased to perceive stressed syllables as word onsets. By 11 months, however, infants begin segmenting non-initially stressed words from speech. Using the same artificial language methodology as Johnson and Jusczyk (2001), we explored the possibility that the emergence of this ability is linked to a decreased reliance on prosodic cues to word boundaries accompanied by an increased reliance on syllable distribution cues. In a baseline study, where only statistical cues to word boundaries were present, infants exhibited a familiarity preference for statistical words. When conflicting stress cues were added to the speech stream, infants exhibited a familiarity preference for stress as opposed to statistical words. This was interpreted as evidence that 11-month-olds weight stress cues to word boundaries more heavily than statistical cues. Experiment 2 further investigated these results with a language containing convergent cues to word boundaries. The results of Experiment 2 were not conclusive. A third experiment using new stimuli and a different experimental design supported the conclusion that 11-month-olds rely more heavily on prosodic than statistical cues to word boundaries. We conclude that the emergence of the ability to segment non-initially stressed words from speech is not likely to be tied to an increased reliance on syllable distribution cues relative to stress cues, but instead may emerge due to an increased reliance on and integration of a broad array of segmentation cues.
  • Johnson, E. K., Bruggeman, L., & Cutler, A. (2018). Abstraction and the (misnamed) language familiarity effect. Cognitive Science, 42, 633-645. doi:10.1111/cogs.12520.

    Abstract

    Talkers are recognized more accurately if they are speaking the listeners’ native language rather than an unfamiliar language. This “language familiarity effect” has been shown not to depend upon comprehension and must instead involve language sound patterns. We further examine the level of sound-pattern processing involved, by comparing talker recognition in foreign languages versus two varieties of English, by (a) English speakers of one variety, (b) English speakers of the other variety, and (c) non-native listeners (more familiar with one of the varieties). All listener groups performed better with native than foreign speech, but no effect of language variety appeared: Native listeners discriminated talkers equally well in each, with the native variety never outdoing the other variety, and non-native listeners discriminated talkers equally poorly in each, irrespective of the variety's familiarity. The results suggest that this talker recognition effect rests not on simple familiarity, but on an abstract level of phonological processing.
  • Johnson, E. K., Jusczyk, P. W., Cutler, A., & Norris, D. (2003). Lexical viability constraints on speech segmentation by infants. Cognitive Psychology, 46(1), 65-97. doi:10.1016/S0010-0285(02)00507-8.

    Abstract

    The Possible Word Constraint limits the number of lexical candidates considered in speech recognition by stipulating that input should be parsed into a string of lexically viable chunks. For instance, an isolated single consonant is not a feasible word candidate. Any segmentation containing such a chunk is disfavored. Five experiments using the head-turn preference procedure investigated whether, like adults, 12-month-olds observe this constraint in word recognition. In Experiments 1 and 2, infants were familiarized with target words (e.g., rush), then tested on lists of nonsense items containing these words in “possible” (e.g., “niprush” [nip + rush]) or “impossible” positions (e.g., “prush” [p + rush]). The infants listened significantly longer to targets in “possible” versus “impossible” contexts when targets occurred at the end of nonsense items (rush in “prush”), but not when they occurred at the beginning (tan in “tance”). In Experiments 3 and 4, 12-month-olds were similarly familiarized with target words, but test items were real words in sentential contexts (win in “wind” versus “window”). The infants listened significantly longer to words in the “possible” condition regardless of target location. Experiment 5 with targets at the beginning of isolated real words (e.g., win in “wind”) replicated Experiment 2 in showing no evidence of viability effects in beginning position. Taken together, the findings suggest that, in situations in which 12-month-olds are required to rely on their word segmentation abilities, they give evidence of observing lexical viability constraints in the way that they parse fluent speech.
  • Johnson, E. K., Jusczyk, P. W., Cutler, A., & Norris, D. (2000). The development of word recognition: The use of the possible-word constraint by 12-month-olds. In L. Gleitman, & A. Joshi (Eds.), Proceedings of CogSci 2000 (p. 1034). London: Erlbaum.
  • Jordan, F., Gray, R., Greenhill, S., & Mace, R. (2009). Matrilocal residence is ancestral in Austronesian societies. Proceedings of the Royal Society of London Series B-Biological Sciences, 276(1664), 1957-1964. doi:10.1098/rspb.2009.0088.

    Abstract

    The nature of social life in human prehistory is elusive, yet knowing how kinship systems evolve is critical for understanding population history and cultural diversity. Post-marital residence rules specify sex-specific dispersal and kin association, influencing the pattern of genetic markers across populations. Cultural phylogenetics allows us to practise 'virtual archaeology' on these aspects of social life that leave no trace in the archaeological record. Here we show that early Austronesian societies practised matrilocal post-marital residence. Using a Markov-chain Monte Carlo comparative method implemented in a Bayesian phylogenetic framework, we estimated the type of residence at each ancestral node in a sample of Austronesian language trees spanning 135 Pacific societies. Matrilocal residence has been hypothesized for proto-Oceanic society (ca 3500 BP), but we find strong evidence that matrilocality was predominant in earlier Austronesian societies ca 5000-4500 BP, at the root of the language family and its early branches. Our results illuminate the divergent patterns of mtDNA and Y-chromosome markers seen in the Pacific. The analysis of present-day cross-cultural data in this way allows us to directly address cultural evolutionary and life-history processes in prehistory.
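    The ancestral-state logic behind such an analysis can be illustrated with a toy sketch (assuming Python/SciPy; the tree, branch lengths, rates and uniform root prior below are invented for illustration, whereas the study itself samples over language trees and model parameters with Bayesian MCMC). The code computes the marginal posterior probability of a binary trait, e.g. matrilocal vs. non-matrilocal residence, at the root of a small fixed tree under a two-state continuous-time Markov model, using Felsenstein's pruning algorithm.

      import numpy as np
      from scipy.linalg import expm

      def transition(q01, q10, t):
          """Transition matrix P(t) of a two-state CTMC with rates q01 (0->1) and q10 (1->0)."""
          Q = np.array([[-q01, q01], [q10, -q10]])
          return expm(Q * t)

      def tip(state):
          """Partial likelihood vector for an observed tip state."""
          L = np.zeros(2)
          L[state] = 1.0
          return L

      def prune(children, q01=0.3, q10=0.3):
          """Felsenstein pruning: combine (partial likelihood, branch length) pairs of a node's children."""
          L = np.ones(2)
          for partial, t in children:
              L *= transition(q01, q10, t) @ partial
          return L

      # Toy tree: root -> A (branch 1.0); A -> tip1 (0.5), tip2 (0.5); root -> tip3 (1.5).
      # Observed tip states: 1, 1, 0 (e.g. 1 = matrilocal).
      L_A = prune([(tip(1), 0.5), (tip(1), 0.5)])
      L_root = prune([(L_A, 1.0), (tip(0), 1.5)])
      prior = np.array([0.5, 0.5])
      print(prior * L_root / (prior @ L_root))  # posterior state probabilities at the root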
  • Kalashnikova, M., Escudero, P., & Kidd, E. (2018). The development of fast-mapping and novel word retention strategies in monolingual and bilingual infants. Developmental Science, 21(6): e12674. doi:10.1111/desc.12674.

    Abstract

    The mutual exclusivity (ME) assumption is proposed to facilitate early word learning by guiding infants to map novel words to novel referents. This study assessed the emergence and use of ME to both disambiguate and retain the meanings of novel words across development in 18‐month‐old monolingual and bilingual children (Experiment 1; N = 58), and in a sub‐group of these children again at 24 months of age (Experiment 2: N = 32). Both monolinguals and bilinguals employed ME to select the referent of a novel label to a similar extent at 18 and 24 months. At 18 months, there were also no differences in novel word retention between the two language‐background groups. However, at 24 months, only monolinguals showed the ability to retain these label–object mappings. These findings indicate that the development of the ME assumption as a reliable word‐learning strategy is shaped by children's individual language exposure and experience with language use.

  • Kanero, J., Geçkin, V., Oranç, C., Mamus, E., Küntay, A. C., & Göksun, T. (2018). Social robots for early language learning: Current evidence and future directions. Child Development Perspectives, 12(3), 146-151. doi:10.1111/cdep.12277.

    Abstract

    In this article, we review research on child–robot interaction (CRI) to discuss how social robots can be used to scaffold language learning in young children. First we provide reasons why robots can be useful for teaching first and second languages to children. Then we review studies on CRI that used robots to help children learn vocabulary and produce language. The studies vary in first and second languages and demographics of the learners (typically developing children and children with hearing and communication impairments). We conclude that, although social robots are useful for teaching language to children, evidence suggests that robots are not as effective as human teachers. However, this conclusion is not definitive because robots that tutor students in language have not been evaluated rigorously and technology is advancing rapidly. We suggest that CRI offers an opportunity for research and list possible directions for that work.
  • Kanero, J., Franko, I., Oranç, C., Uluşahin, O., Koskulu, S., Adigüzel, Z., Küntay, A. C., & Göksun, T. (2018). Who can benefit from robots? Effects of individual differences in robot-assisted language learning. In Proceedings of the 8th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob) (pp. 212-217). Piscataway, NJ, USA: IEEE.

    Abstract

    It has been suggested that some individuals may benefit more from social robots than do others. Using second language (L2) as an example, the present study examined how individual differences in attitudes toward robots and personality traits may be related to learning outcomes. Preliminary results with 24 Turkish-speaking adults suggest that negative attitudes toward robots, more specifically thoughts and anxiety about the negative social impact that robots may have on society, predicted how well adults learned L2 words from a social robot. The possible implications of the findings as well as future directions are also discussed.
  • Kempen, G., & Harbusch, K. (2018). A competitive mechanism selecting verb-second versus verb-final word order in causative and argumentative clauses of spoken Dutch: A corpus-linguistic study. Language Sciences, 69, 30-42. doi:10.1016/j.langsci.2018.05.005.

    Abstract

    In Dutch and German, the canonical order of subject, object(s) and finite verb is ‘verb-second’ (V2) in main but ‘verb-final’ (VF) in subordinate clauses. This occasionally leads to the production of noncanonical word orders. Familiar examples are causative and argumentative clauses introduced by a subordinating conjunction (Du. omdat, Ger. weil ‘because’): the omdat/weil-V2 phenomenon. Such clauses may also be introduced by coordinating conjunctions (Du. want, Ger. denn), which license V2 exclusively. However, want/denn-VF structures are unknown. We present the results of a corpus study on the incidence of omdat-V2 in spoken Dutch, and compare them to published data on weil-V2 in spoken German. Basic findings: omdat-V2 is much less frequent than weil-V2 (ratio almost 1:8); and the frequency relations between coordinating and subordinating conjunctions are opposite (want >> omdat; denn << weil). We propose that conjunction selection and V2/VF selection proceed partly independently, and sometimes miscommunicate—e.g. yielding omdat/weil paired with V2. Want/denn-VF pairs do not occur because want/denn clauses are planned as autonomous sentences, which take V2 by default. We sketch a simple feedforward neural network with two layers of nodes (representing conjunctions and word orders, respectively) that can simulate the observed data pattern through inhibition-based competition of the alternative choices within the node layers.
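    The competitive mechanism sketched in the abstract can be illustrated with a toy network (assuming Python/NumPy; the weights, update rate and inhibition strength are invented, and this is an independent sketch rather than the authors' simulation): one layer of conjunction nodes and one layer of word-order nodes, with feedforward excitation between layers and inhibition-based competition within each layer.

      import numpy as np

      def settle(conj_input, w_ff, inhibition=0.3, steps=50, rate=0.1):
          """Let both layers settle; each node is inhibited by its within-layer competitor."""
          conj = conj_input.copy()             # activations of [want, omdat]
          order = np.zeros(w_ff.shape[1])      # activations of [V2, VF]
          for _ in range(steps):
              conj += rate * (conj_input - inhibition * conj[::-1] - conj)
              order += rate * (conj @ w_ff - inhibition * order[::-1] - order)
              conj, order = np.clip(conj, 0, 1), np.clip(order, 0, 1)
          return conj, order

      # Hypothetical feedforward weights: 'want' strongly supports V2, 'omdat' mostly VF.
      w_ff = np.array([[0.9, 0.1],             # want  -> [V2, VF]
                       [0.2, 0.8]])            # omdat -> [V2, VF]
      conj, order = settle(np.array([0.3, 0.7]), w_ff)  # input mostly favours 'omdat'
      print(conj, order)  # VF wins here, but the V2 node retains some residual activation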
  • Kempen, G., & Harbusch, K. (2003). A corpus study into word order variation in German subordinate clauses: Animacy affects linearization independently of function assignment. In Proceedings of AMLaP 2003 (pp. 153-154). Glasgow: Glasgow University.
  • Kempen, G. (2009). Clausal coordination and coordinative ellipsis in a model of the speaker. Linguistics, 47(3), 653-696. doi:10.1515/LING.2009.022.

    Abstract

    This article presents a psycholinguistically inspired approach to the syntax of clause-level coordination and coordinate ellipsis. It departs from the assumption that coordinations are structurally similar to so-called appropriateness repairs — an important type of self-repairs in spontaneous speech. Coordinate structures and appropriateness repairs can both be viewed as “update” constructions. Updating is defined as a special sentence production mode that efficiently revises or augments existing sentential structure in response to modifications in the speaker's communicative intention. This perspective is shown to offer an empirically satisfactory and theoretically parsimonious account of two prominent types of coordinate ellipsis, in particular “forward conjunction reduction” (FCR) and “gapping” (including “long-distance gapping” and “subgapping”). They are analyzed as different manifestations of “incremental updating” — efficient updating of only part of the existing sentential structure. Based on empirical data from Dutch and German, novel treatments are proposed for both types of clausal coordinate ellipsis. The coordination-as-updating perspective appears to explain some general properties of coordinate structure: the existence of the well-known “coordinate structure constraint”, and the attractiveness of three-dimensional representations of coordination. Moreover, two other forms of coordinate ellipsis — SGF (“subject gap in finite clauses with fronted verb”), and “backward conjunction reduction” (BCR) (also known as “right node raising” or RNR) — are shown to be incompatible with the notion of incremental updating. Alternative theoretical interpretations of these phenomena are proposed. The four types of clausal coordinate ellipsis — SGF, gapping, FCR and BCR — are argued to originate in four different stages of sentence production: Intending (i.e., preparing the communicative intention), conceptualization, grammatical encoding, and phonological encoding, respectively.
  • Kempen, G., Schotel, H., & Hoenkamp, E. (1982). Analyse-door-synthese van Nederlandse zinnen [Abstract]. De Psycholoog, 17, 509.
  • Kempen, G. (1971). [Review of the book General Psychology by N. Dember and J.J. Jenkins]. Nijmeegs Tijdschrift voor Psychologie, 19, 132-133.
  • Kempen, G. (2000). Could grammatical encoding and grammatical decoding be subserved by the same processing module? Behavioral and Brain Sciences, 23, 38-39.
  • Kempen, G., & Harbusch, K. (2003). An artificial opposition between grammaticality and frequency: Comment on Bornkessel, Schlesewsky & Friederici (2002). Cognition, 90(2), 205-210 [Rectification on p. 215]. doi:10.1016/S0010-0277(03)00145-8.

    Abstract

    In a recent Cognition paper (Cognition 85 (2002) B21), Bornkessel, Schlesewsky, and Friederici report ERP data that they claim “show that online processing difficulties induced by word order variations in German cannot be attributed to the relative infrequency of the constructions in question, but rather appear to reflect the application of grammatical principles during parsing” (p. B21). In this commentary we demonstrate that the posited contrast between grammatical principles and construction (in)frequency as sources of parsing problems is artificial because it is based on factually incorrect assumptions about the grammar of German and on inaccurate corpus frequency data concerning the German constructions involved.
  • Kempen, G., & Hoenkamp, E. (1982). Incremental sentence generation: Implications for the structure of a syntactic processor. In J. Horecký (Ed.), COLING 82. Proceedings of the Ninth International Conference on Computational Linguistics, Prague, July 5-10, 1982 (pp. 151-156). Amsterdam: North-Holland.

    Abstract

    Human speakers often produce sentences incrementally. They can start speaking having in mind only a fragmentary idea of what they want to say, and while saying this they refine the contents underlying subsequent parts of the utterance. This capability imposes a number of constraints on the design of a syntactic processor. This paper explores these constraints and evaluates some recent computational sentence generators from the perspective of incremental production.
  • Kempen, G. (1965). Leermachine en talenpracticum: Inleiding en literatuuroverzicht. Tijdschrift voor opvoedkunde, 11, 1-31.
  • Kempen, G. (1971). Het onthouden van eenvoudige zinnen met zijn en hebben als werkwoorden: Een experiment met steekwoordreaktietijden. Nijmeegs Tijdschrift voor Psychologie, 19, 262-274.
  • Kempen, G. (1971). Opslag van woordbetekenissen in het semantisch geheugen. Nijmeegs Tijdschrift voor Psychologie, 19, 36-50.
  • Kempen, G. (1985). Psychologie 2000. Toegepaste psychologie in de informatiemaatschappij. Computers in de psychologie, 13-21.
  • Kempen, G., & Huijbers, P. (1983). The lexicalization process in sentence production and naming: Indirect election of words. Cognition, 14(2), 185-209. doi:10.1016/0010-0277(83)90029-X.

    Abstract

    A series of experiments is reported in which subjects describe simple visual scenes by means of both sentential and non-sentential responses. The data support the following statements about the lexicalization (word finding) process. (1) Words used by speakers in overt naming or sentence production responses are selected by a sequence of two lexical retrieval processes, the first yielding abstract pre-phonological items (L1-items), the second one adding their phonological shapes (L2-items). (2) The selection of several L1-items for a multi-word utterance can take place simultaneously. (3) A monitoring process is watching the output of L1-lexicalization to check if it is in keeping with prevailing constraints upon utterance format. (4) Retrieval of the L2-item which corresponds with a given L1-item waits until the L1-item has been checked by the monitor, and all other L1-items needed for the utterance under construction have become available. A coherent picture of the lexicalization process begins to emerge when these characteristics are brought together with other empirical results in the area of naming and sentence production, e.g., picture naming reaction times (Seymour, 1979), speech errors (Garrett, 1980), and word order preferences (Bock, 1982).
  • Kempen, G. (1983). Wat betekent taalvaardigheid voor informatiesystemen? TNO project: Maandblad voor toegepaste wetenschappen, 11, 401-403.
  • Kemps-Snijders, M., Windhouwer, M., Wittenburg, P., & Wright, S. E. (2009). ISOcat: Remodeling metadata for language resources. International Journal of Metadata, Semantics and Ontologies (IJMSO), 4(4), 261-276. doi:10.1504/IJMSO.2009.029230.

    Abstract

    The Max Planck Institute for Psycholinguistics in Nijmegen, The Netherlands, is creating a state-of-the-art web environment for the ISO TC 37 (terminology and other language and content resources) metadata registry. This Data Category Registry (DCR) is called ISOcat and encompasses data categories for a broad range of language resources. Under the governance of the DCR Board, ISOcat provides an open work space for creating data category specifications, defining Data Category Selections (DCSs) (domain-specific groups of data categories), and standardising selected data categories and DCSs. Designers visualise future interactivity among the DCR, reference registries and ontological knowledge spaces.
  • Khetarpal, N., Majid, A., & Regier, T. (2009). Spatial terms reflect near-optimal spatial categories. In N. Taatgen, & H. Van Rijn (Eds.), Proceedings of the Thirty-First Annual Conference of the Cognitive Science Society (pp. 2396-2401). Austin, TX: Cognitive Science Society.

    Abstract

    Spatial terms in the world’s languages appear to reflect both universal conceptual tendencies and linguistic convention. A similarly mixed picture in the case of color naming has been accounted for in terms of near-optimal partitions of color space. Here, we demonstrate that this account generalizes to spatial terms. We show that the spatial terms of 9 diverse languages near-optimally partition a similarity space of spatial meanings, just as color terms near-optimally partition color space. This account accommodates both universal tendencies and cross-language differences in spatial category extension, and identifies general structuring principles that appear to operate across different semantic domains.
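    The "near-optimal partition" idea can be made concrete with a small sketch (assuming Python/NumPy; the one-dimensional toy meaning space and the similarity parameter c are invented, not the paper's similarity space of spatial meanings): a partition is well formed to the extent that similar meanings share a category and dissimilar meanings do not.

      import numpy as np

      def wellformedness(points, labels, c=0.5):
          """Sum of within-category similarities minus sum of across-category similarities."""
          diff = points[:, None] - points[None, :]
          sim = np.exp(-c * diff ** 2)
          same = labels[:, None] == labels[None, :]
          return sim[same].sum() - sim[~same].sum()

      points = np.linspace(0, 10, 20)            # toy 1-D meaning space
      contiguous = (points > 5).astype(int)      # a contiguous two-way partition
      interleaved = np.arange(20) % 2            # an interleaved partition of the same space
      print(wellformedness(points, contiguous), wellformedness(points, interleaved))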
  • Kidd, E., & Holler, J. (2009). Children’s use of gesture to resolve lexical ambiguity. Developmental Science, 12, 903-913.
  • Kidd, E. (2009). [Review of the book Constructions at work: The nature of generalization in language by Adele E. Goldberg]. Cognitive Linguistics, 20(2), 425-434. doi:10.1515/COGL.2009.020.
  • Kidd, E. (2009). [Review of the book Developmental psycholinguistics; On-line methods in children's language processing ed. by Irina A. Sekerina, Eva M. Hernandez and Harald Clahsen]. Journal of Child Language, 36(2), 471-475. doi:10.1017/S030500090800901X.
  • Kidd, E., Junge, C., Spokes, T., Morrison, L., & Cutler, A. (2018). Individual differences in infant speech segmentation: Achieving the lexical shift. Infancy, 23(6), 770-794. doi:10.1111/infa.12256.

    Abstract

    We report a large‐scale electrophysiological study of infant speech segmentation, in which over 100 English‐acquiring 9‐month‐olds were exposed to unfamiliar bisyllabic words embedded in sentences (e.g., He saw a wild eagle up there), after which their brain responses to either the just‐familiarized word (eagle) or a control word (coral) were recorded. When initial exposure occurs in continuous speech, as here, past studies have reported that even somewhat older infants do not reliably recognize target words, but that successful segmentation varies across children. Here, we both confirm and further uncover the nature of this variation. The segmentation response systematically varied across individuals and was related to their vocabulary development. About one‐third of the group showed a left‐frontally located relative negativity in response to familiar versus control targets, which has previously been described as a mature response. Another third showed a similarly located positive‐going reaction (a previously described immature response), and the remaining third formed an intermediate grouping that was primarily characterized by an initial response delay. A fine‐grained group‐level analysis suggested that a developmental shift to a lexical mode of processing occurs toward the end of the first year, with variation across individual infants in the exact timing of this shift.

    Additional information

    supporting information
  • Kidd, E., Donnelly, S., & Christiansen, M. H. (2018). Individual differences in language acquisition and processing. Trends in Cognitive Sciences, 22(2), 154-169. doi:10.1016/j.tics.2017.11.006.

    Abstract

    Humans differ in innumerable ways, with considerable variation observable at every level of description, from the molecular to the social. Traditionally, linguistic and psycholinguistic theory has downplayed the possibility of meaningful differences in language across individuals. However, it is becoming increasingly evident that there is significant variation among speakers at any age as well as across the lifespan. In this paper, we review recent research in psycholinguistics, and argue that a focus on individual differences provides a crucial source of evidence that bears strongly upon core issues in theories of the acquisition and processing of language; specifically, the role of experience in language acquisition, processing, and attainment, and the architecture of the language faculty.
  • Kidd, E. (2003). Relative clause comprehension revisited: Commentary on Eisenberg (2002). Journal of Child Language, 30(3), 671-679. doi:10.1017/S0305000903005683.

    Abstract

    Eisenberg (2002) presents data from an experiment investigating three- and four-year-old children's comprehension of restrictive relative clauses (RC). From the results she argues, contrary to Hamburger & Crain (1982), that children do not have discourse knowledge of the felicity conditions of RCs before acquiring the syntax of relativization. This note evaluates this conclusion on the basis of the methodology used, and proposes that an account of syntactic development needs to be sensitive to the real-time processing requirements acquisition places on the learner.
  • Kita, S., & Ozyurek, A. (2003). What does cross-linguistic variation in semantic coordination of speech and gesture reveal? Evidence for an interface representation of spatial thinking and speaking. Journal of Memory and Language, 48(1), 16-32. doi:10.1016/S0749-596X(02)00505-3.

    Abstract

    Gestures that spontaneously accompany speech convey information coordinated with the concurrent speech. There has been considerable theoretical disagreement about the process by which this informational coordination is achieved. Some theories predict that the information encoded in gesture is not influenced by how information is verbally expressed. However, others predict that gestures encode only what is encoded in speech. This paper investigates this issue by comparing informational coordination between speech and gesture across different languages. Narratives in Turkish, Japanese, and English were elicited using an animated cartoon as the stimulus. It was found that gestures used to express the same motion events were influenced simultaneously by (1) how features of motion events were expressed in each language, and (2) spatial information in the stimulus that was never verbalized. From this, it is concluded that gestures are generated from spatio-motoric processes that interact on-line with the speech production process. Through the interaction, spatio-motoric information to be expressed is packaged into chunks that are verbalizable within a processing unit for speech formulation. In addition, we propose a model of speech and gesture production as one of a class of frameworks that are compatible with the data.
  • Kiyama, S., Verdonschot, R. G., Xiong, K., & Tamaoka, K. (2018). Individual mentalizing ability boosts flexibility toward a linguistic marker of social distance: An ERP investigation. Journal of Neurolinguistics, 47, 1-15. doi:10.1016/j.jneuroling.2018.01.005.

    Abstract

    Sentence-final particles (SFPs) as bound morphemes in Japanese have no obvious effect on the truth conditions of a sentence. However, they encompass a diverse range of usages, from typical to atypical, according to the context and the interpersonal relationships in the specific situation. The most frequent particle, -ne, is typically used after addressee-oriented propositions for information sharing, while another frequent particle, -yo, is typically used after addresser-oriented propositions to elicit a sense of strength. This study sheds light on individual differences among native speakers in flexibly understanding such linguistic markers based on their mentalizing ability (i.e., the ability to infer the mental states of others). Two experiments employing electroencephalography (EEG) consistently showed enhanced early posterior negativities (EPN) for atypical SFP usage compared to typical usage, especially when understanding -ne compared to -yo, in both an SFP appropriateness judgment task and a content comprehension task. Importantly, the amplitude of the EPN for atypical usages of -ne was significantly higher in participants with lower mentalizing ability than in those with a higher mentalizing ability. This effect plausibly reflects low-ability mentalizers' stronger sense of strangeness toward atypical -ne usage. While high-ability mentalizers may aptly perceive others' attitudes via their various usages of -ne, low-ability mentalizers seem to adopt a more stereotypical understanding. These results attest to the greater degree of difficulty low-ability mentalizers have in establishing a smooth regulation of interpersonal distance during social encounters.

    Additional information

    stimuli dialog sets
  • Klein, W. (2003). Wozu braucht man eigentlich Flexionsmorphologie? Zeitschrift für Literaturwissenschaft und Linguistik, 131, 23-54.
  • Klein, W. (2000). Changing concepts of the nature-nurture debate. In R. Hide, J. Mittelstrass, & W. Singer (Eds.), Changing concepts of nature at the turn of the millennium: Proceedings plenary session of the Pontifical academy of sciences, 26-29 October 1998 (pp. 289-299). Vatican City: Pontificia Academia Scientiarum.
  • Klein, W., & Rieck, B.-O. (1982). Der Erwerb der Personalpronomina im ungesteuerten Spracherwerb. Zeitschrift für Literaturwissenschaft und Linguistik, 45, 35-71.
  • Klein, W. (1982). Einige Bemerkungen zur Frageintonation. Deutsche Sprache, 4, 289-310.

    Abstract

    In the first, critical part of this study, a small sample of simple German sentences with their empirically determined pitch contours is used to demonstrate the incorrectness of numerous currently held views of German sentence intonation. In the second, more constructive part, several interrogative sentence types are analysed and an attempt is made to show that intonation, besides other functions, indicates the permanently changing 'thematic score' in on-going discourse as well as certain validity claims.
  • Klein, W. (1985). Einleitung. Zeitschrift für Literaturwissenschaft und Linguistik, 15(59), 7-8.
  • Klein, W. (1982). Einleitung. Zeitschrift für Literaturwissenschaft und Linguistik, 12, 7-8.
  • Klein, W. (2000). An analysis of the German perfekt. Language, 76, 358-382.

    Abstract

    The German Perfekt has two quite different temporal readings, as illustrated by the two possible continuations of the sentence Peter hat gearbeitet in (i) and (ii), respectively: (i) Peter hat gearbeitet und ist müde. Peter has worked and is tired. (ii) Peter hat gearbeitet und wollte nicht gestört werden. Peter has worked and wanted not to be disturbed. The first reading essentially corresponds to the English present perfect; the second can take a temporal adverbial with past time reference ('yesterday at five', 'when the phone rang', and so on), and an English translation would require a past tense ('Peter worked/was working'). This article shows that the Perfekt has a uniform temporal meaning that results systematically from the interaction of its three components (finiteness marking, auxiliary and past participle) and that the two readings are the consequence of a structural ambiguity. This analysis also predicts the properties of other participle constructions, in particular the passive in German.
  • Klein, W., Li, P., & Hendriks, H. (2000). Aspect and assertion in Mandarin Chinese. Natural Language & Linguistic Theory, 18, 723-770. doi:10.1023/A:1006411825993.

    Abstract

    Chinese has a number of particles such as le, guo, zai and zhe that add a particular aspectual value to the verb to which they are attached. There have been many characterisations of this value in the literature. In this paper, we review several existing influential accounts of these particles, including those in Li and Thompson (1981), Smith (1991), and Mangione and Li (1993). We argue that all these characterisations are intuitively plausible, but none of them is precise. We propose that these particles serve to mark which part of the sentence's descriptive content is asserted, and that their aspectual value is a consequence of this function. We provide a simple and precise definition of the meanings of le, guo, zai and zhe in terms of the relationship between topic time and time of situation, and show the consequences of their interaction with different verb expressions within this new framework of interpretation.
  • Klein, W. (1971). Eine kommentierte Bibliographie zur Computerlinguistik. Linguistische Berichte, (11), 101-134.
  • Klein, W., & Franceschini, R. (Eds.). (2003). Einfache Sprache [Special Issue]. Zeitschrift für Literaturwissenschaft und Linguistik, 131.
  • Klein, W. (2000). Fatale Traditionen. Zeitschrift für Literaturwissenschaft und Linguistik, (120), 11-40.
  • Klein, W. (1985). Gesprochene Sprache - geschriebene Sprache. Zeitschrift für Literaturwissenschaft und Linguistik, 59, 9-35.
  • Klein, W. (Ed.). (1983). Intonation [Special Issue]. Zeitschrift für Literaturwissenschaft und Linguistik, (49).
  • Klein, W. (1982). Pronoms personnels et formes d'acquisition. Encrages, 8/9, 42-46.
  • Klein, W. (Ed.). (2000). Sprache des Rechts [Special Issue]. Zeitschrift für Literaturwissenschaft und Linguistik, (118).
  • Klein, W., & Berliner Arbeitsgruppe (2000). Sprache des Rechts: Vermitteln, Verstehen, Verwechseln. Zeitschrift für Literaturwissenschaft und Linguistik, (118), 7-33.
  • Klein, W. (Ed.). (1985). Schriftlichkeit [Special Issue]. Zeitschrift für Literaturwissenschaft und Linguistik, (59).
  • Klein, W. (2000). Was uns die Sprache des Rechts über die Sprache sagt. Zeitschrift für Literaturwissenschaft und Linguistik, (118), 115-149.
  • Klein, W., & Dimroth, C. (Eds.). (2009). Worauf kann sich der Sprachunterricht stützen? [Special Issue]. Zeitschrift für Literaturwissenschaft und Linguistik, 153.
  • Klein, W. (Ed.). (1982). Zweitspracherwerb [Special Issue]. Zeitschrift für Literaturwissenschaft und Linguistik, (45).
  • Klein, W. (1983). Vom Glück des Mißverstehens und der Trostlosigkeit der idealen Kommunikationsgemeinschaft. Zeitschrift für Literaturwissenschaft und Linguistik, 50, 128-140.
  • Kochari, A. R., & Ostarek, M. (2018). Introducing a replication-first rule for PhD projects (commentary on Zwaan et al., ‘Making replication mainstream’). Behavioral and Brain Sciences, 41: e138. doi:10.1017/S0140525X18000730.

    Abstract

    Zwaan et al. mention that young researchers should conduct replications as a small part of their portfolio. We extend this proposal and suggest that conducting and reporting replications should become an integral part of PhD projects and be taken into account in their assessment. We discuss how this would help not only scientific advancement, but also PhD candidates’ careers.
  • Koenig, A., Ringersma, J., & Trilsbeek, P. (2009). The Language Archiving Technology domain. In Z. Vetulani (Ed.), Human Language Technologies as a Challenge for Computer Science and Linguistics (pp. 295-299).

    Abstract

    The Max Planck Institute for Psycholinguistics (MPI) manages an archive of linguistic research data with a current size of almost 20 Terabytes. Apart from in-house researchers other projects also store their data in the archive, most notably the Documentation of Endangered Languages (DoBeS) projects. The archive is available online and can be accessed by anybody with Internet access. To be able to manage this large amount of data the MPI's technical group has developed a software suite called Language Archiving Technology (LAT) that on the one hand helps researchers and archive managers to manage the data and on the other hand helps users in enriching their primary data with additional layers. All the MPI software is Java-based and developed according to open source principles (GNU, 2007). All three major operating systems (Windows, Linux, MacOS) are supported and the software works similarly on all of them. As the archive is online, many of the tools, especially the ones for accessing the data, are browser based. Some of these browser-based tools make use of Adobe Flex to create nice-looking GUIs. The LAT suite is a complete set of management and enrichment tools, and given the interaction between the tools the result is a complete LAT software domain. Over the last 10 years, this domain has proven its functionality and use, and is being deployed to servers in other institutions. This deployment is an important step in getting the archived resources back to the members of the speech communities whose languages are documented. In the paper we give an overview of the tools of the LAT suite and we describe their functionality and role in the integrated process of archiving, management and enrichment of linguistic data.
  • Kolipakam, V., Jordan, F., Dunn, M., Greenhill, S. J., Bouckaert, R., Gray, R. D., & Verkerk, A. (2018). A Bayesian phylogenetic study of the Dravidian language family. Royal Society Open Science, 5: 171504. doi:10.1098/rsos.171504.

    Abstract

    The Dravidian language family consists of about 80 varieties (Hammarström H. 2016 Glottolog 2.7) spoken by 220 million people across southern and central India and surrounding countries (Steever SB. 1998 In The Dravidian languages (ed. SB Steever), pp. 1–39: 1). Neither the geographical origin of the Dravidian language homeland nor its exact dispersal through time are known. The history of these languages is crucial for understanding prehistory in Eurasia, because despite their current restricted range, these languages played a significant role in influencing other language groups including Indo-Aryan (Indo-European) and Munda (Austroasiatic) speakers. Here, we report the results of a Bayesian phylogenetic analysis of cognate-coded lexical data, elicited first hand from native speakers, to investigate the subgrouping of the Dravidian language family, and provide dates for the major points of diversification. Our results indicate that the Dravidian language family is approximately 4500 years old, a finding that corresponds well with earlier linguistic and archaeological studies. The main branches of the Dravidian language family (North, Central, South I, South II) are recovered, although the placement of languages within these main branches diverges from previous classifications. We find considerable uncertainty with regard to the relationships between the main branches.
  • Kong, X., Mathias, S. R., Guadalupe, T., ENIGMA Laterality Working Group, Glahn, D. C., Franke, B., Crivello, F., Tzourio-Mazoyer, N., Fisher, S. E., Thompson, P. M., & Francks, C. (2018). Mapping cortical brain asymmetry in 17,141 healthy individuals worldwide via the ENIGMA Consortium. Proceedings of the National Academy of Sciences of the United States of America, 115(22), E5154-E5163. doi:10.1073/pnas.1718418115.

    Abstract

    Hemispheric asymmetry is a cardinal feature of human brain organization. Altered brain asymmetry has also been linked to some cognitive and neuropsychiatric disorders. Here the ENIGMA consortium presents the largest ever analysis of cerebral cortical asymmetry and its variability across individuals. Cortical thickness and surface area were assessed in MRI scans of 17,141 healthy individuals from 99 datasets worldwide. Results revealed widespread asymmetries at both hemispheric and regional levels, with a generally thicker cortex but smaller surface area in the left hemisphere relative to the right. Regionally, asymmetries of cortical thickness and/or surface area were found in the inferior frontal gyrus, transverse temporal gyrus, parahippocampal gyrus, and entorhinal cortex. These regions are involved in lateralized functions, including language and visuospatial processing. In addition to population-level asymmetries, variability in brain asymmetry was related to sex, age, and intracranial volume. Interestingly, we did not find significant associations between asymmetries and handedness. Finally, with two independent pedigree datasets (N = 1,443 and 1,113, respectively), we found several asymmetries showing significant, replicable heritability. The structural asymmetries identified, and their variabilities and heritability provide a reference resource for future studies on the genetic basis of brain asymmetry and altered laterality in cognitive, neurological, and psychiatric disorders.

    Additional information

    pnas.1718418115.sapp.pdf
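    The asymmetry measure underlying such analyses can be sketched briefly (assuming Python/NumPy; the index AI = (L - R) / ((L + R) / 2) is a commonly used formulation and the thickness values below are hypothetical, whereas the study itself pools 99 datasets with considerably more elaborate modelling).

      import numpy as np

      def asymmetry_index(left, right):
          """Per-region asymmetry index: positive values = leftward, negative = rightward."""
          return (left - right) / ((left + right) / 2.0)

      # Hypothetical cortical thickness values (mm) for three regions of one subject.
      left = np.array([2.71, 2.95, 3.10])
      right = np.array([2.64, 3.02, 3.05])
      print(asymmetry_index(left, right))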
  • Hu, C.-P., Kong, X., Wagenmakers, E.-J., Ly, A., & Peng, K. (2018). The Bayes factor and its implementation in JASP: A practical primer. Advances in Psychological Science, 26(6), 951-965. doi:10.3724/SP.J.1042.2018.00951.

    Abstract

    Statistical inference plays a critical role in modern scientific research; however, the dominant method for statistical inference in science, null hypothesis significance testing (NHST), is often misunderstood and misused, which leads to unreproducible findings. To address this issue, researchers propose to adopt the Bayes factor as an alternative to NHST. The Bayes factor is a principled Bayesian tool for model selection and hypothesis testing, and can be interpreted as the strength of evidence that the current data provide for the null hypothesis H0 relative to the alternative hypothesis H1. Compared to NHST, the Bayes factor has the following advantages: it quantifies the evidence that the data provide for both the H0 and the H1, it is not “violently biased” against H0, it allows one to monitor the evidence as the data accumulate, and it does not depend on sampling plans. Importantly, the recently developed open software JASP makes the calculation of Bayes factor accessible for most researchers in psychology, as we demonstrated for the t-test. Given these advantages, adopting the Bayes factor will improve psychological researchers’ statistical inferences. Nevertheless, to make the analysis more reproducible, researchers should keep their data analysis transparent and open.
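    The quantity discussed here can be made concrete with a small numerical sketch (assuming Python/SciPy): the default (JZS) Bayes factor for a one-sample t-test following Rouder et al. (2009), which is the kind of Bayes factor JASP reports for its Bayesian t-tests. This is an independent re-implementation for illustration, not JASP's own code.

      import numpy as np
      from scipy import integrate

      def jzs_bf10(t, n, r=np.sqrt(2) / 2):
          """BF10 for a one-sample t-test with a Cauchy(0, r) prior on effect size under H1."""
          nu = n - 1
          # Marginal likelihood under H0 (zero effect), up to a constant shared with H1
          m0 = (1 + t ** 2 / nu) ** (-(nu + 1) / 2)
          # Under H1, integrate over g, where g has an InverseGamma(1/2, r^2/2) prior
          # (equivalent to a Cauchy(0, r) prior on the standardized effect size)
          def integrand(g):
              return ((1 + n * g) ** -0.5
                      * (1 + t ** 2 / ((1 + n * g) * nu)) ** (-(nu + 1) / 2)
                      * r / np.sqrt(2 * np.pi) * g ** -1.5 * np.exp(-r ** 2 / (2 * g)))
          m1, _ = integrate.quad(integrand, 0, np.inf)
          return m1 / m0

      print(jzs_bf10(t=2.5, n=30))  # BF10 > 1 means the data favour H1 over H0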
  • Konopka, A. E., & Bock, K. (2009). Lexical or syntactic control of sentence formulation? Structural generalizations from idiom production. Cognitive Psychology, 58, 68-101. doi:10.1016/j.cogpsych.2008.05.002.

    Abstract

    To compare abstract structural and lexicalist accounts of syntactic processes in sentence formulation, we examined the effectiveness of nonidiomatic and idiomatic phrasal verbs in inducing structural generalizations. Three experiments made use of a syntactic priming paradigm in which participants recalled sentences they had read in rapid serial visual presentation. Prime and target sentences contained phrasal verbs with particles directly following the verb (pull off a sweatshirt) or following the direct object (pull a sweatshirt off). Idiomatic primes used verbs whose figurative meaning cannot be straightforwardly derived from the literal meaning of the main verb (e.g., pull off a robbery) and are commonly treated as stored lexical units. Particle placement in sentences was primed by both nonidiomatic and idiomatic verbs. Experiment 1 showed that the syntax of idiomatic and nonidiomatic phrasal verbs is amenable to priming, and Experiments 2 and 3 compared the priming patterns created by idiomatic and nonidiomatic primes. Despite differences in idiomaticity and structural flexibility, both types of phrasal verbs induced structural generalizations and differed little in their ability to do so. The findings are interpreted in terms of the role of abstract structural processes in language production.
  • Konopka, A. E., & Benjamin, A. (2009). Schematic knowledge changes what judgments of learning predict in a source memory task. Memory & Cognition, 37(1), 42-51. doi:10.3758/MC.37.1.42.

    Abstract

    Source monitoring can be influenced by information that is external to the study context, such as beliefs and general knowledge (Johnson, Hashtroudi, & Lindsay, 1993). We investigated the extent to which metamnemonic judgments predict memory for items and sources when schematic information about the sources is or is not provided at encoding. Participants made judgments of learning (JOLs) to statements presented by two speakers and were informed of the occupation of each speaker either before or after the encoding session. Replicating earlier work, prior knowledge decreased participants' tendency to erroneously attribute statements to schematically consistent but episodically incorrect speakers. The origin of this effect can be understood by examining the relationship between JOLs and performance: JOLs were equally predictive of item and source memory in the absence of prior knowledge, but were exclusively predictive of source memory when participants knew of the relationship between speakers and statements during study. Background knowledge determines the information that people solicit in service of metamnemonic judgments, suggesting that these judgments reflect control processes during encoding that reduce schematic errors.
  • Konopka, A., Meyer, A. S., & Forest, T. A. (2018). Planning to speak in L1 and L2. Cognitive Psychology, 102, 72-104. doi:10.1016/j.cogpsych.2017.12.003.

    Abstract

    The leading theories of sentence planning – Hierarchical Incrementality and Linear Incrementality – differ in their assumptions about the coordination of processes that map preverbal information onto language. Previous studies showed that, in native (L1) speakers, this coordination can vary with the ease of executing the message-level and sentence-level processes necessary to plan and produce an utterance. We report the first series of experiments to systematically examine how linguistic experience influences sentence planning in native (L1) speakers (i.e., speakers with life-long experience using the target language) and non-native (L2) speakers (i.e., speakers with less experience using the target language). In all experiments, speakers spontaneously generated one-sentence descriptions of simple events in Dutch (L1) and English (L2). Analyses of eye-movements across early and late time windows (pre- and post-400 ms) compared the extent of early message-level encoding and the onset of linguistic encoding. In Experiment 1, speakers were more likely to engage in extensive message-level encoding and to delay sentence-level encoding when using their L2. Experiments 2–4 selectively facilitated encoding of the preverbal message, encoding of the agent character (i.e., the first content word in active sentences), and encoding of the sentence verb (i.e., the second content word in active sentences) respectively. Experiment 2 showed that there is no delay in the onset of L2 linguistic encoding when speakers are familiar with the events. Experiments 3 and 4 showed that the delay in the onset of L2 linguistic encoding is not due to speakers delaying encoding of the agent, but due to a preference to encode information needed to select a suitable verb early in the formulation process. Overall, speakers prefer to temporally separate message-level from sentence-level encoding and to prioritize encoding of relational information when planning L2 sentences, consistent with Hierarchical Incrementality.
  • Kooijman, V., Hagoort, P., & Cutler, A. (2009). Prosodic structure in early word segmentation: ERP evidence from Dutch ten-month-olds. Infancy, 14, 591-612. doi:10.1080/15250000903263957.

    Abstract

    Recognizing word boundaries in continuous speech requires detailed knowledge of the native language. In the first year of life, infants acquire considerable word segmentation abilities. Infants at this early stage in word segmentation rely to a large extent on the metrical pattern of their native language, at least in stress-based languages. In Dutch and English (both languages with a preferred trochaic stress pattern), segmentation of strong-weak words develops rapidly between 7 and 10 months of age. Nevertheless, trochaic languages contain not only strong-weak words but also words with a weak-strong stress pattern. In this article, we present electrophysiological evidence of the beginnings of weak-strong word segmentation in Dutch 10-month-olds. At this age, the ability to combine different cues for efficient word segmentation does not yet seem to be completely developed. We provide evidence that Dutch infants still largely rely on strong syllables, even for the segmentation of weak-strong words.
  • Kopecka, A. (2009). L'expression du déplacement en Français: L'interaction des facteurs sémantiques, aspectuels et pragmatiques dans la construction du sens spatial. Langages, 173, 54-75.

    Abstract

    The paper investigates the use of manner verbs (e.g. marcher 'to walk', courir 'to run') with so-called locative prepositions (e.g. dans 'in', sous 'under') in descriptions of motion in French, as in Il a couru dans le bureau 'He ran in(to) the office', to explore the type of events such constructions express and the factors that influence their interpretation. Based on an extensive corpus survey, the study shows that, contrary to the general claim that such constructions typically express motion within a location, they are also frequently used to express a change of location. The study discusses the interplay of various factors that contribute to the interpretation of these constructions, including semantic, aspectual and pragmatic factors.
  • Kösem, A., Bosker, H. R., Takashima, A., Meyer, A. S., Jensen, O., & Hagoort, P. (2018). Neural entrainment determines the words we hear. Current Biology, 28, 2867-2875. doi:10.1016/j.cub.2018.07.023.

    Abstract

    Low-frequency neural entrainment to rhythmic input has been hypothesized as a canonical mechanism that shapes sensory perception in time. Neural entrainment is deemed particularly relevant for speech analysis, as it would contribute to the extraction of discrete linguistic elements from continuous acoustic signals. However, its causal influence in speech perception has been difficult to establish. Here, we provide evidence that oscillations build temporal predictions about the duration of speech tokens that affect perception. Using magnetoencephalography (MEG), we studied neural dynamics during listening to sentences that changed in speech rate. We observed neural entrainment to preceding speech rhythms persisting for several cycles after the change in rate. The sustained entrainment was associated with changes in the perceived duration of the last word’s vowel, resulting in the perception of words with different meanings. These findings support oscillatory models of speech processing, suggesting that neural oscillations actively shape speech perception.
  • Koten Jr., J. W., Wood, G., Hagoort, P., Goebel, R., Propping, P., Willmes, K., & Boomsma, D. I. (2009). Genetic contribution to variation in cognitive function: An fMRI study in twins. Science, 323(5922), 1737-1740. doi:10.1126/science.1167371.

    Abstract

    Little is known about the genetic contribution to individual differences in the neural networks subserving cognitive function. In this functional magnetic resonance imaging (fMRI) twin study, we found a significant genetic influence on brain activation in neural networks supporting digit working memory tasks. Participants activating frontal-parietal networks responded faster than individuals relying more on language-related brain networks. There were genetic influences on brain activation in language-relevant brain circuits that were atypical for numerical working memory tasks as such. This suggests that differences in cognition might be related to brain activation patterns that differ qualitatively among individuals.
  • Kotz, S. A., Ravignani, A., & Fitch, W. T. (2018). The evolution of rhythm processing. Trends in Cognitive Sciences, 22(10), 896-910. doi:10.1016/j.tics.2018.08.002.
  • Kouwenhoven, H., Van Mulken, M., & Ernestus, M. (2018). Communication strategy use by Spanish speakers of English in formal and informal speech. International Journal of Bilingualism, 22(3), 285-305. doi:10.1177/1367006916672946.

    Abstract

    Research questions:

    Are emergent bilinguals sensitive to register variation in their use of communication strategies? What strategies do LX speakers, in this case Spanish speakers of English, use as a function of situational context? What role do individual differences play?
    Methodology:

    This within-speaker study compares Spanish second-language English speakers’ communication strategy use in an informal, peer-to-peer conversation and a formal interview.
    Data and analysis:

    The 15 hours of informal and 9.5 hours of formal speech from the Nijmegen Corpus of Spanish English were coded for 19 different communication strategies.
    Findings/conclusions:

    Overall, speakers prefer self-reliant strategies, which allow them to continue communication without their interlocutor’s help. Of the self-reliant strategies, least-effort strategies such as code-switching are used more often in informal speech, whereas relatively more effortful strategies (e.g. reformulations) are used more in formal speech, where the need to be unambiguously understood is felt as more important. Individual differences played a role: some speakers were more affected by a change in formality than others.
    Originality:

    Sensitivity to register variation has not previously been studied in research on communication strategy use.
    Implications:

    General principles of communication govern speakers’ strategy selection, notably the protection of positive face and the least effort and cooperative principles.

  • Kouwenhoven, H., Ernestus, M., & Van Mulken, M. (2018). Register variation by Spanish users of English. The Nijmegen Corpus of Spanish English. Corpus Linguistics and Linguistic Theory, 14(1), 35-63. doi:10.1515/cllt-2013-0054.

    Abstract

    English serves as a lingua franca in situations with varying degrees of formality. How formality affects non-native speech has rarely been studied. We investigated register variation by Spanish users of English by comparing formal and informal speech from the Nijmegen Corpus of Spanish English that we created. This corpus comprises speech from thirty-four Spanish speakers of English in interaction with Dutch confederates in two speech situations. Formality affected the amount of laughter and overlapping speech and the number of Spanish words. Moreover, formal speech had a more informational character than informal speech. We discuss how our findings relate to register variation in Spanish.

  • De Kovel, C. G. F., Lisgo, S. N., Fisher, S. E., & Francks, C. (2018). Subtle left-right asymmetry of gene expression profiles in embryonic and foetal human brains. Scientific Reports, 8: 12606. doi:10.1038/s41598-018-29496-2.

    Abstract

    Left-right laterality is an important aspect of human (and in fact all vertebrate) brain organization for which the genetic basis is poorly understood. Using RNA sequencing data, we contrasted gene expression in left- and right-sided samples from several structures of the anterior central nervous systems of post mortem human embryos and foetuses. While few individual genes stood out as significantly lateralized, most structures showed evidence of laterality of their overall transcriptomic profiles. These left-right differences showed overlap with age-dependent changes in expression, indicating lateralized maturation rates, but not consistently in left-right orientation across all structures. Brain asymmetry may therefore originate in multiple locations, or, if there is a single origin, it is earlier than 5 weeks post conception, with structure-specific lateralized processes already underway by this age. This pattern is broadly consistent with the weak correlations reported between various aspects of adult brain laterality, such as language dominance and handedness.
  • De Kovel, C. G. F., Lisgo, S. N., & Francks, C. (2018). Transcriptomic analysis of left-right differences in human embryonic forebrain and midbrain. Scientific Data, 5: 180164. doi:10.1038/sdata.2018.164.

    Abstract

    Left-right asymmetry is subtle but pervasive in the human central nervous system. This asymmetry is initiated early during development, but its mechanisms are poorly known. Forebrains and midbrains were dissected from six human embryos at Carnegie stages 15 or 16, one of which was female. The structures were divided into left and right sides, and RNA was isolated. RNA was sequenced with 100 base-pair paired ends using Illumina Hiseq 4000. After quality control, five paired brain sides were available for midbrain and forebrain. A paired analysis between left and right sides of a given brain structure across the embryos identified left-right differences. The dataset, consisting of Fastq files and a read count table, can be further used to study early development of the human brain.
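
    For readers unfamiliar with this kind of paired left-right design, the following minimal Python sketch shows one way a paired comparison could be run on read counts for a single gene. It is an illustration only, using invented counts, and is not the published analysis pipeline, which would typically rely on dedicated RNA-seq tools.

        import numpy as np
        from scipy import stats

        # Illustration only: hypothetical left/right read counts for one gene
        # across five embryos (one value per embryo), not data from the study.
        left_counts = np.array([120, 95, 210, 150, 80])
        right_counts = np.array([100, 90, 230, 140, 70])

        # Log-transform (the +1 avoids log of zero) and run a paired t-test,
        # pairing the left and right samples that come from the same embryo.
        log_left = np.log2(left_counts + 1)
        log_right = np.log2(right_counts + 1)
        t_stat, p_value = stats.ttest_rel(log_left, log_right)
        print(f"paired t = {t_stat:.2f}, p = {p_value:.3f}")

    In practice, such a test would be applied gene by gene across the read count table and corrected for multiple comparisons.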
  • Kreuzer, H. (Ed.). (1971). Methodische Perspektiven [Special Issue]. Zeitschrift für Literaturwissenschaft und Linguistik, (1/2).
  • Kuerbitz, J., Arnett, M., Ehrman, S., Williams, M. T., Voorhees, C. V., Fisher, S. E., Garratt, A. N., Muglia, L. J., Waclaw, R. R., & Campbell, K. (2018). Loss of intercalated cells (ITCs) in the mouse amygdala of Tshz1 mutants correlates with fear, depression and social interaction phenotypes. The Journal of Neuroscience, 38, 1160-1177. doi:10.1523/JNEUROSCI.1412-17.2017.

    Abstract

    The intercalated cells (ITCs) of the amygdala have been shown to be critical regulatory components of amygdalar circuits, which control appropriate fear responses. Despite this, the molecular processes guiding ITC development remain poorly understood. Here we establish the zinc finger transcription factor Tshz1 as a marker of ITCs during their migration from the dorsal lateral ganglionic eminence through maturity. Using germline and conditional knock-out (cKO) mouse models, we show that Tshz1 is required for the proper migration and differentiation of ITCs. In the absence of Tshz1, migrating ITC precursors fail to settle in their stereotypical locations encapsulating the lateral amygdala and BLA. Furthermore, they display reductions in the ITC marker Foxp2 and ectopic persistence of the dorsal lateral ganglionic eminence marker Sp8. Tshz1 mutant ITCs show increased cell death at postnatal time points, leading to a dramatic reduction by 3 weeks of age. In line with this, Foxp2-null mutants also show a loss of ITCs at postnatal time points, suggesting that Foxp2 may function downstream of Tshz1 in the maintenance of ITCs. Behavioral analysis of male Tshz1 cKOs revealed defects in fear extinction as well as an increase in floating during the forced swim test, indicative of a depression-like phenotype. Moreover, Tshz1 cKOs display significantly impaired social interaction (i.e., increased passivity) regardless of partner genetics. Together, these results suggest that Tshz1 plays a critical role in the development of ITCs and that fear, depression-like and social behavioral deficits arise in their absence. SIGNIFICANCE STATEMENT We show here that the zinc finger transcription factor Tshz1 is expressed during development of the intercalated cells (ITCs) within the mouse amygdala. These neurons have previously been shown to play a crucial role in fear extinction. Tshz1 mouse mutants exhibit severely reduced numbers of ITCs as a result of abnormal migration, differentiation, and survival of these neurons. Furthermore, the loss of ITCs in mouse Tshz1 mutants correlates well with defects in fear extinction as well as the appearance of depression-like and abnormal social interaction behaviors reminiscent of depressive disorders observed in human patients with distal 18q deletions, including the Tshz1 locus.
  • Kurt, S., Groszer, M., Fisher, S. E., & Ehret, G. (2009). Modified sound-evoked brainstem potentials in Foxp2 mutant mice. Brain Research, 1289, 30-36. doi:10.1016/j.brainres.2009.06.092.

    Abstract

    Heterozygous mutations of the human FOXP2 gene cause a developmental disorder involving impaired learning and production of fluent spoken language. Previous investigations of its aetiology have focused on disturbed function of neural circuits involved in motor control. However, Foxp2 expression has been found in the cochlea and auditory brain centers and deficits in auditory processing could contribute to difficulties in speech learning and production. Here, we recorded auditory brainstem responses (ABR) to assess two heterozygous mouse models carrying distinct Foxp2 point mutations matching those found in humans with FOXP2-related speech/language impairment. Mice which carry a Foxp2-S321X nonsense mutation, yielding reduced dosage of Foxp2 protein, did not show systematic ABR differences from wildtype littermates. Given that speech/language disorders are observed in heterozygous humans with similar nonsense mutations (FOXP2-R328X), our findings suggest that auditory processing deficits up to the midbrain level are not causative for FOXP2-related language impairments. Interestingly, however, mice harboring a Foxp2-R552H missense mutation displayed systematic alterations in ABR waves with longer latencies (significant for waves I, III, IV) and smaller amplitudes (significant for waves I, IV) suggesting that either the synchrony of synaptic transmission in the cochlea and in auditory brainstem centers is affected, or fewer auditory nerve fibers and fewer neurons in auditory brainstem centers are activated compared to wildtypes. Therefore, the R552H mutation uncovers possible roles for Foxp2 in the development and/or function of the auditory system. Since ABR audiometry is easily accessible in humans, our data call for systematic testing of auditory functions in humans with FOXP2 mutations.
  • Kuzla, C. (2003). Prosodically-conditioned variation in the realization of domain-final stops and voicing assimilation of domain-initial fricatives in German. In Proceedings of the 15th International Congress of Phonetic Sciences (ICPhS 2003) (pp. 2829-2832). Adelaide: Causal Productions.
  • Lai, V. T., Curran, T., & Menn, L. (2009). Comprehending conventional and novel metaphors: An ERP study. Brain Research, 1284, 145-155. doi:10.1016/j.brainres.2009.05.088.
  • Lai, C. S. L., Gerrelli, D., Monaco, A. P., Fisher, S. E., & Copp, A. J. (2003). FOXP2 expression during brain development coincides with adult sites of pathology in a severe speech and language disorder. Brain, 126(11), 2455-2462. doi:10.1093/brain/awg247.

    Abstract

    Disruption of FOXP2, a gene encoding a forkhead-domain transcription factor, causes a severe developmental disorder of verbal communication, involving profound articulation deficits, accompanied by linguistic and grammatical impairments. Investigation of the neural basis of this disorder has been limited previously to neuroimaging of affected children and adults. The discovery of the gene responsible, FOXP2, offers a unique opportunity to explore the relevant neural mechanisms from a molecular perspective. In the present study, we have determined the detailed spatial and temporal expression pattern of FOXP2 mRNA in the developing brain of mouse and human. We find expression in several structures including the cortical plate, basal ganglia, thalamus, inferior olives and cerebellum. These data support a role for FOXP2 in the development of corticostriatal and olivocerebellar circuits involved in motor control. We find intriguing concordance between regions of early expression and later sites of pathology suggested by neuroimaging. Moreover, the homologous pattern of FOXP2/Foxp2 expression in human and mouse argues for a role for this gene in development of motor-related circuits throughout mammalian species. Overall, this study provides support for the hypothesis that impairments in sequencing of movement and procedural learning might be central to the FOXP2-related speech and language disorder.
  • Lai, C. S. L., Fisher, S. E., Hurst, J. A., Levy, E. R., Hodgson, S., Fox, M., Jeremiah, S., Povey, S., Jamison, D. C., Green, E. D., Vargha-Khadem, F., & Monaco, A. P. (2000). The SPCH1 region on human 7q31: Genomic characterization of the critical interval and localization of translocations associated with speech and language disorder. American Journal of Human Genetics, 67(2), 357-368. doi:10.1086/303011.

    Abstract

    The KE family is a large three-generation pedigree in which half the members are affected with a severe speech and language disorder that is transmitted as an autosomal dominant monogenic trait. In previously published work, we localized the gene responsible (SPCH1) to a 5.6-cM region of 7q31 between D7S2459 and D7S643. In the present study, we have employed bioinformatic analyses to assemble a detailed BAC-/PAC-based sequence map of this interval, containing 152 sequence tagged sites (STSs), 20 known genes, and >7.75 Mb of completed genomic sequence. We screened the affected chromosome 7 from the KE family with 120 of these STSs (average spacing <100 kb), but we did not detect any evidence of a microdeletion. Novel polymorphic markers were generated from the sequence and were used to further localize critical recombination breakpoints in the KE family. This allowed refinement of the SPCH1 interval to a region between new markers 013A and 330B, containing ∼6.1 Mb of completed sequence. In addition, we have studied two unrelated patients with a similar speech and language disorder, who have de novo translocations involving 7q31. Fluorescence in situ hybridization analyses with BACs/PACs from the sequence map localized the t(5;7)(q22;q31.2) breakpoint in the first patient (CS) to a single clone within the newly refined SPCH1 interval. This clone contains the CAGH44 gene, which encodes a brain-expressed protein containing a large polyglutamine stretch. However, we found that the t(2;7)(p23;q31.3) breakpoint in the second patient (BRD) resides within a BAC clone mapping >3.7 Mb distal to this, outside the current SPCH1 critical interval. Finally, we investigated the CAGH44 gene in affected individuals of the KE family, but we found no mutations in the currently known coding sequence. These studies represent further steps toward the isolation of the first gene to be implicated in the development of speech and language.
  • Lakens, D., Adolfi, F. G., Albers, C. J., Anvari, F., Apps, M. A. J., Argamon, S. E., Baguley, T., Becker, R. B., Benning, S. D., Bradford, D. E., Buchanan, E. M., Caldwell, A. R., Van Calster, B., Carlsson, R., Chen, S.-C., Chung, B., Colling, L. J., Collins, G. S., Crook, Z., Cross, E. S., Daniels, S., Danielsson, H., DeBruine, L., Dunleavy, D. J., Earp, B. D., Feist, M. I., Ferrelle, J. D., Field, J. G., Fox, N. W., Friesen, A., Gomes, C., Gonzalez-Marquez, M., Grange, J. A., Grieve, A. P., Guggenberger, R., Grist, J., Van Harmelen, A.-L., Hasselman, F., Hochard, K. D., Hoffarth, M. R., Holmes, N. P., Ingre, M., Isager, P. M., Isotalus, H. K., Johansson, C., Juszczyk, K., Kenny, D. A., Khalil, A. A., Konat, B., Lao, J., Larsen, E. G., Lodder, G. M. A., Lukavský, J., Madan, C. R., Manheim, D., Martin, S. R., Martin, A. E., Mayo, D. G., McCarthy, R. J., McConway, K., McFarland, C., Nio, A. Q. X., Nilsonne, G., De Oliveira, C. L., De Xivry, J.-J.-O., Parsons, S., Pfuhl, G., Quinn, K. A., Sakon, J. J., Saribay, S. A., Schneider, I. K., Selvaraju, M., Sjoerds, Z., Smith, S. G., Smits, T., Spies, J. R., Sreekumar, V., Steltenpohl, C. N., Stenhouse, N., Świątkowski, W., Vadillo, M. A., Van Assen, M. A. L. M., Williams, M. N., Williams, S. E., Williams, D. R., Yarkoni, T., Ziano, I., & Zwaan, R. A. (2018). Justify your alpha. Nature Human Behaviour, 2, 168-171. doi:10.1038/s41562-018-0311-x.

    Abstract

    In response to recommendations to redefine statistical significance to P ≤ 0.005, we propose that researchers should transparently report and justify all choices they make when designing a study, including the alpha level.
  • Lam, N. H. L., Hulten, A., Hagoort, P., & Schoffelen, J.-M. (2018). Robust neuronal oscillatory entrainment to speech displays individual variation in lateralisation. Language, Cognition and Neuroscience, 33(8), 943-954. doi:10.1080/23273798.2018.1437456.

    Abstract

    Neural oscillations may be instrumental for the tracking and segmentation of continuous speech. Earlier work has suggested that delta, theta and gamma oscillations entrain to the speech rhythm. We used magnetoencephalography and a large sample of 102 participants to investigate oscillatory entrainment to speech, and observed robust entrainment of delta and theta activity, and weak group-level gamma entrainment. We show that the peak frequency and the hemispheric lateralisation of the entrainment are subject to considerable individual variability. The first finding may support the involvement of intrinsic oscillations in entrainment, and the second finding suggests that there is no systematic default right-hemispheric bias for processing acoustic signals on a slow time scale. Although low frequency entrainment to speech is a robust phenomenon, the characteristics of entrainment vary across individuals, and this variation is important for understanding the underlying neural mechanisms of entrainment, as well as its functional significance.
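
    As a rough illustration of how entrainment of this kind can be quantified, the Python sketch below (my own example, not the study's MEG pipeline) computes spectral coherence between a simulated low-frequency speech-envelope-like signal and a simulated neural signal that partly tracks it; the sampling rate, frequencies, and signals are all assumed for the toy example.

        import numpy as np
        from scipy.signal import coherence

        fs = 200                       # sampling rate in Hz (assumed for the toy signals)
        t = np.arange(0, 60, 1 / fs)   # 60 s of simulated data
        rng = np.random.default_rng(1)

        # A ~4 Hz rhythm standing in for the speech envelope, plus noise.
        envelope = np.sin(2 * np.pi * 4 * t) + 0.5 * rng.standard_normal(t.size)
        # A "neural" signal that partially follows the envelope.
        neural = 0.6 * envelope + rng.standard_normal(t.size)

        # Magnitude-squared coherence estimated in 4-second windows.
        f, Cxy = coherence(envelope, neural, fs=fs, nperseg=4 * fs)
        theta = (f >= 3) & (f <= 7)
        print(f"mean coherence in the 3-7 Hz (theta) band: {Cxy[theta].mean():.2f}")

    Real MEG analyses of this kind operate on sensor- or source-level data and typically report coherence or phase-locking per frequency band and hemisphere, which is what allows the lateralisation of entrainment to be compared across individuals.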
  • De Lange, F. P., Hagoort, P., & Toni, I. (2003). Differential fronto-parietal contributions to visual and motor imagery. NeuroImage, 19(2), e2094-e2095.

    Abstract

    Mental imagery is a cognitive process crucial to human reasoning. Numerous studies have characterized specific instances of this cognitive ability, as evoked by visual imagery (VI) or motor imagery (MI) tasks. However, it remains unclear which neural resources are shared between VI and MI, and which are exclusively related to MI. To address this issue, we have used fMRI to measure human brain activity during performance of VI and MI tasks. Crucially, we have modulated the imagery process by manipulating the degree of mental rotation necessary to solve the tasks. We focused our analysis on changes in neural signal as a function of the degree of mental rotation in each task.
  • De Lange, F. P., Koers, A., Kalkman, J. S., Bleijenberg, G., Hagoort, P., Van der Meer, J. W. M., & Toni, I. (2009). Reply to: "Can CBT substantially change grey matter volume in chronic fatigue syndrome" [Letter to the editor]. Brain, 132(6), e111. doi:10.1093/brain/awn208.
  • De Lange, F., Bleijenberg, G., Van der Meer, J. W. M., Hagoort, P., & Toni, I. (2009). Reply: Change in grey matter volume cannot be assumed to be due to cognitive behavioural therapy [Letter to the editor]. Brain, 132(7), e120. doi:10.1093/brain/awn359.
  • De Lange, F. P., Knoop, H., Bleijenberg, G., Van der Meer, J. W. M., Hagoort, P., & Toni, I. (2009). The experience of fatigue in the brain [Letter to the editor]. Psychological Medicine, 39, 523-524. doi:10.1017/S0033291708004844.
  • Lansner, A., Sandberg, A., Petersson, K. M., & Ingvar, M. (2000). On forgetful attractor network memories. In H. Malmgren, M. Borga, & L. Niklasson (Eds.), Artificial neural networks in medicine and biology: Proceedings of the ANNIMAB-1 Conference, Göteborg, Sweden, 13-16 May 2000 (pp. 54-62). Heidelberg: Springer Verlag.

    Abstract

    A recurrently connected attractor neural network with a Hebbian learning rule is currently our best ANN analogy for a piece of cortex. Functionally, biological memory operates on a spectrum of time scales with regard to induction and retention, and it is modulated in complex ways by sub-cortical neuromodulatory systems. Moreover, biological memory networks are commonly believed to be highly distributed and to engage many co-operating cortical areas. Here we focus on the temporal aspects of induction and retention of memory in a connectionist-type attractor memory model of a piece of cortex. A continuous-time, forgetful Bayesian-Hebbian learning rule is described and compared to the characteristics of LTP and LTD seen experimentally. More generally, an attractor network implementing this learning rule can operate as a long-term, intermediate-term, or short-term memory. Modulation of the print-now signal of the learning rule replicates some experimental memory phenomena, such as the von Restorff effect.
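
    To make the idea of a forgetful Hebbian attractor memory concrete, here is a minimal Python sketch of a Hopfield-style network whose weights decay before each new pattern is stored, so older memories fade. It illustrates the general principle only and is not the continuous-time Bayesian-Hebbian rule described in the paper; the network size, decay factor, and patterns are assumptions of the example.

        import numpy as np

        rng = np.random.default_rng(0)
        N = 100        # number of binary (+1/-1) units
        decay = 0.9    # forgetting factor applied before each new pattern is stored
        W = np.zeros((N, N))

        def store(pattern, W, decay):
            """Forgetful Hebbian update: decay old weights, add the new outer product."""
            W = decay * W + np.outer(pattern, pattern) / N
            np.fill_diagonal(W, 0.0)   # no self-connections
            return W

        def recall(cue, W, steps=20):
            """Iterate sign(W @ state); settles into a stored attractor if the cue is close."""
            state = cue.astype(float)
            for _ in range(steps):
                state = np.sign(W @ state)
                state[state == 0] = 1.0
            return state

        # Store ten random patterns in sequence; recent ones are retained best.
        patterns = [rng.choice([-1, 1], size=N) for _ in range(10)]
        for p in patterns:
            W = store(p, W, decay)

        # Cue with a noisy version of the most recent pattern and check recovery.
        noisy = patterns[-1].copy()
        noisy[rng.choice(N, size=10, replace=False)] *= -1
        overlap = recall(noisy, W) @ patterns[-1] / N
        print(f"overlap with the most recent pattern: {overlap:.2f}")

    Because every stored pattern is multiplied by the decay factor whenever a new one arrives, the same network can behave as a shorter- or longer-term store depending on that parameter, loosely mirroring the abstract's point that a single learning rule can support memory on several time scales.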
  • Lattenkamp, E. Z., Kaiser, S., Kaucic, R., Großmann, M., Koselj, K., & Goerlitz, H. R. (2018). Environmental acoustic cues guide the biosonar attention of a highly specialised echolocator. Journal of Experimental Biology, 221(8): jeb165696. doi:10.1242/jeb.165696.

    Abstract

    Sensory systems experience a trade-off between maximizing the detail and amount of sampled information. This trade-off is particularly pronounced in sensory systems that are highly specialised for a single task and thus experience limitations in other tasks. We hypothesised that combining sensory input from multiple streams of information may resolve this trade-off and improve detection and sensing reliability. Specifically, we predicted that perceptive limitations experienced by animals reliant on specialised active echolocation can be compensated for by the phylogenetically older and less specialised process of passive hearing. We tested this hypothesis in greater horseshoe bats, which possess morphological and neural specialisations allowing them to identify fluttering prey in dense vegetation using echolocation only. At the same time, their echolocation system is both spatially and temporally severely limited. Here, we show that greater horseshoe bats employ passive hearing to initially detect and localise prey-generated and other environmental sounds, and then raise vocalisation level and concentrate the scanning movements of their sonar beam on the sound source for further investigation with echolocation. These specialised echolocators thus supplement echo-acoustic information with environmental acoustic cues, enlarging perceived space beyond their biosonar range. Contrary to our predictions, we did not find consistent preferences for prey-related acoustic stimuli, indicating the use of passive acoustic cues also for detection of non-prey objects. Our findings suggest that even specialised echolocators exploit a wide range of environmental information, and that phylogenetically older sensory systems can support the evolution of sensory specialisations by compensating for their limitations.
  • Lattenkamp, E. Z., Vernes, S. C., & Wiegrebe, L. (2018). Mammalian models for the study of vocal learning: A new paradigm in bats. In C. Cuskley, M. Flaherty, H. Little, L. McCrohon, A. Ravignani, & T. Verhoef (Eds.), Proceedings of the 12th International Conference on the Evolution of Language (EVOLANG XII) (pp. 235-237). Toruń, Poland: NCU Press. doi:10.12775/3991-1.056.
  • Lattenkamp, E. Z., & Vernes, S. C. (2018). Vocal learning: A language-relevant trait in need of a broad cross-species approach. Current Opinion in Behavioral Sciences, 21, 209-215. doi:10.1016/j.cobeha.2018.04.007.

    Abstract

    Although humans are unmatched in their capacity to produce speech and learn language, comparative approaches in diverse animal models are able to shed light on the biological underpinnings of language-relevant traits. In the study of vocal learning, a trait crucial for spoken language, passerine birds have been the dominant models, driving invaluable progress in understanding the neurobiology and genetics of vocal learning despite being only distantly related to humans. To date, there is sparse evidence that our closest relatives, nonhuman primates, have the capability to learn new vocalisations. However, a number of other mammals have shown the capacity for vocal learning, such as some cetaceans, pinnipeds, elephants, and bats, and we anticipate that with further study more species will gain membership to this (currently) select club. A broad, cross-species comparison of vocal learning, coupled with careful consideration of the components underlying this trait, is crucial to determine how human speech and spoken language is biologically encoded and how it evolved. We emphasise the need to draw on the pool of promising species that have thus far been understudied or neglected. This is by no means a call for fewer studies in songbirds, or an unfocused treasure-hunt, but rather an appeal for structured comparisons across a range of species, considering phylogenetic relationships, ecological and morphological constraints, developmental and social factors, and neurogenetic underpinnings. Herein, we promote a comparative approach highlighting the importance of studying vocal learning in a broad range of model species, and describe a common framework for targeted cross-taxon studies to shed light on the biology and evolution of vocal learning.
  • Lattenkamp, E. Z., Vernes, S. C., & Wiegrebe, L. (2018). Volitional control of social vocalisations and vocal usage learning in bats. Journal of Experimental Biology, 221(14): jeb.180729. doi:10.1242/jeb.180729.

    Abstract

    Bats are gregarious, highly vocal animals that possess a broad repertoire of social vocalisations. For in-depth studies of their vocal behaviours, including vocal flexibility and vocal learning, it is necessary to gather repeatable evidence from controlled laboratory experiments on isolated individuals. However, such studies are rare for one simple reason: eliciting social calls in isolation and under operant control is challenging and has rarely been achieved. To overcome this limitation, we designed an automated setup that allows conditioning of social vocalisations in a new context, and tracks spectro-temporal changes in the recorded calls over time. Using this setup, we were able to reliably evoke social calls from temporarily isolated lesser spear-nosed bats (Phyllostomus discolor). When we adjusted the call criteria that could result in food reward, bats responded by adjusting temporal and spectral call parameters. This was achieved without the help of an auditory template or social context to direct the bats. Our results demonstrate vocal flexibility and vocal usage learning in bats. Our setup provides a new paradigm that allows the controlled study of the production and learning of social vocalisations in isolated bats, overcoming limitations that have, until now, prevented in-depth studies of these behaviours.

    Additional information

    JEB180729supp.pdf
  • Lausberg, H., Cruz, R. F., Kita, S., Zaidel, E., & Ptito, A. (2003). Pantomime to visual presentation of objects: Left hand dyspraxia in patients with complete callosotomy. Brain, 126(2), 343-360. doi:10.1093/brain/awg042.

    Abstract

    Investigations of left hand praxis in imitation and object use in patients with callosal disconnection have yielded divergent results, inducing a debate between two theoretical positions. Whereas Liepmann suggested that the left hemisphere is motor dominant, others maintain that both hemispheres have equal motor competences and propose that left hand apraxia in patients with callosal disconnection is secondary to left hemispheric specialization for language or other task modalities. The present study aims to gain further insight into the motor competence of the right hemisphere by investigating pantomime of object use in split-brain patients. Three patients with complete callosotomy and, as control groups, five patients with partial callosotomy and nine healthy subjects were examined for their ability to pantomime object use to visual object presentation and demonstrate object manipulation. In each condition, 11 objects were presented to the subjects who pantomimed or demonstrated the object use with either hand. In addition, six object pairs were presented to test bimanual coordination. Two independent raters evaluated the videotaped movement demonstrations. While object use demonstrations were perfect in all three groups, the split-brain patients displayed apraxic errors only with their left hands in the pantomime condition. The movement analysis of concept and execution errors included the examination of ipsilateral versus contralateral motor control. As the right hand/left hemisphere performances demonstrated retrieval of the correct movement concepts, concept errors by the left hand were taken as evidence for right hemisphere control. Several types of execution errors reflected a lack of distal motor control indicating the use of ipsilateral pathways. While one split-brain patient controlled his left hand predominantly by ipsilateral pathways in the pantomime condition, the error profile in the other two split-brain patients suggested that the right hemisphere controlled their left hands. In the object use condition, in all three split-brain patients fine-graded distal movements in the left hand indicated right hemispheric control. Our data show left hand apraxia in split-brain patients is not limited to verbal commands, but also occurs in pantomime to visual presentation of objects. As the demonstration with object in hand was unimpaired in either hand, both hemispheres must contain movement concepts for object use. However, the disconnected right hemisphere is impaired in retrieving the movement concept in response to visual object presentation, presumably because of a deficit in associating perceptual object representation with the movement concepts.
