Publications

Displaying 201 - 300 of 471
  • Klein, W. (1993). Ellipse. In J. Jacobs, A. von Stechow, W. Sternefeld, & T. Vennemann (Eds.), Syntax: Ein internationales Handbuch zeitgenössischer Forschung [1. Halbband] (pp. 763-799). Berlin: de Gruyter.
  • Klein, W. (1998). Assertion and finiteness. In N. Dittmar, & Z. Penner (Eds.), Issues in the theory of language acquisition: Essays in honor of Jürgen Weissenborn (pp. 225-245). Bern: Peter Lang.
  • Klein, W. (1967). Einführende Bibliographie zu "Mathematik und Dichtung". In H. Kreuzer, & R. Gunzenhäuser (Eds.), Mathematik und Dichtung (pp. 347-359). München: Nymphenburger.
  • Klein, W. (1994). Für eine rein zeitliche Deutung von Tempus und Aspekt. In R. Baum (Ed.), Lingua et Traditio: Festschrift für Hans Helmut Christmann zum 65. Geburtstag (pp. 409-422). Tübingen: Narr.
  • Klein, W. (1994). Keine Känguruhs zur Linken: Über die Variabilität von Raumvorstellungen und ihren Ausdruck in der Sprache. In H.-J. Kornadt, J. Grabowski, & R. Mangold-Allwinn (Eds.), Sprache und Kognition (pp. 163-182). Heidelberg, Berlin, Oxford: Spektrum.
  • Klein, W. (1993). L'Expression de la spatialité dans le langage humain. In M. Denis (Ed.), Images et langages (pp. 73-85). Paris: CNRS.
  • Klein, W. (1993). Learner varieties and theoretical linguistics. In C. Perdue (Ed.), Adult language acquisition: Cross-linguistic perspectives. Cambridge: Cambridge University Press.
  • Klein, W. (1994). Learning how to express temporality in a second language. In A. G. Ramat, & M. Vedovelli (Eds.), Società di linguistica Italiana, SLI 34: Italiano - lingua seconda/lingua straniera: Atti del XXVI Congresso (pp. 227-248). Roma: Bulzoni.
  • Klein, W. (2015). Lexicology and lexicography. In J. D. Wright (Ed.), International Encyclopedia of the Social & Behavioral Sciences (2nd ed.) Vol. 13 (pp. 938-942). Amsterdam: Elsevier. doi:10.1016/B978-0-08-097086-8.53059-1.
  • Klein, W. (1991). Seven trivia of language acquisition. In L. Eubank (Ed.), Point counterpoint: Universal grammar in the second language (pp. 49-70). Amsterdam: Benjamins.
  • Klein, W. (1991). SLA theory: Prolegomena to a theory of language acquisition and implications for Theoretical Linguistics. In T. Huebner, & C. Ferguson (Eds.), Crosscurrents in second language acquisition and linguistic theories (pp. 169-194). Amsterdam: Benjamins.
  • Klein, W. (1993). Some notorious pitfalls in the analysis of spatial expressions. In F. Beckman, & G. Heyer (Eds.), Theorie und Praxis des Lexikons (pp. 191-204). Berlin: de Gruyter.
  • Klein, W., & Vater, H. (1998). The perfect in English and German. In L. Kulikov, & H. Vater (Eds.), Typology of verbal categories: Papers presented to Vladimir Nedjalkov on the occasion of his 70th birthday (pp. 215-235). Tübingen: Niemeyer.
  • Klein, W., & Perdue, C. (1993). Utterance structure. In C. Perdue (Ed.), Adult language acquisition: Cross-linguistic perspectives: Vol. 2 The results (pp. 3-40). Cambridge: Cambridge University Press.
  • Koch, X., & Janse, E. (2015). Effects of age and hearing loss on articulatory precision for sibilants. In M. Wolters, J. Livingstone, B. Beattie, R. Smith, M. MacMahon, J. Stuart-Smith, & J. Scobbie (Eds.), Proceedings of the 18th International Congress of Phonetic Sciences (ICPhS 2015). London: International Phonetic Association.

    Abstract

    This study investigates the effects of adult age and speaker abilities on articulatory precision for sibilant productions. Normal-hearing young adults with better sibilant discrimination have been shown to produce greater spectral sibilant contrasts. As reduced auditory feedback may gradually impact feedforward commands, we investigate whether articulatory precision, as indexed by spectral mean for [s] and [ʃ], decreases with age, and more particularly with age-related hearing loss. Younger, middle-aged and older adults read aloud words starting with the sibilants [s] or [ʃ]. Possible effects of cognitive, perceptual, linguistic and sociolinguistic background variables on the sibilants' acoustics were also investigated. Sibilant contrasts were less pronounced for male than for female speakers. Most importantly, for the fricative [s], the spectral mean was modulated by individual high-frequency hearing loss, but not by age. These results underscore that even mild hearing loss already affects articulatory precision.
  • Kooijman, V. (2007). Continuous-speech segmentation at the beginning of language acquisition: Electrophysiological evidence. PhD Thesis, Radboud University Nijmegen, Nijmegen.

    Abstract

    Word segmentation, or detecting word boundaries in continuous speech, is not an easy task. Spoken language does not contain silences to indicate word boundaries and words partly overlap due to coarticulation. Still, adults listening to their native language perceive speech as individual words. They are able to combine different distributional cues in the language, such as the statistical distribution of sounds and metrical cues, with lexical information, to efficiently detect word boundaries. Infants in the first year of life do not command these cues. However, already between seven and ten months of age, before they know word meaning, infants learn to segment words from speech. This important step in language acquisition is the topic of this dissertation. In chapter 2, the first Event Related Brain Potential (ERP) study on word segmentation in Dutch ten-month-olds is discussed. The results show that ten-month-olds can already segment words with a strong-weak stress pattern from speech and that they need roughly the first half of a word to do so. Chapter 3 deals with segmentation of words beginning with a weak syllable, as a considerable number of words in Dutch do not follow the predominant strong-weak stress pattern. The results show that ten-month-olds still largely rely on the strong syllable in the language, and do not show an ERP response to the initial weak syllable. In chapter 4, seven-month-old infants' segmentation of strong-weak words was studied. An ERP response was found to strong-weak words presented in sentences. However, a behavioral response was not found in an additional Headturn Preference Procedure study. These results suggest that the ERP response is a precursor to the behavioral response that infants show at a later age.

    Additional information

    full text via Radboud Repository
  • Kruspe, N., Burenhult, N., & Wnuk, E. (2015). Northern Aslian. In P. Sidwell, & M. Jenny (Eds.), Handbook of Austroasiatic Languages (pp. 419-474). Leiden: Brill.
  • Kuijpers, C. T., Coolen, R., Houston, D., & Cutler, A. (1998). Using the head-turning technique to explore cross-linguistic performance differences. In C. Rovee-Collier, L. Lipsitt, & H. Hayne (Eds.), Advances in infancy research: Vol. 12 (pp. 205-220). Stamford: Ablex.
  • Kuzla, C., & Ernestus, M. (2007). Prosodic conditioning of phonetic detail of German plosives. In J. Trouvain, & W. J. Barry (Eds.), Proceedings of the 16th International Congress of Phonetic Sciences (ICPhS 2007) (pp. 461-464). Dudweiler: Pirrot.

    Abstract

    The present study investigates the influence of prosodic structure on the fine-grained phonetic details of German plosives which also cue the phonological fortis-lenis contrast. Closure durations were found to be longer at higher prosodic boundaries. There was also less glottal vibration in lenis plosives at higher prosodic boundaries. Voice onset time in lenis plosives was not affected by prosody. In contrast, for the fortis plosives VOT decreased at higher boundaries, as did the maximal intensity of the release. These results demonstrate that the effects of prosody on different phonetic cues can go in opposite directions, but are overall constrained by the need to maintain phonological contrasts. While prosodic effects on some cues are compatible with a ‘fortition’ account of prosodic strengthening or with a general feature enhancement explanation, the effects on others enhance paradigmatic contrasts only within a given prosodic position.
  • Lai, V. T., Chang, M., Duffield, C., Hwang, J., Xue, N., & Palmer, M. (2007). Defining a methodology for mapping Chinese and English sense inventories. In Proceedings of the 8th Chinese Lexical Semantics Workshop 2007 (CLSW 2007). The Hong Kong Polytechnic University, Hong Kong, May 21-23 (pp. 59-65).

    Abstract

    In this study, we explored methods for linking Chinese and English sense inventories using two opposing approaches: creating links (1) bottom-up: by starting at the finer-grained sense level and then proceeding to the verb subcategorization frames, and (2) top-down: by starting directly with the more coarse-grained frame levels. The sense inventories for linking include pre-existing corpora, such as English Propbank (Palmer, Gildea, and Kingsbury, 2005), Chinese Propbank (Xue and Palmer, 2004) and English WordNet (Fellbaum, 1998), and newly created corpora, the English and Chinese Sense Inventories from DARPA-GALE OntoNotes. In the linking task, we selected a group of highly frequent and polysemous communication verbs, including say, ask, talk, and speak in English, and shuo, biao-shi, jiang, and wen in Chinese. We found that with the bottom-up method, although speakers of both languages agreed on the links between senses, the subcategorization frames of the corresponding senses did not match consistently. With the top-down method, if the verb frames match in both languages, their senses line up with each other more readily. The results indicate that the top-down method is more promising for linking English and Chinese sense inventories.
  • Lai, V. T., & Narasimhan, B. (2015). Verb representation and thinking-for-speaking effects in Spanish-English bilinguals. In R. G. De Almeida, & C. Manouilidou (Eds.), Cognitive science perspectives on verb representation and processing (pp. 235-256). Cham: Springer.

    Abstract

    Speakers of English habitually encode motion events using manner-of-motion verbs (e.g., spin, roll, slide) whereas Spanish speakers rely on path-of-motion verbs (e.g., enter, exit, approach). Here, we ask whether the language-specific verb representations used in encoding motion events induce different modes of “thinking-for-speaking” in Spanish–English bilinguals. That is, assuming that the verb encodes the most salient information in the clause, do bilinguals find the path of motion to be more salient than manner of motion if they had previously described the motion event using Spanish versus English? In our study, Spanish–English bilinguals described a set of target motion events in either English or Spanish and then participated in a nonlinguistic similarity judgment task in which they viewed the target motion events individually (e.g., a ball rolling into a cave) followed by two variants: a “same-path” variant (e.g., a ball sliding into a cave) or a “same-manner” variant (e.g., a ball rolling away from a cave). Participants had to select the one of the two variants that they judged to be more similar to the target event: the event that shared the same path of motion as the target versus the one that shared the same manner of motion. Our findings show that bilingual speakers were more likely to classify two motion events as being similar if they shared the same path of motion and if they had previously described the target motion events in Spanish rather than in English. Our study provides further evidence for the “thinking-for-speaking” hypothesis by demonstrating that bilingual speakers can flexibly shift between language-specific construals of the same event “on-the-fly.”
  • Lehecka, T. (2015). Collocation and colligation. In J.-O. Östman, & J. Verschueren (Eds.), Handbook of Pragmatics Online. Amsterdam: Benjamins. doi:10.1075/hop.19.col2.
  • Lev-Ari, S. (2015). Adjusting the manner of language processing to the social context: Attention allocation during interactions with non-native speakers. In R. K. Mishra, N. Srinivasan, & F. Huettig (Eds.), Attention and Vision in Language Processing (pp. 185-195). New York: Springer. doi:10.1007/978-81-322-2443-3_11.
  • Lev-Ari, S. (2019). The influence of social network properties on language processing and use. In M. S. Vitevitch (Ed.), Network Science in Cognitive Psychology (pp. 10-29). New York, NY: Routledge.

    Abstract

    Language is a social phenomenon. The author learns, processes, and uses it in social contexts. In other words, the social environment shapes the linguistic knowledge and use of the knowledge. To a degree, this is trivial. A child exposed to Japanese will become fluent in Japanese, whereas a child exposed to only Spanish will not understand Japanese but will master the sounds, vocabulary, and grammar of Spanish. Language is a structured system. Sounds and words do not occur randomly but are characterized by regularities. Learners are sensitive to these regularities and exploit them when learning language. People differ in the sizes of their social networks. Some people tend to interact with only a few people, whereas others might interact with a wide range of people. This is reflected in people’s holiday greeting habits: some people might send cards to only a few people, whereas others would send greeting cards to more than 350 people.
  • Levelt, W. J. M. (1994). Psycholinguistics. In A. M. Colman (Ed.), Companion Encyclopedia of Psychology: Vol. 1 (pp. 319-337). London: Routledge.

    Abstract

    Linguistic skills are primarily tuned to the proper conduct of conversation. The innate ability to converse has provided the species with a capacity to share moods, attitudes, and information of almost any kind, to assemble knowledge and skills, to plan coordinated action, to educate its offspring, in short, to create and transmit culture. In conversation the interlocutors are involved in negotiating meaning. Speaking is a most complex cognitive-motor skill. It involves the conception of an intention, the selection of information whose expression will make that intention recognizable, the selection of appropriate words, the construction of a syntactic framework, the retrieval of the words’ sound forms, and the computation of an articulatory plan for each word and for the utterance as a whole. The question of where communicative intentions come from is a psychodynamic question rather than a psycholinguistic one. Speaking is a form of social action, and it is in the context of action that intentions, goals, and subgoals develop.
  • Levelt, W. J. M. (1993). Die konnektionistische Mode. In J. Engelkamp, & T. Pechmann (Eds.), Mentale Repräsentation (pp. 51-62). Bern: Huber Verlag.
  • Levelt, W. J. M. (1993). Accessing words in speech production: Stages, processes and representations. In W. J. M. Levelt (Ed.), Lexical access in speech production (pp. 1-22). Cambridge, MA: Blackwell Publishers.

    Abstract

    Originally published in Cognition: International Journal of Cognitive Science, Volume 42, Numbers 1-3, 1992. This paper introduces a special issue of Cognition on lexical access in speech production. Over the last quarter century, the psycholinguistic study of speaking, and in particular of accessing words in speech, received a major new impetus from the analysis of speech errors, dysfluencies and hesitations, from aphasiology, and from new paradigms in reaction time research. The emerging theoretical picture partitions the accessing process into two subprocesses: the selection of an appropriate lexical item (a "lemma") from the mental lexicon, and the phonological encoding of that item, that is, the computation of a phonetic program for the item in the context of the utterance. These two theoretical domains are successively introduced by outlining some core issues that have been or still have to be addressed. The final section discusses the controversial question of whether phonological encoding can affect lexical selection. This partitioning is also followed in this special issue as a whole: there are, first, four papers on lexical selection, then three papers on phonological encoding, and finally one on the interaction between selection and phonological encoding.
  • Levelt, W. J. M. (1962). Motion breaking and the perception of causality. In A. Michotte (Ed.), Causalité, permanence et réalité phénoménales: Etudes de psychologie expérimentale (pp. 244-258). Louvain: Publications Universitaires.
  • Levelt, W. J. M., & Plomp, R. (1962). Musical consonance and critical bandwidth. In Proceedings of the 4th International Congress on Acoustics (pp. 55-55).
  • Levelt, W. J. M. (2007). Levensbericht Detlev W. Ploog. In Levensberichten en herdenkingen 2007 (pp. 60-63). Amsterdam: Koninklijke Nederlandse Akademie van Wetenschappen.
  • Levelt, W. J. M. (2015). Levensbericht George Armitage Miller 1920 - 2012. In KNAW levensberichten en herdenkingen 2014 (pp. 38-42). Amsterdam: KNAW.
  • Levelt, W. J. M. (1993). Lexical access in speech production. In E. Reuland, & W. Abraham (Eds.), Knowledge and language: Vol. 1. From Orwell's problem to Plato's problem (pp. 241-251). Dordrecht: Kluwer.
  • Levelt, W. J. M. (1991). Lexical access in speech production: Stages versus cascading. In H. Peters, W. Hulstijn, & C. Starkweather (Eds.), Speech motor control and stuttering (pp. 3-10). Amsterdam: Excerpta Medica.
  • Levelt, W. J. M. (1993). Lexical selection, or how to bridge the major rift in language processing. In F. Beckmann, & G. Heyer (Eds.), Theorie und Praxis des Lexikons (pp. 164-172). Berlin: Walter de Gruyter.
  • Levelt, W. J. M. (1994). On the skill of speaking: How do we access words? In Proceedings ICSLP 94 (pp. 2253-2258). Yokohama: The Acoustical Society of Japan.
  • Levelt, W. J. M. (1994). Onder woorden brengen: Beschouwingen over het spreekproces. In Haarlemse voordrachten: voordrachten gehouden in de Hollandsche Maatschappij der Wetenschappen te Haarlem. Haarlem: Hollandsche maatschappij der wetenschappen.
  • Levelt, W. J. M. (1993). The architecture of normal spoken language use. In G. Blanken, J. Dittman, H. Grimm, J. C. Marshall, & C.-W. Wallesch (Eds.), Linguistic disorders and pathologies: An international handbook (pp. 1-15). Berlin: Walter de Gruyter.
  • Levelt, W. J. M. (2015). Sleeping Beauties. In I. Toivonen, P. Csúrii, & E. Van der Zee (Eds.), Structures in the Mind: Essays on Language, Music, and Cognition in Honor of Ray Jackendoff (pp. 235-255). Cambridge, MA: MIT Press.
  • Levelt, W. J. M. (1993). Spreken als vaardigheid. In C. Blankenstijn, & A. Scheper (Eds.), Taalvaardigheid (pp. 1-16). Dordrecht: ICG Publications.
  • Levelt, W. J. M. (1994). The skill of speaking. In P. Bertelson, P. Eelen, & G. d'Ydewalle (Eds.), International perspectives on psychological science: Vol. 1. Leading themes (pp. 89-103). Hove: Erlbaum.
  • Levelt, W. J. M. (1994). What can a theory of normal speaking contribute to AAC? In ISAAC '94 Conference Book and Proceedings. Hoensbroek: IRV.
  • Levinson, S. C. (2007). Optimizing person reference - perspectives from usage on Rossel Island. In N. Enfield, & T. Stivers (Eds.), Person reference in interaction: Linguistic, cultural, and social perspectives (pp. 29-72). Cambridge: Cambridge University Press.

    Abstract

    This chapter explicates the requirement in person reference for balancing demands for recognition, minimalization, explicitness and indirection. This is illustrated with reference to data from repair of failures of person reference within a particular linguistic/cultural context, namely casual interaction among Rossel Islanders. Rossel Island (PNG) offers a ‘natural experiment’ for studying aspects of person reference, because of a number of special properties: 1. It is a closed universe of 4000 souls, sharing one kinship network, so in principle anyone could be recognizable from a reference. As a result no (complex) descriptions (cf. ‘the author of Waverley’) are employed. 2. Names, however, are never uniquely referring, since they are drawn from a fixed pool. They are only used for about 25% of initial references, another 25% of initial references being done by kinship triangulation (‘that man’s father-in-law’). Nearly 50% of initial references are semantically underspecified or vague (e.g. ‘that girl’). 3. There are systematic motivations for oblique reference, e.g. kinship-based taboos and other constraints, which partly account for the underspecified references. The ‘natural experiment’ thus reveals some general lessons about how person reference requires optimizing multiple conflicting constraints. Comparison with Sacks and Schegloff’s (1979) treatment of English person reference suggests a way to tease apart the universal and the culturally particular.
  • Levinson, S. C. (1994). Deixis. In R. E. Asher (Ed.), Encyclopedia of language and linguistics (pp. 853-857). Oxford: Pergamon Press.
  • Levinson, S. C. (1998). Deixis. In J. L. Mey (Ed.), Concise encyclopedia of pragmatics (pp. 200-204). Amsterdam: Elsevier.
  • Levinson, S. C. (1991). Deixis. In W. Bright (Ed.), Oxford international encyclopedia of linguistics (pp. 343-344). Oxford University Press.
  • Levinson, S. C., Senft, G., & Majid, A. (2007). Emotion categories in language and thought. In A. Majid (Ed.), Field Manual Volume 10 (pp. 46-52). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.492892.
  • Levinson, S. C. (1998). Minimization and conversational inference. In A. Kasher (Ed.), Pragmatics: Vol. 4 Presupposition, implicature and indirect speech acts (pp. 545-612). London: Routledge.
  • Levinson, S. C., & Toni, I. (2019). Key issues and future directions: Interactional foundations of language. In P. Hagoort (Ed.), Human language: From genes and brain to behavior (pp. 257-261). Cambridge, MA: MIT Press.
  • Levinson, S. C., Majid, A., & Enfield, N. J. (2007). Language of perception: The view from language and culture. In A. Majid (Ed.), Field Manual Volume 10 (pp. 10-21). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.468738.
  • Levinson, S. C. (2019). Interactional foundations of language: The interaction engine hypothesis. In P. Hagoort (Ed.), Human language: From genes and brain to behavior (pp. 189-200). Cambridge, MA: MIT Press.
  • Levinson, S. C. (2019). Natural forms of purposeful interaction among humans: What makes interaction effective? In K. A. Gluck, & J. E. Laird (Eds.), Interactive task learning: Humans, robots, and agents acquiring new tasks through natural interactions (pp. 111-126). Cambridge, MA: MIT Press.
  • Levinson, S. C. (1993). Raumkonzeptionen mit absoluten Systemen. In Max Planck Gesellschaft Jahrbuch 1993 (pp. 297-299).
  • Levinson, S. C., & Majid, A. (2007). The language of sound. In A. Majid (Ed.), Field Manual Volume 10 (pp. 29-31). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.468735.
  • Levinson, S. C., & Majid, A. (2007). The language of vision II: Shape. In A. Majid (Ed.), Field Manual Volume 10 (pp. 26-28). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.468732.
  • Levinson, S. C., & Senft, G. (1994). Wie lösen Sprecher von Sprachen mit absoluten und relativen Systemen des räumlichen Verweisens nicht-sprachliche räumliche Aufgaben? In Jahrbuch der Max-Planck-Gesellschaft 1994 (pp. 295-299). München: Generalverwaltung der Max-Planck-Gesellschaft München.
  • Lindström, E., Terrill, A., Reesink, G., & Dunn, M. (2007). The languages of Island Melanesia. In J. S. Friedlaender (Ed.), Genes, language, and culture history in the Southwest Pacific (pp. 118-140). Oxford: Oxford University Press.

    Abstract

    This chapter provides an overview of the Papuan and the Oceanic languages (a branch of Austronesian) in Northern Island Melanesia, as well as phenomena arising through contact between these groups. It shows how linguistics can contribute to the understanding of the history of languages and speakers, and what the findings of those methods have been. The location of the homeland of speakers of Proto-Oceanic is indicated (in northeast New Britain); many facets of the lives of those speakers are shown; and the patterns of their subsequent spread across Island Melanesia and beyond into Remote Oceania are indicated, followed by a second wave overlaying the first into New Guinea and as far as halfway through the Solomon Islands. Regarding the Papuan languages of this region, at least some are older than the 6,000-10,000 ceiling of the Comparative Method, and their relations are explored with the aid of a database of 125 non-lexical structural features. The results reflect archipelago-based clustering, with the Central Solomons Papuan languages forming a clade either with the Bismarcks or with the Bougainville languages. Papuan languages in Bougainville are less influenced by Oceanic languages than those in the Bismarcks and the Solomons. The chapter considers a variety of scenarios to account for these findings, concluding that the results are compatible with multiple pre-Oceanic waves of arrivals into the area after initial settlement.
  • Liszkowski, U. (2007). Human twelve-month-olds point cooperatively to share interest with and helpfully provide information for a communicative partner. In K. Liebal, C. Müller, & S. Pika (Eds.), Gestural communication in nonhuman and human primates (pp. 124-140). Amsterdam: Benjamins.

    Abstract

    This paper investigates infant pointing at 12 months. Three recent experimental studies from our lab are reported and contrasted with existing accounts of infant communicative and social-cognitive abilities. The new results show that infant pointing at 12 months already is a communicative act which involves the intentional transmission of information to share interest with, or provide information for, other persons. It is argued that infant pointing is an inherently social and cooperative act which is used to share psychological relations between interlocutors and environment, to repair misunderstandings in proto-conversational turn-taking, and to help others by providing information. Infant pointing builds on an understanding of others as persons with attentional states and attitudes. The findings do not support lean accounts of early infant pointing which posit that it is initially non-communicative, does not serve the function of indicating, or is purely self-centered. It is suggested that the emergence of reference and the motivation to jointly engage with others should also be investigated before pointing has emerged.
  • Liszkowski, U., & Brown, P. (2007). Infant pointing (9-15 months) in different cultures. In A. Majid (Ed.), Field Manual Volume 10 (pp. 82-88). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.492895.

    Abstract

    There are two tasks for conducting systematic observation of child-caregiver joint attention interactions. Task 1 – a “decorated room” designed to elicit infant and caregiver pointing. Task 2 – videotaped interviews about infant pointing behaviour. The goal of this task is to document the ontogenetic emergence of referential communication in caregiver infant interaction in different cultures, during the critical age of 8-15 months when children come to understand and share others’ intentions. This is of interest to all students of interaction and human communication; it does not require specialist knowledge of children.
  • Little, H., Eryılmaz, K., & de Boer, B. (2015). A new artificial sign-space proxy for investigating the emergence of structure and categories in speech. In The Scottish Consortium for ICPhS 2015 (Ed.), The proceedings of the 18th International Congress of Phonetic Sciences (ICPhS 2015).
  • Little, H., Eryılmaz, K., & de Boer, B. (2015). Linguistic modality affects the creation of structure and iconicity in signals. In D. C. Noelle, R. Dale, A. S. Warlaumont, J. Yoshimi, T. Matlock, C. Jennings, & P. Maglio (Eds.), The 37th annual meeting of the Cognitive Science Society (CogSci 2015) (pp. 1392-1398). Austin, TX: Cognitive Science Society.

    Abstract

    Different linguistic modalities (speech or sign) offer different levels at which signals can iconically represent the world. One hypothesis argues that this iconicity has an effect on how linguistic structure emerges. However, exactly how and why these effects might come about is in need of empirical investigation. In this contribution, we present a signal creation experiment in which both the signalling space and the meaning space are manipulated so that different levels and types of iconicity are available between the signals and meanings. Signals are produced using an infrared sensor that detects the hand position of participants to generate auditory feedback. We find evidence that iconicity may be maladaptive for the discrimination of created signals. Further, we implemented Hidden Markov Models to characterise the structure within signals, which was also used to inform a metric for iconicity.
  • Liu, S., & Zhang, Y. (2019). Why some verbs are harder to learn than others – A micro-level analysis of everyday learning contexts for early verb learning. In A. K. Goel, C. M. Seifert, & C. Freksa (Eds.), Proceedings of the 41st Annual Meeting of the Cognitive Science Society (CogSci 2019) (pp. 2173-2178). Montreal, QC: Cognitive Science Society.

    Abstract

    Verb learning is important for young children. While most previous research has focused on linguistic and conceptual challenges in early verb learning (e.g. Gentner, 1982, 2006), the present paper examined early verb learning at the attentional level and quantified the input for early verb learning by measuring verb-action co-occurrence statistics in parent-child interaction from the learner’s perspective. To do so, we used head-mounted eye tracking to record fine-grained multimodal behaviors during parent-infant joint play, and analyzed parent speech, parent and infant action, and infant attention at the moments when parents produced verb labels. Our results show great variability across different action verbs, in terms of frequency of verb utterances, frequency of corresponding actions related to verb meanings, and infants’ attention to verbs and actions, which provides new insights on why some verbs are harder to learn than others.
  • Magyari, L. (2015). Timing turns in conversation: A temporal preparation account. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Mai, F., Galke, L., & Scherp, A. (2019). CBOW is not all you need: Combining CBOW with the compositional matrix space model. In Proceedings of the Seventh International Conference on Learning Representations (ICLR 2019). OpenReview.net.

    Abstract

    Continuous Bag of Words (CBOW) is a powerful text embedding method. Due to its strong capabilities to encode word content, CBOW embeddings perform well on a wide range of downstream tasks while being efficient to compute. However, CBOW is not capable of capturing word order. The reason is that the computation of CBOW's word embeddings is commutative, i.e., the embeddings of XYZ and ZYX are the same. In order to address this shortcoming, we propose a learning algorithm for the Compositional Matrix Space Model, which we call Continual Multiplication of Words (CMOW). Our algorithm is an adaptation of word2vec, so that it can be trained on large quantities of unlabeled text. We empirically show that CMOW better captures linguistic properties, but it is inferior to CBOW in memorizing word content. Motivated by these findings, we propose a hybrid model that combines the strengths of CBOW and CMOW. Our results show that the hybrid CBOW-CMOW model retains CBOW's strong ability to memorize word content while at the same time substantially improving its ability to encode other linguistic information by 8%. As a result, the hybrid also performs better on 8 out of 11 supervised downstream tasks with an average improvement of 1.2%.
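    The commutativity point in this abstract can be illustrated with a toy sketch (not the authors' implementation; the names `cbow`/`cmow`, the random embeddings, and the tiny dimensions are purely illustrative): composing a sentence by summing word vectors is order-invariant, whereas composing it by multiplying word matrices is order-sensitive.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy embeddings for a three-word "vocabulary".
    vecs = {w: rng.normal(size=4) for w in "xyz"}       # CBOW-style: one vector per word
    mats = {w: rng.normal(size=(2, 2)) for w in "xyz"}  # CMOW-style: one matrix per word

    def cbow(seq):
        # CBOW composes by addition of word vectors, which is commutative.
        return np.sum([vecs[w] for w in seq], axis=0)

    def cmow(seq):
        # CMOW composes by matrix multiplication, which is not commutative.
        out = np.eye(2)
        for w in seq:
            out = out @ mats[w]
        return out.flatten()

    print(np.allclose(cbow("xyz"), cbow("zyx")))  # True: word order is lost
    print(np.allclose(cmow("xyz"), cmow("zyx")))  # False: word order is preserved
    ```

    This is why the abstract's hybrid model pairs the two: the additive component memorizes word content, while the multiplicative component retains order information.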
  • Majid, A. (2015). Comparing lexicons cross-linguistically. In J. R. Taylor (Ed.), The Oxford Handbook of the Word (pp. 364-379). Oxford: Oxford University Press. doi:10.1093/oxfordhb/9780199641604.013.020.

    Abstract

    The lexicon is central to the concerns of disparate disciplines and has correspondingly elicited conflicting proposals about some of its foundational properties. Some suppose that word meanings and their associated concepts are largely universal, while others note that local cultural interests infiltrate every category in the lexicon. This chapter reviews research in two semantic domains—perception and the body—in order to illustrate crosslinguistic similarities and differences in semantic fields. Data is considered from a wide array of languages, especially those from small-scale indigenous communities which are often overlooked. In every lexical field we find considerable variation across cultures, raising the question of where this variation comes from. Is it the result of different ecological or environmental niches, cultural practices, or accidents of historical pasts? Current evidence suggests that diverse pressures differentially shape lexical fields.
  • Majid, A., & Levinson, S. C. (2007). Language of perception: Overview of field tasks. In A. Majid (Ed.), Field Manual Volume 10 (pp. 8-9). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.492898.
  • Majid, A. (2019). Preface. In L. J. Speed, C. O'Meara, L. San Roque, & A. Majid (Eds.), Perception Metaphors (pp. vii-viii). Amsterdam: Benjamins.
  • Majid, A. (2007). Preface and priorities. In A. Majid (Ed.), Field Manual Volume 10 (p. 3). Nijmegen: Max Planck Institute for Psycholinguistics.
  • Majid, A., Senft, G., & Levinson, S. C. (2007). The language of olfaction. In A. Majid (Ed.), Field Manual Volume 10 (pp. 36-41). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.492910.
  • Majid, A., Senft, G., & Levinson, S. C. (2007). The language of touch. In A. Majid (Ed.), Field Manual Volume 10 (pp. 32-35). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.492907.
  • Majid, A., & Levinson, S. C. (2007). The language of vision I: colour. In A. Majid (Ed.), Field Manual Volume 10 (pp. 22-25). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.492901.
  • Malaisé, V., Gazendam, L., & Brugman, H. (2007). Disambiguating automatic semantic annotation based on a thesaurus structure. In Proceedings of TALN 2007.
  • Malt, B. C., Gennari, S., Imai, M., Ameel, E., Saji, N., & Majid, A. (2015). Where are the concepts? What words can and can’t reveal. In E. Margolis, & S. Laurence (Eds.), The conceptual Mind: New directions in the study of concepts (pp. 291-326). Cambridge, MA: MIT Press.

    Abstract

    Concepts are so fundamental to human cognition that Fodor declared the heart of a cognitive science to be its theory of concepts. To study concepts, though, cognitive scientists need to be able to identify some. The prevailing assumption has been that they are revealed by words such as triangle, table, and robin. But languages vary dramatically in how they carve up the world with names. Either ordinary concepts must be heavily language dependent, or names cannot be a direct route to concepts. We asked speakers of English, Dutch, Spanish, and Japanese to name a set of 36 video clips of human locomotion and to judge the similarities among them. We investigated what name inventories, name extensions, scaling solutions on name similarity, and scaling solutions on nonlinguistic similarity from the groups, individually and together, suggest about the underlying concepts. Aggregated naming data and similarity solutions converged on results distinct from individual languages.
  • Mamus, E., Rissman, L., Majid, A., & Ozyurek, A. (2019). Effects of blindfolding on verbal and gestural expression of path in auditory motion events. In A. K. Goel, C. M. Seifert, & C. C. Freksa (Eds.), Proceedings of the 41st Annual Meeting of the Cognitive Science Society (CogSci 2019) (pp. 2275-2281). Montreal, QC: Cognitive Science Society.

    Abstract

    Studies have claimed that blind people’s spatial representations are different from those of sighted people, and that blind people display superior auditory processing. Due to the nature of auditory and haptic information, it has been proposed that blind people have spatial representations that are more sequential than those of sighted people. Even the temporary loss of sight—such as through blindfolding—can affect spatial representations, but not much research has been done on this topic. We compared blindfolded and sighted people’s linguistic spatial expressions and non-linguistic localization accuracy to test how blindfolding affects the representation of path in auditory motion events. We found that blindfolded people were as good as sighted people when localizing simple sounds, but they outperformed sighted people when localizing auditory motion events. Blindfolded people’s path-related speech also included more sequential, and fewer holistic, elements. Our results indicate that even temporary loss of sight influences spatial representations of auditory motion events.
  • Marcoux, K., & Ernestus, M. (2019). Differences between native and non-native Lombard speech in terms of pitch range. In M. Ochmann, M. Vorländer, & J. Fels (Eds.), Proceedings of the ICA 2019 and EAA Euroregio. 23rd International Congress on Acoustics, integrating 4th EAA Euroregio 2019 (pp. 5713-5720). Berlin: Deutsche Gesellschaft für Akustik.

    Abstract

    Lombard speech, speech produced in noise, is acoustically different from speech produced in quiet (plain speech) in several ways, including having a higher and wider F0 range (pitch). Extensive research on native Lombard speech does not consider that non-natives experience a higher cognitive load while producing speech and that the native language may influence the non-native speech. We investigated pitch range in plain and Lombard speech in natives and non-natives. Dutch and American-English speakers read contrastive question-answer pairs in quiet and in noise in English, while the Dutch also read Dutch sentence pairs. We found that Lombard speech is characterized by a wider pitch range than plain speech for all speakers (native English, non-native English, and native Dutch). This shows that non-natives also widen their pitch range in Lombard speech. In sentences with early focus, we see the same increase in pitch range when going from plain to Lombard speech in native and non-native English, but a smaller increase in native Dutch. In sentences with late focus, we see the biggest increase for the native English speakers, followed by the non-native English speakers and then the native Dutch. Together these results indicate an effect of the native language on non-native Lombard speech.
  • Marcoux, K., & Ernestus, M. (2019). Pitch in native and non-native Lombard speech. In S. Calhoun, P. Escudero, M. Tabain, & P. Warren (Eds.), Proceedings of the 19th International Congress of Phonetic Sciences (ICPhS 2019) (pp. 2605-2609). Canberra, Australia: Australasian Speech Science and Technology Association Inc.

    Abstract

    Lombard speech, speech produced in noise, is typically produced with a higher fundamental frequency (F0, pitch) compared to speech in quiet. This paper examined potential differences between native and non-native Lombard speech by analyzing median pitch in sentences with early or late focus produced in quiet and in noise. We found an increase in pitch in late-focus sentences in noise for Dutch speakers in both English and Dutch, and for American-English speakers in English. These results show that non-native speakers produce Lombard speech, despite their higher cognitive load. For the early-focus sentences, we found a difference between the Dutch and the American-English speakers. Whereas the Dutch showed an increased F0 in noise in both English and Dutch, the American-English speakers did not in English. Together, these results suggest that some acoustic characteristics of Lombard speech, such as pitch, may be language-specific, potentially resulting in the native language influencing non-native Lombard speech.
  • Martin, R. C., & Tan, Y. (2015). Sentence comprehension deficits: Independence and interaction of syntax, semantics, and working memory. In A. E. Hillis (Ed.), Handbook of adult language disorders (2nd ed., pp. 303-327). Boca Raton: CRC Press.
  • Maslowski, M. (2019). Fast speech can sound slow: Effects of contextual speech rate on word recognition. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Massaro, D. W., & Jesse, A. (2007). Audiovisual speech perception and word recognition. In M. G. Gaskell (Ed.), The Oxford handbook of psycholinguistics (pp. 19-35). Oxford: Oxford University Press.

    Abstract

    In most of our everyday conversations, we not only hear but also see each other talk. Our understanding of speech benefits from having the speaker's face present. This finding immediately necessitates the question of how the information from the different perceptual sources is used to reach the best overall decision. This need for processing of multiple sources of information also exists in auditory speech perception, however. Audiovisual speech simply shifts the focus from intramodal to intermodal sources but does not necessitate a qualitatively different form of processing. It is essential that a model of speech perception operationalizes the concept of processing multiple sources of information so that quantitative predictions can be made. This chapter gives an overview of the main research questions and findings unique to audiovisual speech perception and word recognition research as well as what general questions about speech perception and cognition the research in this field can answer. The main theoretical approaches to explain integration and audiovisual speech perception are introduced and critically discussed. The chapter also provides an overview of the role of visual speech as a language learning tool in multimodal training.
  • Matić, D. (2015). Information structure in linguistics. In J. D. Wright (Ed.), International Encyclopedia of the Social & Behavioral Sciences (2nd ed.) Vol. 12 (pp. 95-99). Amsterdam: Elsevier. doi:10.1016/B978-0-08-097086-8.53013-X.

    Abstract

    Information structure is a subfield of linguistic research dealing with the ways speakers encode instructions to the hearer on how to process the message relative to their temporary mental states. To this end, sentences are segmented into parts conveying known and yet-unknown information, usually labeled ‘topic’ and ‘focus.’ Many languages have developed specialized grammatical and lexical means of indicating this segmentation.
  • McDonough, L., Choi, S., Bowerman, M., & Mandler, J. M. (1998). The use of preferential looking as a measure of semantic development. In C. Rovee-Collier, L. P. Lipsitt, & H. Hayne (Eds.), Advances in Infancy Research. Volume 12. (pp. 336-354). Stamford, CT: Ablex Publishing.
  • McQueen, J. M. (2007). Eight questions about spoken-word recognition. In M. G. Gaskell (Ed.), The Oxford handbook of psycholinguistics (pp. 37-53). Oxford: Oxford University Press.

    Abstract

    This chapter is a review of the literature in experimental psycholinguistics on spoken word recognition. It is organized around eight questions. 1. Why are psycholinguists interested in spoken word recognition? 2. What information in the speech signal is used in word recognition? 3. Where are the words in the continuous speech stream? 4. Which words did the speaker intend? 5. When, as the speech signal unfolds over time, are the phonological forms of words recognized? 6. How are words recognized? 7. Whither spoken word recognition? 8. Who are the researchers in the field?
  • McQueen, J. M., & Cutler, A. (1998). Morphology in word recognition. In A. M. Zwicky, & A. Spencer (Eds.), The handbook of morphology (pp. 406-427). Oxford: Blackwell.
  • McQueen, J. M., & Meyer, A. S. (2019). Key issues and future directions: Towards a comprehensive cognitive architecture for language use. In P. Hagoort (Ed.), Human language: From genes and brain to behavior (pp. 85-96). Cambridge, MA: MIT Press.
  • McQueen, J. M., & Cutler, A. (1998). Spotting (different kinds of) words in (different kinds of) context. In R. Mannell, & J. Robert-Ribes (Eds.), Proceedings of the Fifth International Conference on Spoken Language Processing: Vol. 6 (pp. 2791-2794). Sydney: ICSLP.

    Abstract

    The results of a word-spotting experiment are presented in which Dutch listeners tried to spot different types of bisyllabic Dutch words embedded in different types of nonsense contexts. Embedded verbs were not reliably harder to spot than embedded nouns; this suggests that nouns and verbs are recognised via the same basic processes. Iambic words were no harder to spot than trochaic words, suggesting that trochaic words are not in principle easier to recognise than iambic words. Words were harder to spot in consonantal contexts (i.e., contexts which themselves could not be words) than in longer contexts which contained at least one vowel (i.e., contexts which, though not words, were possible words of Dutch). A control experiment showed that this difference was not due to acoustic differences between the words in each context. The results support the claim that spoken-word recognition is sensitive to the viability of sound sequences as possible words.
  • Merkx, D., Frank, S., & Ernestus, M. (2019). Language learning using speech to image retrieval. In Proceedings of Interspeech 2019 (pp. 1841-1845). doi:10.21437/Interspeech.2019-3067.

    Abstract

    Humans learn language by interaction with their environment and listening to other humans. It should also be possible for computational models to learn language directly from speech but so far most approaches require text. We improve on existing neural network approaches to create visually grounded embeddings for spoken utterances. Using a combination of a multi-layer GRU, importance sampling, cyclic learning rates, ensembling and vectorial self-attention our results show a remarkable increase in image-caption retrieval performance over previous work. Furthermore, we investigate which layers in the model learn to recognise words in the input. We find that deeper network layers are better at encoding word presence, although the final layer has slightly lower performance. This shows that our visually grounded sentence encoder learns to recognise words from the input even though it is not explicitly trained for word recognition.
  • Mitterer, H. (2007). Top-down effects on compensation for coarticulation are not replicable. In H. van Hamme, & R. van Son (Eds.), Proceedings of Interspeech 2007 (pp. 1601-1604). Adelaide: Causal Productions.

    Abstract

    Listeners use lexical knowledge to judge what speech sounds they heard. I investigated whether such lexical influences are truly top-down or just reflect a merging of perceptual and lexical constraints. This was achieved by testing whether the lexically determined identity of a phone exerts the appropriate context effects on surrounding phones. The current investigation focuses on compensation for coarticulation in vowel-fricative sequences, where the presence of a rounded vowel (/y/ rather than /i/) leads fricatives to be perceived as /s/ rather than /ʃ/. This result was consistently found in all three experiments. A vowel was also more likely to be perceived as rounded /y/ if that led listeners to perceive words rather than nonwords (Dutch: meny, English id., vs. meni, nonword). This lexical influence on the perception of the vowel had, however, no consistent influence on the perception of the following fricative.
  • Mitterer, H., & McQueen, J. M. (2007). Tracking perception of pronunciation variation by tracking looks to printed words: The case of word-final /t/. In J. Trouvain, & W. J. Barry (Eds.), Proceedings of the 16th International Congress of Phonetic Sciences (ICPhS 2007) (pp. 1929-1932). Dudweiler: Pirrot.

    Abstract

    We investigated perception of words with reduced word-final /t/ using an adapted eyetracking paradigm. Dutch listeners followed spoken instructions to click on printed words which were accompanied on a computer screen by simple shapes (e.g., a circle). Targets were either above or next to their shapes, and the shapes uniquely identified the targets when the spoken forms were ambiguous between words with or without final /t/ (e.g., bult, bump, vs. bul, diploma). Analysis of listeners’ eye-movements revealed, in contrast to earlier results, that listeners use the following segmental context when compensating for /t/-reduction. Reflecting that /t/-reduction is more likely to occur before bilabials, listeners were more likely to look at the /t/-final words if the next word’s first segment was bilabial. This result supports models of speech perception in which prelexical phonological processes use segmental context to modulate word recognition.
  • Mitterer, H. (2007). Behavior reflects the (degree of) reality of phonological features in the brain as well. In J. Trouvain, & W. J. Barry (Eds.), Proceedings of the 16th International Congress of Phonetic Sciences (ICPhS 2007) (pp. 127-130). Dudweiler: Pirrot.

    Abstract

    To assess the reality of phonological features in language processing (vs. language description), one needs to specify the distinctive claims of distinctive-feature theory. Two of the more farreaching claims are compositionality and generalizability. I will argue that there is some evidence for the first and evidence against the second claim from a recent behavioral paradigm. Highlighting the contribution of a behavioral paradigm also counterpoints the use of brain measures as the only way to elucidate what is "real for the brain". The contributions of the speakers exemplify how brain measures can help us to understand the reality of phonological features in language processing. The evidence is, however, not convincing for a) the claim for underspecification of phonological features—which has to deal with counterevidence from behavioral as well as brain measures—, and b) the claim of position independence of phonological features.
  • Moers, C., Janse, E., & Meyer, A. S. (2015). Probabilistic reduction in reading aloud: A comparison of younger and older adults. In M. Wolters, J. Livingstone, B. Beattie, R. Smith, M. MacMahon, J. Stuart-Smith, & J. Scobbie (Eds.), Proceedings of the 18th International Congress of Phonetic Sciences (ICPhS 2015). London: International Phonetic Association.

    Abstract

    Frequent and predictable words are generally pronounced with less effort and are therefore acoustically more reduced than less frequent or unpredictable words. Local predictability can be operationalised by Transitional Probability (TP), which indicates how likely a word is to occur given its immediate context. We investigated whether and how probabilistic reduction effects on word durations change with adult age when reading aloud content words embedded in sentences. The results showed equally large frequency effects on verb and noun durations for both younger (Mage = 20 years) and older (Mage = 68 years) adults. Backward TP also affected word duration for younger and older adults alike. Forward TP, however, had no significant effect on word duration in either age group. Our results resemble earlier findings of more robust Backward TP effects compared to Forward TP effects. Furthermore, unlike the often-reported decline in predictive processing with aging, probabilistic reduction effects remain stable across adulthood.
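    As a rough sketch of how transitional probabilities of the kind described in this abstract are estimated from corpus counts (the toy corpus and function names below are illustrative, not the study's materials): forward TP conditions a word's probability on the preceding word, backward TP on the following word.

```python
from collections import Counter

# Hypothetical toy corpus for illustration only.
words = "the cat sat on the mat the cat ran".split()
bigrams = Counter(zip(words, words[1:]))
unigrams = Counter(words)

def forward_tp(w1, w2):
    # P(w2 | w1): how predictable w2 is given the word that precedes it.
    return bigrams[(w1, w2)] / unigrams[w1]

def backward_tp(w1, w2):
    # P(w1 | w2): how predictable w1 is given the word that follows it.
    return bigrams[(w1, w2)] / unigrams[w2]

print(forward_tp("the", "cat"))   # "the" precedes "cat" 2 of the 3 times it occurs
print(backward_tp("the", "cat"))  # "cat" is preceded by "the" both times it occurs
```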
  • Moisik, S. R., Zhi Yun, D. P., & Dediu, D. (2019). Active adjustment of the cervical spine during pitch production compensates for shape: The ArtiVarK study. In S. Calhoun, P. Escudero, M. Tabain, & P. Warren (Eds.), Proceedings of the 19th International Congress of Phonetic Sciences (ICPhS 2019) (pp. 864-868). Canberra, Australia: Australasian Speech Science and Technology Association Inc.

    Abstract

    The anterior lordosis of the cervical spine is thought to contribute to pitch (fo) production by influencing cricoid rotation as a function of larynx height. This study examines the matter of inter-individual variation in cervical spine shape and whether this has an influence on how fo is produced along increasing or decreasing scales, using the ArtiVarK dataset, which contains real-time MRI pitch production data. We find that the cervical spine actively participates in fo production, but the amount of displacement depends on individual shape. In general, anterior spine motion (tending toward cervical lordosis) occurs for low fo, while posterior movement (tending towards cervical kyphosis) occurs for high fo.
  • Moisik, S. R., & Dediu, D. (2015). Anatomical biasing and clicks: Preliminary biomechanical modelling. In H. Little (Ed.), Proceedings of the 18th International Congress of Phonetic Sciences (ICPhS 2015) Satellite Event: The Evolution of Phonetic Capabilities: Causes, constraints, consequences (pp. 8-13). Glasgow: ICPhS.

    Abstract

    It has been observed by several researchers that the Khoisan palate tends to lack a prominent alveolar ridge. A preliminary biomechanical model of click production was created to examine whether these sounds might be subject to an anatomical bias associated with alveolar ridge size. Results suggest the bias is plausible, taking the form of decreased articulatory effort and improved volume change characteristics; however, further modelling and experimental research is required to solidify the claim.
  • Morano, L., Ernestus, M., & Ten Bosch, L. (2015). Schwa reduction in low-proficiency L2 speakers: Learning and generalization. In Scottish consortium for ICPhS, M. Wolters, J. Livingstone, B. Beattie, R. Smith, M. MacMahon, J. Stuart-Smith, & J. Scobbie (Eds.), Proceedings of the 18th International Congress of Phonetic Sciences (ICPhS 2015). Glasgow: University of Glasgow.

    Abstract

    This paper investigated the learnability and generalizability of French schwa alternation by Dutch low-proficiency second language learners. We trained 40 participants on 24 new schwa words by exposing them equally often to the reduced and full forms of these words. We then assessed participants' accuracy and reaction times to these newly learnt words as well as 24 previously encountered schwa words with an auditory lexical decision task. Our results show learning of the new words in both forms. This suggests that lack of exposure is probably the main cause of learners' difficulties with reduced forms. Nevertheless, the full forms were slightly better recognized than the reduced ones, possibly due to phonetic and phonological properties of the reduced forms. We also observed no generalization to previously encountered words, suggesting that our participants stored both of the learnt word forms and did not create a rule that applies to all schwa words.
  • Mulder, K., Brekelmans, G., & Ernestus, M. (2015). The processing of schwa reduced cognates and noncognates in non-native listeners of English. In Scottish consortium for ICPhS, M. Wolters, J. Livingstone, B. Beattie, R. Smith, M. MacMahon, J. Stuart-Smith, & J. Scobbie (Eds.), Proceedings of the 18th International Congress of Phonetic Sciences (ICPhS 2015). Glasgow: University of Glasgow.

    Abstract

    In speech, words are often reduced rather than fully pronounced (e.g., /ˈsʌmri/ for /ˈsʌməri/, summary). Non-native listeners may have problems in processing these reduced forms, because they have encountered them less often. This paper addresses the question whether this also holds for highly proficient non-natives and for words with similar forms and meanings in the non-natives' mother tongue (i.e., cognates). In an English auditory lexical decision task, natives and highly proficient Dutch non-natives of English listened to cognates and non-cognates that were presented in full or without their post-stress schwa. The data show that highly proficient learners are affected by reduction as much as native speakers. Nevertheless, the two listener groups appear to process reduced forms differently, because non-natives produce more errors on reduced cognates than on non-cognates. While listening to reduced forms, non-natives appear to be hindered by the co-activated lexical representations of cognate forms in their native language.
  • Muysken, P., Hammarström, H., Birchall, J., van Gijn, R., Krasnoukhova, O., & Müller, N. (2015). Linguistic Areas, bottom up or top down? The case of the Guaporé-Mamoré region. In B. Comrie, & L. Golluscio (Eds.), Language Contact and Documentation / Contacto lingüístico y documentación (pp. 205-238). Berlin: De Gruyter.
  • Neger, T. M., Rietveld, T., & Janse, E. (2015). Adult age effects in auditory statistical learning. In M. Wolters, J. Livingstone, B. Beattie, R. Smith, M. MacMahon, J. Stuart-Smith, & J. Scobbie (Eds.), Proceedings of the 18th International Congress of Phonetic Sciences (ICPhS 2015). London: International Phonetic Association.

    Abstract

    Statistical learning plays a key role in language processing, e.g., for speech segmentation. Older adults have been reported to show less statistical learning on the basis of visual input than younger adults. Given age-related changes in perception and cognition, we investigated whether statistical learning is also impaired in the auditory modality in older compared to younger adults and whether individual learning ability is associated with measures of perceptual (i.e., hearing sensitivity) and cognitive functioning in both age groups. Thirty younger and thirty older adults performed an auditory artificial-grammar-learning task to assess their statistical learning ability. In younger adults, perceptual effort came at the cost of processing resources required for learning. Inhibitory control (as indexed by Stroop color-naming performance) did not predict auditory learning. Overall, younger and older adults showed the same amount of auditory learning, indicating that statistical learning ability is preserved over the adult life span.
  • Nijveld, A., Ten Bosch, L., & Ernestus, M. (2019). ERP signal analysis with temporal resolution using a time window bank. In Proceedings of Interspeech 2019 (pp. 1208-1212). doi:10.21437/Interspeech.2019-2729.

    Abstract

    In order to study the cognitive processes underlying speech comprehension, neuro-physiological measures (e.g., EEG and MEG), or behavioural measures (e.g., reaction times and response accuracy) can be applied. Compared to behavioural measures, EEG signals can provide a more fine-grained and complementary view of the processes that take place during the unfolding of an auditory stimulus.

    EEG signals are often analysed after having chosen specific time windows, which are usually based on the temporal structure of ERP components expected to be sensitive to the experimental manipulation. However, as the timing of ERP components may vary between experiments, trials, and participants, such a-priori defined analysis time windows may significantly hamper the exploratory power of the analysis of components of interest. In this paper, we explore a wide-window analysis method applied to EEG signals collected in an auditory repetition priming experiment.

    This approach is based on a bank of temporal filters arranged along the time axis in combination with linear mixed effects modelling. Crucially, it permits a temporal decomposition of effects in a single comprehensive statistical model which captures the entire EEG trace.
  • Nijveld, A., Ten Bosch, L., & Ernestus, M. (2015). Exemplar effects arise in a lexical decision task, but only under adverse listening conditions. In Scottish consortium for ICPhS, M. Wolters, J. Livingstone, B. Beattie, R. Smith, M. MacMahon, J. Stuart-Smith, & J. Scobbie (Eds.), Proceedings of the 18th International Congress of Phonetic Sciences (ICPhS 2015). Glasgow: University of Glasgow.

    Abstract

    This paper studies the influence of adverse listening conditions on exemplar effects in priming experiments that do not instruct participants to use their episodic memories. We conducted two lexical decision experiments, in which a prime and a target represented the same word type and could be spoken by the same or a different speaker. In Experiment 1, participants listened to clear speech, and showed no exemplar effects: they recognised repetitions by the same speaker as quickly as different speaker repetitions. In Experiment 2, the stimuli contained noise, and exemplar effects did arise. Importantly, Experiment 1 elicited longer average RTs than Experiment 2, a result that contradicts the time-course hypothesis, according to which exemplars only play a role when processing is slow. Instead, our findings support the hypothesis that exemplar effects arise under adverse listening conditions, when participants are stimulated to use their episodic memories in addition to their mental lexicons.
  • Nijveld, A. (2019). The role of exemplars in speech comprehension. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Noordman, L. G. M., Vonk, W., Cozijn, R., & Frank, S. (2015). Causal inferences and world knowledge. In E. J. O'Brien, A. E. Cook, & R. F. Lorch (Eds.), Inferences during reading (pp. 260-289). Cambridge, UK: Cambridge University Press.
  • Noordman, L. G., & Vonk, W. (1998). Discourse comprehension. In A. D. Friederici (Ed.), Language comprehension: a biological perspective (pp. 229-262). Berlin: Springer.

    Abstract

    The human language processor is conceived as a system that consists of several interrelated subsystems. Each subsystem performs a specific task in the complex process of language comprehension and production. A subsystem receives a particular input, performs certain specific operations on this input and yields a particular output. The subsystems can be characterized in terms of the transformations that relate the input representations to the output representations. An important issue in describing the language processing system is to identify the subsystems and to specify the relations between the subsystems. These relations can be conceived in two different ways. In one conception the subsystems are autonomous. They are related to each other only by the input-output channels. The operations in one subsystem are not affected by another system. The subsystems are modular, that is they are independent. In the other conception, the different subsystems influence each other. A subsystem affects the processes in another subsystem. In this conception there is an interaction between the subsystems.
