Publications

  • Heilbron, M., Ehinger, B., Hagoort, P., & De Lange, F. P. (2019). Tracking naturalistic linguistic predictions with deep neural language models. In Proceedings of the 2019 Conference on Cognitive Computational Neuroscience (pp. 424-427). doi:10.32470/CCN.2019.1096-0.

    Abstract

    Prediction in language has traditionally been studied using simple designs in which neural responses to expected and unexpected words are compared in a categorical fashion. However, these designs have been contested as being ‘prediction encouraging’, potentially exaggerating the importance of prediction in language understanding. A few recent studies have begun to address these worries by using model-based approaches to probe the effects of linguistic predictability in naturalistic stimuli (e.g. continuous narrative). However, these studies so far only looked at very local forms of prediction, using models that take no more than the prior two words into account when computing a word’s predictability. Here, we extend this approach using a state-of-the-art neural language model that can take roughly 500 times longer linguistic contexts into account. Predictability estimates from the neural network offer a much better fit to EEG data from subjects listening to naturalistic narrative than simpler models, and reveal strong surprise responses akin to the P200 and N400. These results show that predictability effects in language are not a side-effect of simple designs, and demonstrate the practical use of recent advances in AI for the cognitive neuroscience of language.
  • Hellwig, F. M., & Lüpke, F. (2001). Caused positions. In S. C. Levinson, & N. J. Enfield (Eds.), Manual for the field season 2001 (pp. 126-128). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.874644.

    Abstract

    What kinds of resources do languages have for describing location and position? For some languages, verbs play an important role in describing different kinds of situations (e.g., whether a bottle is standing or lying on the table). This task is designed to examine the use of positional verbs in locative constructions, with respect to the presence or absence of a human “positioner”. Participants are asked to describe video clips showing locative states that occur spontaneously, or because of active interference from a person. The task follows on from two earlier tools for the elicitation of static locative descriptions (BowPed and the Ameka picture book task). A number of additional variables (e.g. canonical vs. non-canonical orientation of the figure) are also targeted in the stimulus set.

    Additional information

    2001_Caused_positions.zip
  • Janse, E. (2001). Comparing word-level intelligibility after linear vs. non-linear time-compression. In Proceedings of the VIIth European Conference on Speech Communication and Technology Eurospeech (pp. 1407-1410).
  • Joo, H., Jang, J., Kim, S., Cho, T., & Cutler, A. (2019). Prosodic structural effects on coarticulatory vowel nasalization in Australian English in comparison to American English. In S. Calhoun, P. Escudero, M. Tabain, & P. Warren (Eds.), Proceedings of the 19th International Congress of Phonetic Sciences (ICPhS 2019) (pp. 835-839). Canberra, Australia: Australasian Speech Science and Technology Association Inc.

    Abstract

    This study investigates effects of prosodic factors (prominence, boundary) on coarticulatory V-nasalization in Australian English (AusE) in CVN and NVC in comparison to those in American English (AmE). As in AmE, prominence was found to lengthen N, but to reduce V-nasalization, enhancing N’s nasality and V’s orality, respectively (paradigmatic contrast enhancement). But the prominence effect in CVN was more robust than that in AmE. Again similar to findings in AmE, boundary induced a reduction of N-duration and V-nasalization phrase-initially (syntagmatic contrast enhancement), and increased the nasality of both C and V phrase-finally. But AusE showed some differences in terms of the magnitude of V-nasalization and N-duration. The results suggest that linguistic contrast enhancements underlie prosodic-structure modulation of coarticulatory V-nasalization in comparable ways across dialects, while the fine phonetic detail indicates that the phonetics-prosody interplay is internalized in the individual dialect’s phonetic grammar.
  • Jordens, P. (1998). Defaultformen des Präteritums. Zum Erwerb der Vergangenheitsmorphologie im Niederländischen. In H. Wegener (Ed.), Eine zweite Sprache lernen (pp. 61-88). Tübingen, Germany: Verlag Gunter Narr.
  • Kempen, G., & Vosse, T. (1992). A language-sensitive text editor for Dutch. In P. O’Brian Holt, & N. Williams (Eds.), Computers and writing: State of the art (pp. 68-77). Dordrecht: Kluwer Academic Publishers.

    Abstract

    Modern word processors begin to offer a range of facilities for spelling, grammar and style checking in English. For the Dutch language hardly anything is available as yet. Many commercial word processing packages do include a hyphenation routine and a lexicon-based spelling checker but the practical usefulness of these tools is limited due to certain properties of Dutch orthography, as we will explain below. In this chapter we describe a text editor which incorporates a great deal of lexical, morphological and syntactic knowledge of Dutch and monitors the orthographical quality of Dutch texts. Section 1 deals with those aspects of Dutch orthography which pose problems to human authors as well as to computational language sensitive text editing tools. In section 2 we describe the design and the implementation of the text editor we have built. Section 3 is mainly devoted to a provisional evaluation of the system.
  • Kempen, G. (1996). Computational models of syntactic processing in human language comprehension. In T. Dijkstra, & K. De Smedt (Eds.), Computational psycholinguistics: Symbolic and subsymbolic models of language processing (pp. 192-220). London: Taylor & Francis.
  • Kempen, G. (1996). "De zwoele groei van den zinsbouw": De wonderlijke levende grammatica van Jac. van Ginneken uit De Roman van een Kleuter (1917). Bezorgd en van een nawoord voorzien door Gerard Kempen. In A. Foolen, & J. Noordegraaf (Eds.), De taal is kennis van de ziel: Opstellen over Jac. van Ginneken (1877-1945) (pp. 173-216). Münster: Nodus Publikationen.
  • Kempen, G., & Harbusch, K. (1998). A 'tree adjoining' grammar without adjoining: The case of scrambling in German. In Fourth International Workshop on Tree Adjoining Grammars and Related Frameworks (TAG+4).
  • Kempen, G. (1992). Generation. In W. Bright (Ed.), International encyclopedia of linguistics (pp. 59-61). New York: Oxford University Press.
  • Kempen, G. (1996). Human language technology can modernize writing and grammar instruction. In COLING '96 Proceedings of the 16th conference on Computational linguistics - Volume 2 (pp. 1005-1006). Stroudsburg, PA: Association for Computational Linguistics.
  • Kempen, G. (1992). Language technology and language instruction: Computational diagnosis of word level errors. In M. Swartz, & M. Yazdani (Eds.), Intelligent tutoring systems for foreign language learning: The bridge to international communication (pp. 191-198). Berlin: Springer.
  • Kempen, G., & Janssen, S. (1996). Omspellen: Reuze(n)karwei of peule(n)schil? In H. Croll, & J. Creutzberg (Eds.), Proceedings of the 5e Dag van het Document (pp. 143-146). Projectbureau Croll en Creutzberg.
  • Kempen, G. (1998). Sentence parsing. In A. D. Friederici (Ed.), Language comprehension: A biological perspective (pp. 213-228). Berlin: Springer.
  • Kempen, G. (1992). Second language acquisition as a hybrid learning process. In F. Engel, D. Bouwhuis, T. Bösser, & G. d'Ydewalle (Eds.), Cognitive modelling and interactive environments in language learning (pp. 139-144). Berlin: Springer.
  • Kidd, E., Bavin, E. L., & Rhodes, B. (2001). Two-year-olds' knowledge of verbs and argument structures. In M. Almgren, A. Barreña, M.-J. Ezeuzabarrena, I. Idiazabal, & B. MacWhinney (Eds.), Research on child language acquisition: Proceedings of the 8th Conference of the International Association for the Study of Child Language (pp. 1368-1382). Somerville: Cascadilla Press.
  • Kita, S., Danziger, E., & Stolz, C. (2001). Cultural specificity of spatial schemas, as manifested in spontaneous gestures. In M. Gattis (Ed.), Spatial Schemas and Abstract Thought (pp. 115-146). Cambridge, MA, USA: MIT Press.
  • Kita, S., van Gijn, I., & van der Hulst, H. (1998). Movement phases in signs and co-speech gestures, and their transcription by human coders. In Gesture and Sign-Language in Human-Computer Interaction (Lecture Notes in Artificial Intelligence - LNCS Subseries, Vol. 1371) (pp. 23-35). Berlin, Germany: Springer-Verlag.

    Abstract

    The previous literature has suggested that the hand movement in co-speech gestures and signs consists of a series of phases with qualitatively different dynamic characteristics. In this paper, we propose a syntagmatic rule system for movement phases that applies to both co-speech gestures and signs. Descriptive criteria for the rule system were developed for the analysis of video-recorded continuous production of signs and gestures. The analysis involves segmenting a stream of body movement into phases and identifying different phase types. Two human coders used the criteria to analyze signs and co-speech gestures produced in natural discourse. It was found that the criteria yielded good inter-coder reliability. These criteria can be used in the automatic recognition of signs and co-speech gestures in order to segment continuous production and identify the potentially meaning-bearing phases.
  • Kita, S. (2001). Locally-anchored spatial gestures, version 2: Historical description of the local environment as a gesture elicitation task. In S. C. Levinson, & N. J. Enfield (Eds.), Manual for the field season 2001 (pp. 132-135). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.874647.

    Abstract

    Gesture is an integral part of face-to-face communication, and provides a rich area for cross-cultural comparison. “Locally-anchored spatial gestures” are gestures that are roughly oriented to the actual geographical direction of referents. For example, such gestures may point to a location or a thing, trace the shape of a path, or indicate the direction of a particular area. The goal of this task is to elicit locally-anchored spatial gestures across different cultures. The task follows an interview format, where one participant prompts another to talk in detail about a specific area that the main speaker knows well. The data can be used for additional purposes such as the investigation of demonstratives.
  • Kita, S. (2001). Recording recommendations for gesture studies. In S. C. Levinson, & N. J. Enfield (Eds.), Manual for the field season 2001 (pp. 130-131). Nijmegen: Max Planck Institute for Psycholinguistics.
  • Klein, W. (2001). Das Ende vor Augen: Deutsch als Wissenschaftssprache. In F. Debus, F. Kollmann, & U. Pörken (Eds.), Deutsch als Wissenschaftssprache im 20. Jahrhundert (pp. 289-293). Mainz: Akademie der Wissenschaften und der Literatur.
  • Klein, W. (2001). Deiktische Orientierung. In M. Haspelmath, E. König, W. Oesterreicher, & W. Raible (Eds.), Sprachtypologie und sprachliche Universalien: Vol. 1/1 (pp. 575-590). Berlin: de Gruyter.
  • Klein, W. (1992). Der Fall Horten gegen Delius, oder: Der Laie, der Fachmann und das Recht. In G. Grewendorf (Ed.), Rechtskultur als Sprachkultur: Zur forensischen Funktion der Sprachanalyse (pp. 284-313). Frankfurt am Main: Suhrkamp.
  • Klein, W. (1998). Ein Blick zurück auf die Varietätengrammatik. In U. Ammon, K. Mattheier, & P. Nelde (Eds.), Sociolinguistica: Internationales Jahrbuch für europäische Soziolinguistik (pp. 22-38). Tübingen: Niemeyer.
  • Klein, W. (2001). Elementary forms of linguistic organisation. In S. Ward, & J. Trabant (Eds.), The origins of language (pp. 81-102). Berlin: Mouton de Gruyter.
  • Klein, W. (1996). Essentially social: On the origin of linguistic knowledge in the individual. In P. Baltes, & U. Staudinger (Eds.), Interactive minds (pp. 88-107). Cambridge: Cambridge University Press.
  • Klein, W. (2001). Die Linguistik ist anders geworden. In S. Anschütz, S. Kanngießer, & G. Rickheit (Eds.), A Festschrift for Manfred Briegel: Spektren der Linguistik (pp. 51-72). Wiesbaden: Deutscher Universitätsverlag.
  • Klein, W. (1998). Assertion and finiteness. In N. Dittmar, & Z. Penner (Eds.), Issues in the theory of language acquisition: Essays in honor of Jürgen Weissenborn (pp. 225-245). Bern: Peter Lang.
  • Klein, W., & Perdue, C. (1992). Framework. In W. Klein, & C. Perdue (Eds.), Utterance structure: Developing grammars again (pp. 11-59). Amsterdam: Benjamins.
  • Klein, W. (1996). Language acquisition at different ages. In D. Magnusson (Ed.), Individual development over the lifespan: Biological and psychosocial perspectives (pp. 88-108). Cambridge: Cambridge University Press.
  • Klein, W. (2001). Lexicology and lexicography. In N. Smelser, & P. Baltes (Eds.), International encyclopedia of the social & behavioral sciences: Vol. 13 (pp. 8764-8768). Amsterdam: Elsevier Science.
  • Klein, W., & Carroll, M. (1992). The acquisition of German. In W. Klein, & C. Perdue (Eds.), Utterance structure: Developing grammars again (pp. 123-188). Amsterdam: Benjamins.
  • Klein, W. (2001). Second language acquisition. In N. Smelser, & P. Baltes (Eds.), International encyclopedia of the social & behavioral sciences: Vol. 20 (pp. 13768-13771). Amsterdam: Elsevier Science.
  • Klein, W., & Vater, H. (1998). The perfect in English and German. In L. Kulikov, & H. Vater (Eds.), Typology of verbal categories: Papers presented to Vladimir Nedjalkov on the occasion of his 70th birthday (pp. 215-235). Tübingen: Niemeyer.
  • Klein, W. (2001). Time and again. In C. Féry, & W. Sternefeld (Eds.), Audiatur vox sapientiae: A festschrift for Arnim von Stechow (pp. 267-286). Berlin: Akademie Verlag.
  • Klein, W. (2001). Typen und Konzepte des Spracherwerbs. In L. Götze, G. Helbig, G. Henrici, & H. Krumm (Eds.), Deutsch als Fremdsprache (pp. 604-616). Berlin: de Gruyter.
  • Kuijpers, C., Van Donselaar, W., & Cutler, A. (1996). Phonological variation: Epenthesis and deletion of schwa in Dutch. In H. T. Bunnell (Ed.), Proceedings of the Fourth International Conference on Spoken Language Processing: Vol. 1 (pp. 94-97). New York: Institute of Electrical and Electronics Engineers.

    Abstract

    Two types of phonological variation in Dutch, resulting from optional rules, are schwa epenthesis and schwa deletion. In a lexical decision experiment we investigated whether the phonological variants were processed similarly to the standard forms. It was found that the two types of variation patterned differently: words with schwa epenthesis were processed faster and more accurately than the standard forms, whereas words with schwa deletion led to slower and less accurate responses. The results are discussed in relation to the role of consonant-vowel alternations in speech processing and the perceptual integrity of onset clusters.
  • Kuijpers, C. T., Coolen, R., Houston, D., & Cutler, A. (1998). Using the head-turning technique to explore cross-linguistic performance differences. In C. Rovee-Collier, L. Lipsitt, & H. Hayne (Eds.), Advances in infancy research: Vol. 12 (pp. 205-220). Stamford: Ablex.
  • Lausberg, H., & Kita, S. (2001). Hemispheric specialization in nonverbal gesticulation investigated in patients with callosal disconnection. In C. Cavé, I. Guaïtella, & S. Santi (Eds.), Oralité et gestualité: Interactions et comportements multimodaux dans la communication. Actes du colloque ORAGE 2001 (pp. 266-270). Paris, France: Éditions L'Harmattan.
  • Lev-Ari, S. (2019). The influence of social network properties on language processing and use. In M. S. Vitevitch (Ed.), Network Science in Cognitive Psychology (pp. 10-29). New York, NY: Routledge.

    Abstract

    Language is a social phenomenon. People learn, process, and use it in social contexts. In other words, the social environment shapes one's linguistic knowledge and the use of that knowledge. To a degree, this is trivial. A child exposed to Japanese will become fluent in Japanese, whereas a child exposed to only Spanish will not understand Japanese but will master the sounds, vocabulary, and grammar of Spanish. Language is a structured system. Sounds and words do not occur randomly but are characterized by regularities. Learners are sensitive to these regularities and exploit them when learning language. People differ in the sizes of their social networks. Some people tend to interact with only a few people, whereas others might interact with a wide range of people. This is reflected in people’s holiday greeting habits: some people might send cards to only a few people, whereas others would send greeting cards to more than 350 people.
  • Levelt, W. J. M. (2001). The architecture of normal spoken language use. In G. Gupta (Ed.), Cognitive science: Issues and perspectives (pp. 457-473). New Delhi: Icon Publications.
  • Levelt, W. J. M. (1996). Preface. In W. J. M. Levelt (Ed.), Advanced psycholinguistics: A Bressanone retrospective for Giovanni B. Flores d'Arcais (pp. VII-IX). Nijmegen: Max Planck Institute for Psycholinguistics.
  • Levelt, W. J. M. (1996). Foreword. In T. Dijkstra, & K. De Smedt (Eds.), Computational psycholinguistics (pp. ix-xi). London: Taylor & Francis.
  • Levelt, W. J. M. (1962). Motion breaking and the perception of causality. In A. Michotte (Ed.), Causalité, permanence et réalité phénoménales: Etudes de psychologie expérimentale (pp. 244-258). Louvain: Publications Universitaires.
  • Levelt, W. J. M., & Plomp, R. (1962). Musical consonance and critical bandwidth. In Proceedings of the 4th International Congress on Acoustics (pp. 55-55).
  • Levelt, W. J. M. (1996). Linguistic intuitions and beyond. In W. J. M. Levelt (Ed.), Advanced psycholinguistics: A Bressanone retrospective for Giovanni B. Flores d'Arcais (pp. 31-35). Nijmegen: Max Planck Institute for Psycholinguistics.
  • Levelt, W. J. M. (1996). Perspective taking and ellipsis in spatial descriptions. In P. Bloom, M. A. Peterson, L. Nadel, & M. F. Garrett (Eds.), Language and space (pp. 77-107). Cambridge, MA: MIT Press.
  • Levelt, W. J. M. (1992). Psycholinguistics: An overview. In W. Bright (Ed.), International encyclopedia of linguistics (Vol. 3) (pp. 290-294). Oxford: Oxford University Press.
  • Levelt, W. J. M. (2001). Relations between speech production and speech perception: Some behavioral and neurological observations. In E. Dupoux (Ed.), Language, brain and cognitive development: Essays in honour of Jacques Mehler (pp. 241-256). Cambridge, MA: MIT Press.
  • Levinson, S. C. (1992). Space in Australian Languages Questionnaire. In S. C. Levinson (Ed.), Space stimuli kit 1.2 (pp. 29-40). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.3512641.

    Abstract

    This questionnaire is designed to explore how spatial relations are encoded in Australian languages, but may be of interest to researchers further afield.
  • Levinson, S. C. (2001). Motion Verb Stimulus (Moverb) version 2. In S. C. Levinson, & N. J. Enfield (Eds.), Manual for the field season 2001 (pp. 9-13). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.3513706.

    Abstract

    How do languages express ideas of movement, and how do they package different components of this domain, such as manner and path of motion? This task uses one large set of stimuli to gain knowledge of certain key aspects of motion verb meanings in the target language, and expands the investigation beyond simple verbs (e.g., go) to include the semantics of motion predications complete with adjuncts (e.g., go across something). Consultants are asked to view and briefly describe 96 animations of a few seconds each. The task is designed to get linguistic elicitations of motion predications under contrastive comparison with other animations in the same set. Unlike earlier tasks, the stimuli focus on inanimate moving items or “figures” (in this case, a ball).
  • Levinson, S. C. (1992). Activity types and language. In P. Drew, & J. Heritage (Eds.), Talk at work: Interaction in institutional settings (pp. 66-100). Cambridge University Press.
  • Levinson, S. C. (2001). Covariation between spatial language and cognition. In M. Bowerman, & S. C. Levinson (Eds.), Language acquisition and conceptual development (pp. 566-588). Cambridge: Cambridge University Press.
  • Levinson, S. C. (1998). Deixis. In J. L. Mey (Ed.), Concise encyclopedia of pragmatics (pp. 200-204). Amsterdam: Elsevier.
  • Levinson, S. C., Kita, S., & Ozyurek, A. (2001). Demonstratives in context: Comparative handicrafts. In S. C. Levinson, & N. J. Enfield (Eds.), Manual for the field season 2001 (pp. 52-54). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.874663.

    Abstract

    Demonstratives (e.g., words such as this and that in English) pivot on relationships between the item being talked about, and features of the speech act situation (e.g., where the speaker and addressee are standing or looking). However, they are only rarely investigated multi-modally, in natural language contexts. This task is designed to build a video corpus of cross-linguistically comparable discourse data for the study of “deixis in action”, while simultaneously supporting the investigation of joint attention as a factor in speaker selection of demonstratives. In the task, two or more speakers are asked to discuss and evaluate a group of similar items (e.g., examples of local handicrafts, tools, produce) that are placed within a relatively defined space (e.g., on a table). The task can additionally provide material for comparison of pointing gesture practices.
  • Levinson, S. C., Bohnemeyer, J., & Enfield, N. J. (2001). “Time and space” questionnaire for “space in thinking” subproject. In S. C. Levinson, & N. J. Enfield (Eds.), Manual for the field season 2001 (pp. 14-20). Nijmegen: Max Planck Institute for Psycholinguistics.

    Abstract

    This entry contains: (1) an invitation to think about to what extent the grammars of space and time share lexical and morphosyntactic resources (the suggestions here are only prompts, since it would take a long questionnaire to fully explore this); (2) a suggestion about how to collect gestural data that might show us to what extent the spatial and temporal domains have a psychological continuity. This is really the goal, but you need to do the linguistic work first or in addition. The goal of this task is to explore the extent to which time is conceptualised on a spatial basis.
  • Levinson, S. C. (1996). Frames of reference and Molyneux's question: Cross-linguistic evidence. In P. Bloom, M. Peterson, L. Nadel, & M. Garrett (Eds.), Language and space (pp. 109-169). Cambridge, MA: MIT press.
  • Levinson, S. C., Brown, P., Danziger, E., De León, L., Haviland, J. B., Pederson, E., & Senft, G. (1992). Man and Tree & Space Games. In S. C. Levinson (Ed.), Space stimuli kit 1.2 (pp. 7-14). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.2458804.

    Abstract

    These classic tasks can be used to explore spatial reference in field settings. They provide a language-independent metric for eliciting spatial language, using a “director-matcher” paradigm. The Man and Tree task deals with location on the horizontal plane with both featured (man) and non-featured (e.g., tree) objects. The Space Games depict various objects (e.g. bananas, lemons) and elicit spatial contrasts not obviously lexicalisable in English.
  • Levinson, S. C. (2001). Maxim. In S. Duranti (Ed.), Key terms in language and culture (pp. 139-142). Oxford: Blackwell.
  • Levinson, S. C. (1998). Minimization and conversational inference. In A. Kasher (Ed.), Pragmatics: Vol. 4 Presupposition, implicature and indirect speech acts (pp. 545-612). London: Routledge.
  • Levinson, S. C., & Toni, I. (2019). Key issues and future directions: Interactional foundations of language. In P. Hagoort (Ed.), Human language: From genes and brain to behavior (pp. 257-261). Cambridge, MA: MIT Press.
  • Levinson, S. C., Enfield, N. J., & Senft, G. (2001). Kinship domain for 'space in thinking' subproject. In S. C. Levinson, & N. J. Enfield (Eds.), Manual for the field season 2001 (pp. 85-88). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.874655.
  • Levinson, S. C., & Wittenburg, P. (2001). Language as cultural heritage - Promoting research and public awareness on the Internet. In J. Renn (Ed.), ECHO - An Infrastructure to Bring European Cultural Heritage Online (pp. 104-111). Berlin: Max Planck Institute for the History of Science.

    Abstract

    The ECHO proposal aims to bring to life the cultural heritage of Europe, through internet technology that encourages collaboration across the Humanities disciplines which interpret it – at the same time making all this scholarship accessible to the citizens of Europe. An essential part of the cultural heritage of Europe is the diverse set of languages used on the continent, in their historical, literary and spoken forms. Amongst these are the ‘hidden languages’ used by minorities but of wide interest to the general public. We take the 18 Sign Languages of the EEC – the natural languages of the deaf - as an example. Little comparative information about these is available, despite their special scientific importance, the widespread public interest and the policy implications. We propose a research project on these languages based on placing fully annotated digitized moving images of each of these languages on the internet. This requires significant development of multi-media technology which would allow distributed annotation of a central corpus, together with the development of special search techniques. The technology would have widespread application to all cultural performances recorded as sound plus moving images. Such a project captures in microcosm the essence of the ECHO proposal: cultural heritage is nothing without the humanities research which contextualizes and gives it comparative assessment; by marrying information technology to humanities research, we can bring these materials to a wider public while simultaneously boosting Europe as a research area.
  • Levinson, S. C., Kita, S., & Enfield, N. J. (2001). Locally-anchored narrative. In S. C. Levinson, & N. J. Enfield (Eds.), Manual for the field season 2001 (pp. 147). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.874660.

    Abstract

    As for 'Locally-anchored spatial gestures task, version 2', a major goal of this task is to elicit locally-anchored spatial gestures across different cultures. “Locally-anchored spatial gestures” are gestures that are roughly oriented to the actual geographical direction of referents. Rather than set up an interview situation, this task involves recording informal, animated narrative delivered to a native-speaker interlocutor. Locally-anchored gestures produced in such narrative are roughly comparable to those collected in the interview task. The data collected can also be used to investigate a wide range of other topics.
  • Levinson, S. C. (2019). Interactional foundations of language: The interaction engine hypothesis. In P. Hagoort (Ed.), Human language: From genes and brain to behavior (pp. 189-200). Cambridge, MA: MIT Press.
  • Levinson, S. C. (1996). Introduction to part II. In J. J. Gumperz, & S. C. Levinson (Eds.), Rethinking linguistic relativity (pp. 133-144). Cambridge: Cambridge University Press.
  • Levinson, S. C. (2019). Natural forms of purposeful interaction among humans: What makes interaction effective? In K. A. Gluck, & J. E. Laird (Eds.), Interactive task learning: Humans, robots, and agents acquiring new tasks through natural interactions (pp. 111-126). Cambridge, MA: MIT Press.
  • Levinson, S. C. (1996). Relativity in spatial conception and description. In J. J. Gumperz, & S. C. Levinson (Eds.), Rethinking linguistic relativity (pp. 177-202). Cambridge University Press.
  • Levinson, S. C. (2001). Space: Linguistic expression. In N. Smelser, & P. Baltes (Eds.), International Encyclopedia of Social and Behavioral Sciences: Vol. 22 (pp. 14749-14752). Oxford: Pergamon.
  • Levinson, S. C. (2001). Place and space in the sculpture of Anthony Gormley - An anthropological perspective. In S. D. McElroy (Ed.), Some of the facts (pp. 68-109). St Ives: Tate Gallery.
  • Levinson, S. C. (2001). Pragmatics. In N. Smelser, & P. Baltes (Eds.), International Encyclopedia of Social and Behavioral Sciences: Vol. 17 (pp. 11948-11954). Oxford: Pergamon.
  • Levinson, S. C., & Enfield, N. J. (2001). Preface and priorities. In S. C. Levinson, & N. J. Enfield (Eds.), Manual for the field season 2001 (pp. 3). Nijmegen: Max Planck Institute for Psycholinguistics.
  • Levinson, S. C., & Annamalai, E. (1992). Why presuppositions aren't conventional. In R. N. Srivastava (Ed.), Language and text: Studies in honour of Ashok R. Kelkar (pp. 227-242). Delhi: Kalinga Publications.
  • Levinson, S. C., & Senft, G. (1996). Zur Semantik der Verben INTRARE und EXIRE in verschiedenen Sprachen. In Jahrbuch der Max-Planck-Gesellschaft 1996 (pp. 340-344). München: Generalverwaltung der Max-Planck-Gesellschaft München.
  • Liu, S., & Zhang, Y. (2019). Why some verbs are harder to learn than others – A micro-level analysis of everyday learning contexts for early verb learning. In A. K. Goel, C. M. Seifert, & C. Freksa (Eds.), Proceedings of the 41st Annual Meeting of the Cognitive Science Society (CogSci 2019) (pp. 2173-2178). Montreal, QC: Cognitive Science Society.

    Abstract

    Verb learning is important for young children. While most previous research has focused on linguistic and conceptual challenges in early verb learning (e.g. Gentner, 1982, 2006), the present paper examined early verb learning at the attentional level and quantified the input for early verb learning by measuring verb-action co-occurrence statistics in parent-child interaction from the learner’s perspective. To do so, we used head-mounted eye tracking to record fine-grained multimodal behaviors during parent-infant joint play, and analyzed parent speech, parent and infant action, and infant attention at the moments when parents produced verb labels. Our results show great variability across different action verbs, in terms of frequency of verb utterances, frequency of corresponding actions related to verb meanings, and infants’ attention to verbs and actions, which provides new insights on why some verbs are harder to learn than others.
  • Mai, F., Galke, L., & Scherp, A. (2019). CBOW is not all you need: Combining CBOW with the compositional matrix space model. In Proceedings of the Seventh International Conference on Learning Representations (ICLR 2019). OpenReview.net.

    Abstract

    Continuous Bag of Words (CBOW) is a powerful text embedding method. Due to its strong capabilities to encode word content, CBOW embeddings perform well on a wide range of downstream tasks while being efficient to compute. However, CBOW is not capable of capturing the word order. The reason is that the computation of CBOW's word embeddings is commutative, i.e., embeddings of XYZ and ZYX are the same. In order to address this shortcoming, we propose a learning algorithm for the Compositional Matrix Space Model, which we call Continual Multiplication of Words (CMOW). Our algorithm is an adaptation of word2vec, so that it can be trained on large quantities of unlabeled text. We empirically show that CMOW better captures linguistic properties, but it is inferior to CBOW in memorizing word content. Motivated by these findings, we propose a hybrid model that combines the strengths of CBOW and CMOW. Our results show that the hybrid CBOW-CMOW model retains CBOW's strong ability to memorize word content while at the same time substantially improving its ability to encode other linguistic information by 8%. As a result, the hybrid also performs better on 8 out of 11 supervised downstream tasks with an average improvement of 1.2%.
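    The commutativity point in the abstract can be illustrated with a minimal sketch (toy random embeddings invented for illustration, not the paper's trained models): composing word vectors by summation (CBOW-style) loses order, while composing word matrices by multiplication (CMOW-style) preserves it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy lexicon: each word gets a vector (for CBOW-style
# addition) and a square matrix (for CMOW-style multiplication).
words = ["X", "Y", "Z"]
vecs = {w: rng.normal(size=4) for w in words}
mats = {w: rng.normal(size=(4, 4)) for w in words}

def cbow(seq):
    # CBOW composes by summation, which is commutative:
    # the result is the same for any word order.
    return sum(vecs[w] for w in seq)

def cmow(seq):
    # CMOW composes by matrix multiplication, which is
    # order-sensitive for generic (non-commuting) matrices.
    out = np.eye(4)
    for w in seq:
        out = out @ mats[w]
    return out

print(np.allclose(cbow("XYZ"), cbow("ZYX")))  # True: order is lost
print(np.allclose(cmow("XYZ"), cmow("ZYX")))  # False: order is kept
```

    This is only a sketch of the algebraic property; in the paper both models are trained word2vec-style on unlabeled text.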
  • Majid, A. (2019). Preface. In L. J. Speed, C. O'Meara, L. San Roque, & A. Majid (Eds.), Perception Metaphors (pp. vii-viii). Amsterdam: Benjamins.
  • Mamus, E., Rissman, L., Majid, A., & Ozyurek, A. (2019). Effects of blindfolding on verbal and gestural expression of path in auditory motion events. In A. K. Goel, C. M. Seifert, & C. C. Freksa (Eds.), Proceedings of the 41st Annual Meeting of the Cognitive Science Society (CogSci 2019) (pp. 2275-2281). Montreal, QC: Cognitive Science Society.

    Abstract

    Studies have claimed that blind people's spatial representations are different from sighted people's, and that blind people display superior auditory processing. Due to the nature of auditory and haptic information, it has been proposed that blind people have spatial representations that are more sequential than sighted people's. Even the temporary loss of sight—such as through blindfolding—can affect spatial representations, but not much research has been done on this topic. We compared blindfolded and sighted people's linguistic spatial expressions and non-linguistic localization accuracy to test how blindfolding affects the representation of path in auditory motion events. We found that blindfolded people were as good as sighted people when localizing simple sounds, but they outperformed sighted people when localizing auditory motion events. Blindfolded people's path-related speech also included more sequential, and less holistic, elements. Our results indicate that even temporary loss of sight influences spatial representations of auditory motion events.
  • Marcoux, K., & Ernestus, M. (2019). Differences between native and non-native Lombard speech in terms of pitch range. In M. Ochmann, M. Vorländer, & J. Fels (Eds.), Proceedings of the ICA 2019 and EAA Euroregio. 23rd International Congress on Acoustics, integrating 4th EAA Euroregio 2019 (pp. 5713-5720). Berlin: Deutsche Gesellschaft für Akustik.

    Abstract

    Lombard speech, speech produced in noise, is acoustically different from speech produced in quiet (plain speech) in several ways, including having a higher and wider F0 range (pitch). Extensive research on native Lombard speech does not consider that non-natives experience a higher cognitive load while producing speech and that the native language may influence the non-native speech. We investigated pitch range in plain and Lombard speech in native and non-native speakers. Dutch and American-English speakers read contrastive question-answer pairs in quiet and in noise in English, while the Dutch also read Dutch sentence pairs. We found that Lombard speech is characterized by a wider pitch range than plain speech, for all speakers (native English, non-native English, and native Dutch). This shows that non-natives also widen their pitch range in Lombard speech. In sentences with early focus, we see the same increase in pitch range when going from plain to Lombard speech in native and non-native English, but a smaller increase in native Dutch. In sentences with late focus, we see the biggest increase for the native English, followed by non-native English and then native Dutch. Together these results indicate an effect of the native language on non-native Lombard speech.
  • Marcoux, K., & Ernestus, M. (2019). Pitch in native and non-native Lombard speech. In S. Calhoun, P. Escudero, M. Tabain, & P. Warren (Eds.), Proceedings of the 19th International Congress of Phonetic Sciences (ICPhS 2019) (pp. 2605-2609). Canberra, Australia: Australasian Speech Science and Technology Association Inc.

    Abstract

    Lombard speech, speech produced in noise, is typically produced with a higher fundamental frequency (F0, pitch) compared to speech in quiet. This paper examined the potential differences in native and non-native Lombard speech by analyzing median pitch in sentences with early- or late-focus produced in quiet and noise. We found an increase in pitch in late-focus sentences in noise for Dutch speakers in both English and Dutch, and for American-English speakers in English. These results show that non-native speakers produce Lombard speech, despite their higher cognitive load. For the early-focus sentences, we found a difference between the Dutch and the American-English speakers. Whereas the Dutch showed an increased F0 in noise in English and Dutch, the American-English speakers did not in English. Together, these results suggest that some acoustic characteristics of Lombard speech, such as pitch, may be language-specific, potentially resulting in the native language influencing the non-native Lombard speech.
  • McDonough, L., Choi, S., Bowerman, M., & Mandler, J. M. (1998). The use of preferential looking as a measure of semantic development. In C. Rovee-Collier, L. P. Lipsitt, & H. Hayne (Eds.), Advances in Infancy Research. Volume 12. (pp. 336-354). Stamford, CT: Ablex Publishing.
  • McQueen, J. M., Norris, D., & Cutler, A. (2001). Can lexical knowledge modulate prelexical representations over time? In R. Smits, J. Kingston, T. Neary, & R. Zondervan (Eds.), Proceedings of the workshop on Speech Recognition as Pattern Classification (SPRAAC) (pp. 145-150). Nijmegen: Max Planck Institute for Psycholinguistics.

    Abstract

    The results of a study on perceptual learning are reported. Dutch subjects made lexical decisions on a list of words and nonwords. Embedded in the list were either [f]- or [s]-final words in which the final fricative had been replaced by an ambiguous sound, midway between [f] and [s]. One group of listeners heard ambiguous [f]-final Dutch words like [kara?] (based on karaf, carafe) and unambiguous [s]-final words (e.g., karkas, carcase). A second group heard the reverse (e.g., ambiguous [karka?] and unambiguous karaf). After this training phase, listeners labelled ambiguous fricatives on an [f]-[s] continuum. The subjects who had heard [?] in [f]-final words categorised these fricatives as [f] reliably more often than those who had heard [?] in [s]-final words. These results suggest that speech recognition is dynamic: the system adjusts to the constraints of each particular listening situation. The lexicon can provide this adjustment process with a training signal.
  • McQueen, J. M., & Cutler, A. (1998). Morphology in word recognition. In A. M. Zwicky, & A. Spencer (Eds.), The handbook of morphology (pp. 406-427). Oxford: Blackwell.
  • McQueen, J. M., & Meyer, A. S. (2019). Key issues and future directions: Towards a comprehensive cognitive architecture for language use. In P. Hagoort (Ed.), Human language: From genes and brain to behavior (pp. 85-96). Cambridge, MA: MIT Press.
  • McQueen, J. M., & Cutler, A. (1998). Spotting (different kinds of) words in (different kinds of) context. In R. Mannell, & J. Robert-Ribes (Eds.), Proceedings of the Fifth International Conference on Spoken Language Processing: Vol. 6 (pp. 2791-2794). Sydney: ICSLP.

    Abstract

    The results of a word-spotting experiment are presented in which Dutch listeners tried to spot different types of bisyllabic Dutch words embedded in different types of nonsense contexts. Embedded verbs were not reliably harder to spot than embedded nouns; this suggests that nouns and verbs are recognised via the same basic processes. Iambic words were no harder to spot than trochaic words, suggesting that trochaic words are not in principle easier to recognise than iambic words. Words were harder to spot in consonantal contexts (i.e., contexts which themselves could not be words) than in longer contexts which contained at least one vowel (i.e., contexts which, though not words, were possible words of Dutch). A control experiment showed that this difference was not due to acoustic differences between the words in each context. The results support the claim that spoken-word recognition is sensitive to the viability of sound sequences as possible words.
  • McQueen, J. M., & Cutler, A. (1992). Words within words: Lexical statistics and lexical access. In J. Ohala, T. Neary, & B. Derwing (Eds.), Proceedings of the Second International Conference on Spoken Language Processing: Vol. 1 (pp. 221-224). Alberta: University of Alberta.

    Abstract

    This paper presents lexical statistics on the pattern of occurrence of words embedded in other words. We report the results of an analysis of 25000 words, varying in length from two to six syllables, extracted from a phonetically-coded English dictionary (The Longman Dictionary of Contemporary English). Each syllable, and each string of syllables within each word was checked against the dictionary. Two analyses are presented: the first used a complete list of polysyllables, with look-up on the entire dictionary; the second used a sublist of content words, counting only embedded words which were themselves content words. The results have important implications for models of human speech recognition. The efficiency of these models depends, in different ways, on the number and location of words within words.
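    The kind of dictionary look-up the abstract describes can be sketched as follows (a toy illustration with an invented mini-lexicon of syllabified entries, not the authors' Longman-based procedure): every contiguous syllable substring of a word is checked against the dictionary, and matches other than the word itself count as embedded words.

```python
def embedded_words(syllables, lexicon):
    """Return every contiguous syllable substring that is itself a word,
    excluding the carrier word itself."""
    found = []
    n = len(syllables)
    for i in range(n):
        for j in range(i + 1, n + 1):
            chunk = tuple(syllables[i:j])
            if chunk in lexicon and chunk != tuple(syllables):
                found.append(chunk)
    return found

# Invented mini-lexicon, keyed by syllable sequences.
lexicon = {("cat",), ("log",), ("cat", "a", "log")}
print(embedded_words(["cat", "a", "log"], lexicon))
# → [('cat',), ('log',)]
```

    The cost of a full analysis scales with the number of words times the square of their syllable counts, which is why the paper's 25000-word runs distinguish a full-dictionary pass from a content-word-only pass.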
  • Meira, S., & Levinson, S. C. (2001). Topological tasks: General introduction. In S. C. Levinson, & N. J. Enfield (Eds.), Manual for the field season 2001 (pp. 29-51). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.874665.
  • Merkx, D., Frank, S., & Ernestus, M. (2019). Language learning using speech to image retrieval. In Proceedings of Interspeech 2019 (pp. 1841-1845). doi:10.21437/Interspeech.2019-3067.

    Abstract

    Humans learn language by interaction with their environment and listening to other humans. It should also be possible for computational models to learn language directly from speech but so far most approaches require text. We improve on existing neural network approaches to create visually grounded embeddings for spoken utterances. Using a combination of a multi-layer GRU, importance sampling, cyclic learning rates, ensembling and vectorial self-attention our results show a remarkable increase in image-caption retrieval performance over previous work. Furthermore, we investigate which layers in the model learn to recognise words in the input. We find that deeper network layers are better at encoding word presence, although the final layer has slightly lower performance. This shows that our visually grounded sentence encoder learns to recognise words from the input even though it is not explicitly trained for word recognition.
  • Moisik, S. R., Zhi Yun, D. P., & Dediu, D. (2019). Active adjustment of the cervical spine during pitch production compensates for shape: The ArtiVarK study. In S. Calhoun, P. Escudero, M. Tabain, & P. Warren (Eds.), Proceedings of the 19th International Congress of Phonetic Sciences (ICPhS 2019) (pp. 864-868). Canberra, Australia: Australasian Speech Science and Technology Association Inc.

    Abstract

    The anterior lordosis of the cervical spine is thought to contribute to pitch (fo) production by influencing cricoid rotation as a function of larynx height. This study examines the matter of inter-individual variation in cervical spine shape and whether this has an influence on how fo is produced along increasing or decreasing scales, using the ArtiVarK dataset, which contains real-time MRI pitch production data. We find that the cervical spine actively participates in fo production, but the amount of displacement depends on individual shape. In general, anterior spine motion (tending toward cervical lordosis) occurs for low fo, while posterior movement (tending towards cervical kyphosis) occurs for high fo.
  • Moore, R. K., & Cutler, A. (2001). Constraints on theories of human vs. machine recognition of speech. In R. Smits, J. Kingston, T. Neary, & R. Zondervan (Eds.), Proceedings of the workshop on Speech Recognition as Pattern Classification (SPRAAC) (pp. 145-150). Nijmegen: Max Planck Institute for Psycholinguistics.

    Abstract

    The central issues in the study of speech recognition by human listeners (HSR) and of automatic speech recognition (ASR) are clearly comparable; nevertheless the research communities that concern themselves with ASR and HSR are largely distinct. This paper compares the research objectives of the two fields, and attempts to draw informative lessons from one to the other.
  • Nijveld, A., Ten Bosch, L., & Ernestus, M. (2019). ERP signal analysis with temporal resolution using a time window bank. In Proceedings of Interspeech 2019 (pp. 1208-1212). doi:10.21437/Interspeech.2019-2729.

    Abstract

    In order to study the cognitive processes underlying speech comprehension, neuro-physiological measures (e.g., EEG and MEG), or behavioural measures (e.g., reaction times and response accuracy) can be applied. Compared to behavioural measures, EEG signals can provide a more fine-grained and complementary view of the processes that take place during the unfolding of an auditory stimulus.

    EEG signals are often analysed after having chosen specific time windows, which are usually based on the temporal structure of ERP components expected to be sensitive to the experimental manipulation. However, as the timing of ERP components may vary between experiments, trials, and participants, such a-priori defined analysis time windows may significantly hamper the exploratory power of the analysis of components of interest. In this paper, we explore a wide-window analysis method applied to EEG signals collected in an auditory repetition priming experiment.

    This approach is based on a bank of temporal filters arranged along the time axis in combination with linear mixed effects modelling. Crucially, it permits a temporal decomposition of effects in a single comprehensive statistical model which captures the entire EEG trace.
  • Noordman, L. G., & Vonk, W. (1998). Discourse comprehension. In A. D. Friederici (Ed.), Language comprehension: a biological perspective (pp. 229-262). Berlin: Springer.

    Abstract

    The human language processor is conceived as a system that consists of several interrelated subsystems. Each subsystem performs a specific task in the complex process of language comprehension and production. A subsystem receives a particular input, performs certain specific operations on this input and yields a particular output. The subsystems can be characterized in terms of the transformations that relate the input representations to the output representations. An important issue in describing the language processing system is to identify the subsystems and to specify the relations between the subsystems. These relations can be conceived in two different ways. In one conception the subsystems are autonomous. They are related to each other only by the input-output channels. The operations in one subsystem are not affected by another system. The subsystems are modular, that is they are independent. In the other conception, the different subsystems influence each other. A subsystem affects the processes in another subsystem. In this conception there is an interaction between the subsystems.
  • Norris, D., Van Ooijen, B., & Cutler, A. (1992). Speeded detection of vowels and steady-state consonants. In J. Ohala, T. Neary, & B. Derwing (Eds.), Proceedings of the Second International Conference on Spoken Language Processing: Vol. 2 (pp. 1055-1058). Alberta: University of Alberta.

    Abstract

    We report two experiments in which vowels and steady-state consonants served as targets in a speeded detection task. In the first experiment, two vowels were compared with one voiced and one unvoiced fricative. Response times (RTs) to the vowels were longer than to the fricatives. The error rate was higher for the consonants. Consonants in word-final position produced the shortest RTs. For the vowels, RT correlated negatively with target duration. In the second experiment, the same two vowel targets were compared with two nasals. This time there was no significant difference in RTs, but the error rate was still significantly higher for the consonants. Error rate and length correlated negatively for the vowels only. We conclude that RT differences between phonemes are independent of vocalic or consonantal status. Instead, we argue that the process of phoneme detection reflects more finely grained differences in acoustic/articulatory structure within the phonemic repertoire.
  • O'Meara, C., Speed, L. J., San Roque, L., & Majid, A. (2019). Perception Metaphors: A view from diversity. In L. J. Speed, C. O'Meara, L. San Roque, & A. Majid (Eds.), Perception Metaphors (pp. 1-16). Amsterdam: Benjamins.

    Abstract

    Our bodily experiences play an important role in the way that we think and speak. Abstract language is, however, difficult to reconcile with this body-centred view, unless we appreciate the role metaphors play. To explore the role of the senses across semantic domains, we focus on perception metaphors, and examine their realisation across diverse languages, methods, and approaches. To what extent do mappings in perception metaphor adhere to predictions based on our biological propensities; and to what extent is there space for cross-linguistic and cross-cultural variation? We find that while some metaphors have widespread commonality, there is more diversity attested than should be comfortable for universalist accounts.
  • Otake, T., & Cutler, A. (2001). Recognition of (almost) spoken words: Evidence from word play in Japanese. In P. Dalsgaard (Ed.), Proceedings of EUROSPEECH 2001 (pp. 465-468).

    Abstract

    Current models of spoken-word recognition assume automatic activation of multiple candidate words fully or partially compatible with the speech input. We propose that listeners make use of this concurrent activation in word play such as punning. Distortion in punning should ideally involve no more than a minimal contrastive deviation between two words, namely a phoneme. Moreover, we propose that this metric of similarity does not presuppose phonemic awareness on the part of the punster. We support these claims with an analysis of modern and traditional puns in Japanese (in which phonemic awareness in language users is not encouraged by alphabetic orthography). For both data sets, the results support the predictions. Punning draws on basic processes of spokenword recognition, common across languages.
  • Ozyurek, A. (1998). An analysis of the basic meaning of Turkish demonstratives in face-to-face conversational interaction. In S. Santi, I. Guaitella, C. Cave, & G. Konopczynski (Eds.), Oralite et gestualite: Communication multimodale, interaction: actes du colloque ORAGE 98 (pp. 609-614). Paris: L'Harmattan.
  • Ozyurek, A., & Woll, B. (2019). Language in the visual modality: Cospeech gesture and sign language. In P. Hagoort (Ed.), Human language: From genes and brain to behavior (pp. 67-83). Cambridge, MA: MIT Press.
  • Ozyurek, A. (2001). What do speech-gesture mismatches reveal about language specific processing? A comparison of Turkish and English. In C. Cavé, I. Guaitella, & S. Santi (Eds.), Oralité et gestualité: Interactions et comportements multimodaux dans la communication: Actes du Colloque ORAGE 2001 (pp. 567-581). Paris: L'Harmattan.
  • Parhammer*, S. I., Ebersberg*, M., Tippmann*, J., Stärk*, K., Opitz, A., Hinger, B., & Rossi, S. (2019). The influence of distraction on speech processing: How selective is selective attention? In Proceedings of Interspeech 2019 (pp. 3093-3097). doi:10.21437/Interspeech.2019-2699.

    Abstract

    (* indicates shared first authorship)
    The present study investigated the effects of selective attention on the processing of morphosyntactic errors in unattended parts of speech. Two groups of German native (L1) speakers participated in the present study. Participants listened to sentences in which irregular verbs were manipulated in three different conditions (correct, incorrect but attested ablaut pattern, incorrect and crosslinguistically unattested ablaut pattern). In order to track fast dynamic neural reactions to the stimuli, electroencephalography was used. After each sentence, participants in Experiment 1 performed a semantic judgement task, which deliberately distracted the participants from the syntactic manipulations and directed their attention to the semantic content of the sentence. In Experiment 2, participants carried out a syntactic judgement task, which put their attention on the critical stimuli. The use of two different attentional tasks allowed for investigating the impact of selective attention on speech processing and whether morphosyntactic processing steps are performed automatically. In Experiment 2, the incorrect attested condition elicited a larger N400 component compared to the correct condition, whereas in Experiment 1 no differences between conditions were found. These results suggest that the processing of morphosyntactic violations in irregular verbs is not entirely automatic but seems to be strongly affected by selective attention.