Publications

  • Levelt, W. J. M. (1987). Hochleistung in Millisekunden - Sprechen und Sprache verstehen. In Jahrbuch der Max-Planck-Gesellschaft (pp. 61-77). Göttingen: Vandenhoeck & Ruprecht.
  • Levelt, W. J. M. (1991). Lexical access in speech production: Stages versus cascading. In H. Peters, W. Hulstijn, & C. Starkweather (Eds.), Speech motor control and stuttering (pp. 3-10). Amsterdam: Excerpta Medica.
  • Levelt, W. J. M., & d'Arcais, F. (1987). Snelheid en uniciteit bij lexicale toegang. In H. Crombag, L. Van der Kamp, & C. Vlek (Eds.), De psychologie voorbij: Ontwikkelingen rond model, metriek en methode in de gedragswetenschappen (pp. 55-68). Lisse: Swets & Zeitlinger.
  • Levelt, W. J. M., & Schriefers, H. (1987). Stages of lexical access. In G. A. Kempen (Ed.), Natural language generation: new results in artificial intelligence, psychology and linguistics (pp. 395-404). Dordrecht: Nijhoff.
  • Levelt, W. J. M. (1986). Zur sprachlichen Abbildung des Raumes: Deiktische und intrinsische Perspektive. In H. Bosshardt (Ed.), Perspektiven auf Sprache. Interdisziplinäre Beiträge zum Gedenken an Hans Hörmann (pp. 187-211). Berlin: De Gruyter.
  • Levinson, S. C. (2004). Significados presumibles [Spanish translation of Presumptive meanings]. Madrid: Bibliotheca Románica Hispánica.
  • Levinson, S. C. (2007). Pragmática [Portuguese translation of 'Pragmatics', 1983]. São Paulo: Martins Fontes Editora.

    Abstract

    The purpose of this book is to provide some indication of the scope of linguistic pragmatics. First, the historical origin of the term pragmatics will be briefly summarized, in order to indicate some usages of the term that diverge from the usage in this book. Second, we will review some definitions of the field, which, while less than fully satisfactory, will at least serve to indicate the rough scope of linguistic pragmatics. Third, some reasons for the current interest in the field will be explained, while a final section illustrates some basic kinds of pragmatic phenomena. In passing, some analytical notions that are useful background will be introduced.
  • Levinson, S. C. (2007). Optimizing person reference - perspectives from usage on Rossel Island. In N. Enfield, & T. Stivers (Eds.), Person reference in interaction: Linguistic, cultural, and social perspectives (pp. 29-72). Cambridge: Cambridge University Press.

    Abstract

    This chapter explicates the requirement in person-reference for balancing demands for recognition, minimization, explicitness and indirection. This is illustrated with reference to data from repair of failures of person-reference within a particular linguistic/cultural context, namely casual interaction among Rossel Islanders. Rossel Island (PNG) offers a ‘natural experiment’ for studying aspects of person reference, because of a number of special properties: 1. It is a closed universe of 4000 souls, sharing one kinship network, so in principle anyone could be recognizable from a reference. As a result no (complex) descriptions (cf. ‘the author of Waverley’) are employed. 2. Names, however, are never uniquely referring, since they are drawn from a fixed pool. They are only used for about 25% of initial references, another 25% of initial references being done by kinship triangulation (‘that man’s father-in-law’). Nearly 50% of initial references are semantically underspecified or vague (e.g. ‘that girl’). 3. There are systematic motivations for oblique reference, e.g. kinship-based taboos and other constraints, which partly account for the underspecified references. The ‘natural experiment’ thus reveals some general lessons about how person-reference requires optimizing multiple conflicting constraints. Comparison with Sacks and Schegloff’s (1979) treatment of English person reference suggests a way to tease apart the universal and the culturally-particular.
  • Levinson, S. C. (1997). Contextualizing 'contextualization cues'. In S. Eerdmans, C. Prevignano, & P. Thibault (Eds.), Discussing communication analysis 1: John J. Gumperz (pp. 24-30). Lausanne: Beta Press.
  • Levinson, S. C. (1997). Deixis. In P. V. Lamarque (Ed.), Concise encyclopedia of philosophy of language (pp. 214-219). Oxford: Elsevier.
  • Levinson, S. C. (1998). Deixis. In J. L. Mey (Ed.), Concise encyclopedia of pragmatics (pp. 200-204). Amsterdam: Elsevier.
  • Levinson, S. C. (1991). Deixis. In W. Bright (Ed.), Oxford international encyclopedia of linguistics (pp. 343-344). Oxford University Press.
  • Levinson, S. C., Cutfield, S., Dunn, M., Enfield, N. J., & Meira, S. (Eds.). (2018). Demonstratives in cross-linguistic perspective. Cambridge: Cambridge University Press.

    Abstract

    Demonstratives play a crucial role in the acquisition and use of language. Bringing together a team of leading scholars this detailed study, a first of its kind, explores meaning and use across fifteen typologically and geographically unrelated languages to find out what cross-linguistic comparisons and generalizations can be made, and how this might challenge current theory in linguistics, psychology, anthropology and philosophy. Using a shared experimental task, rounded out with studies of natural language use, specialists in each of the languages undertook extensive fieldwork for this comparative study of semantics and usage. An introduction summarizes the shared patterns and divergences in meaning and use that emerge.
  • Levinson, S. C., Senft, G., & Majid, A. (2007). Emotion categories in language and thought. In A. Majid (Ed.), Field Manual Volume 10 (pp. 46-52). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.492892.
  • Levinson, S. C. (2004). Deixis. In L. Horn (Ed.), The handbook of pragmatics (pp. 97-121). Oxford: Blackwell.
  • Levinson, S. C. (2007). Imi no suitei [Japanese translation of 'Presumptive meanings', 2000]. Tokyo: Kenkyusha.

    Abstract

    When we speak, we mean more than we say. In this book, the author explains some general processes that underlie presumptions in communication. This is the first extended discussion of preferred interpretation in language understanding, integrating much of the best research in linguistic pragmatics from the last two decades. Levinson outlines a theory of presumptive meanings, or preferred interpretations, governing the use of language, building on the idea of implicature developed by the philosopher H. P. Grice. Some of the indirect information carried by speech is presumed by default because it is carried by general principles, rather than inferred from specific assumptions about intention and context. Levinson examines this class of general pragmatic inferences in detail, showing how they apply to a wide range of linguistic constructions. This approach has radical consequences for how we think about language and communication.
  • Levinson, S. C. (1997). From outer to inner space: Linguistic categories and non-linguistic thinking. In J. Nuyts, & E. Pederson (Eds.), Language and conceptualization (pp. 13-45). Cambridge: Cambridge University Press.
  • Levinson, S. C. (1998). Minimization and conversational inference. In A. Kasher (Ed.), Pragmatics: Vol. 4 Presupposition, implicature and indirect speech acts (pp. 545-612). London: Routledge.
  • Levinson, S. C. (1987). Minimization and conversational inference. In M. Bertuccelli Papi, & J. Verschueren (Eds.), The pragmatic perspective: Selected papers from the 1985 International Pragmatics Conference (pp. 61-129). Benjamins.
  • Levinson, S. C., Majid, A., & Enfield, N. J. (2007). Language of perception: The view from language and culture. In A. Majid (Ed.), Field Manual Volume 10 (pp. 10-21). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.468738.
  • Levinson, S. C. (2017). Living with Manny's dangerous idea. In G. Raymond, G. H. Lerner, & J. Heritage (Eds.), Enabling human conduct: Studies of talk-in-interaction in honor of Emanuel A. Schegloff (pp. 327-349). Amsterdam: Benjamins.
  • Levinson, S. C. (2018). Introduction: Demonstratives: Patterns in diversity. In S. C. Levinson, S. Cutfield, M. Dunn, N. J. Enfield, & S. Meira (Eds.), Demonstratives in cross-linguistic perspective (pp. 1-42). Cambridge: Cambridge University Press.
  • Levinson, S. C. (2017). Speech acts. In Y. Huang (Ed.), Oxford handbook of pragmatics (pp. 199-216). Oxford: Oxford University Press. doi:10.1093/oxfordhb/9780199697960.013.22.

    Abstract

    The essential insight of speech act theory was that when we use language, we perform actions—in a more modern parlance, core language use in interaction is a form of joint action. Over the last thirty years, speech acts have been relatively neglected in linguistic pragmatics, although important work has been done especially in conversation analysis. Here we review the core issues—the identifying characteristics, the degree of universality, the problem of multiple functions, and the puzzle of speech act recognition. Special attention is drawn to the role of conversation structure, probabilistic linguistic cues, and plan or sequence inference in speech act recognition, and to the centrality of deep recursive structures in sequences of speech acts in conversation.

  • Levinson, S. C., Pederson, E., & Senft, G. (1997). Sprache und menschliche Orientierungsfähigkeiten. In Jahrbuch der Max-Planck-Gesellschaft (pp. 322-327). München: Generalverwaltung der Max-Planck-Gesellschaft.
  • Levinson, S. C., & Majid, A. (2007). The language of sound. In A. Majid (Ed.), Field Manual Volume 10 (pp. 29-31). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.468735.
  • Levinson, S. C., & Majid, A. (2007). The language of vision II: Shape. In A. Majid (Ed.), Field Manual Volume 10 (pp. 26-28). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.468732.
  • Levinson, S. C. (2018). Yélî Dnye: Demonstratives in the language of Rossel Island, Papua New Guinea. In S. C. Levinson, S. Cutfield, M. Dunn, N. J. Enfield, & S. Meira (Eds.), Demonstratives in cross-linguistic perspective (pp. 318-342). Cambridge: Cambridge University Press.
  • Lindström, E., Terrill, A., Reesink, G., & Dunn, M. (2007). The languages of Island Melanesia. In J. S. Friedlaender (Ed.), Genes, language, and culture history in the Southwest Pacific (pp. 118-140). Oxford: Oxford University Press.

    Abstract

    This chapter provides an overview of the Papuan and the Oceanic languages (a branch of Austronesian) in Northern Island Melanesia, as well as phenomena arising through contact between these groups. It shows how the methods of linguistics can contribute to the understanding of the history of languages and speakers, and what the findings of those methods have been. The location of the homeland of speakers of Proto-Oceanic is indicated (in northeast New Britain); many facets of the lives of those speakers are shown; and the patterns of their subsequent spread across Island Melanesia and beyond into Remote Oceania are indicated, followed by a second wave overlaying the first into New Guinea and as far as halfway through the Solomon Islands. Regarding the Papuan languages of this region, at least some are older than the 6,000-10,000 year ceiling of the Comparative Method, and their relations are explored with the aid of a database of 125 non-lexical structural features. The results reflect archipelago-based clustering, with the Central Solomons Papuan languages forming a clade either with the Bismarcks or with Bougainville languages. Papuan languages in Bougainville are less influenced by Oceanic languages than those in the Bismarcks and the Solomons. The chapter considers a variety of scenarios to account for these findings, concluding that the results are compatible with multiple pre-Oceanic waves of arrivals into the area after initial settlement.
  • Lindström, E. (2004). Melanesian kinship and culture. In A. Majid (Ed.), Field Manual Volume 9 (pp. 70-73). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.1552190.
  • Liszkowski, U. (2007). Human twelve-month-olds point cooperatively to share interest with and helpfully provide information for a communicative partner. In K. Liebal, C. Müller, & S. Pika (Eds.), Gestural communication in nonhuman and human primates (pp. 124-140). Amsterdam: Benjamins.

    Abstract

    This paper investigates infant pointing at 12 months. Three recent experimental studies from our lab are reported and contrasted with existing accounts of infant communicative and social-cognitive abilities. The new results show that infant pointing at 12 months is already a communicative act which involves the intentional transmission of information to share interest with, or provide information for, other persons. It is argued that infant pointing is an inherently social and cooperative act which is used to share psychological relations between interlocutors and environment, repairs misunderstandings in proto-conversational turn-taking, and helps others by providing information. Infant pointing builds on an understanding of others as persons with attentional states and attitudes. The findings do not support lean accounts of early infant pointing which posit that it is initially non-communicative, does not serve the function of indicating, or is purely self-centered. It is suggested that the emergence of reference and the motivation to engage jointly with others should also be investigated before pointing has emerged.
  • Liszkowski, U., & Brown, P. (2007). Infant pointing (9-15 months) in different cultures. In A. Majid (Ed.), Field Manual Volume 10 (pp. 82-88). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.492895.

    Abstract

    There are two tasks for conducting systematic observation of child-caregiver joint attention interactions: Task 1, a “decorated room” designed to elicit infant and caregiver pointing; and Task 2, videotaped interviews about infant pointing behaviour. The goal of these tasks is to document the ontogenetic emergence of referential communication in caregiver-infant interaction in different cultures, during the critical age of 8-15 months when children come to understand and share others’ intentions. This is of interest to all students of interaction and human communication; it does not require specialist knowledge of children.
  • Little, H., Perlman, M., & Eryilmaz, K. (2017). Repeated interactions can lead to more iconic signals. In G. Gunzelmann, A. Howes, T. Tenbrink, & E. Davelaar (Eds.), Proceedings of the 39th Annual Conference of the Cognitive Science Society (CogSci 2017) (pp. 760-765). Austin, TX: Cognitive Science Society.

    Abstract

    Previous research has shown that repeated interactions can cause iconicity in signals to reduce. However, data from several recent studies has shown the opposite trend: an increase in iconicity as the result of repeated interactions. Here, we discuss whether signals may become less or more iconic as a result of the modality used to produce them. We review several recent experimental results before presenting new data from multi-modal signals, where visual input creates audio feedback. Our results show that the growth in iconicity present in the audio information may come at a cost to iconicity in the visual information. Our results have implications for how we think about and measure iconicity in artificial signalling experiments. Further, we discuss how iconicity in real world speech may stem from auditory, kinetic or visual information, but iconicity in these different modalities may conflict.
  • Little, H. (Ed.). (2017). Special Issue on the Emergence of Sound Systems [Special Issue]. The Journal of Language Evolution, 2(1).
  • Lopopolo, A., Frank, S. L., Van den Bosch, A., Nijhof, A., & Willems, R. M. (2018). The Narrative Brain Dataset (NBD), an fMRI dataset for the study of natural language processing in the brain. In B. Devereux, E. Shutova, & C.-R. Huang (Eds.), Proceedings of LREC 2018 Workshop "Linguistic and Neuro-Cognitive Resources (LiNCR) (pp. 8-11). Paris: LREC.

    Abstract

    We present the Narrative Brain Dataset, an fMRI dataset that was collected during spoken presentation of short excerpts of three stories in Dutch. Together with the brain imaging data, the dataset contains the written versions of the stimulation texts. The texts are accompanied with stochastic (perplexity and entropy) and semantic computational linguistic measures. The richness and unconstrained nature of the data allows the study of language processing in the brain in a more naturalistic setting than is common for fMRI studies. We hope that by making NBD available we serve the double purpose of providing useful neural data to researchers interested in natural language processing in the brain and to further stimulate data sharing in the field of neuroscience of language.
  • Lupyan, G., Wendorf, A., Berscia, L. M., & Paul, J. (2018). Core knowledge or language-augmented cognition? The case of geometric reasoning. In C. Cuskley, M. Flaherty, H. Little, L. McCrohon, A. Ravignani, & T. Verhoef (Eds.), Proceedings of the 12th International Conference on the Evolution of Language (EVOLANG XII) (pp. 252-254). Toruń, Poland: NCU Press. doi:10.12775/3991-1.062.
  • Mai, F., Galke, L., & Scherp, A. (2018). Using deep learning for title-based semantic subject indexing to reach competitive performance to full-text. In J. Chen, M. A. Gonçalves, J. M. Allen, E. A. Fox, M.-Y. Kan, & V. Petras (Eds.), JCDL '18: Proceedings of the 18th ACM/IEEE on Joint Conference on Digital Libraries (pp. 169-178). New York: ACM.

    Abstract

    For (semi-)automated subject indexing systems in digital libraries, it is often more practical to use metadata such as the title of a publication instead of the full-text or the abstract. Therefore, it is desirable to have good text mining and text classification algorithms that already operate well on the title of a publication. So far, the classification performance on titles is not competitive with the performance on the full-texts if the same number of training samples is used for training. However, it is much easier to obtain title data in large quantities and to use it for training than full-text data. In this paper, we investigate the question of how models obtained from training on increasing amounts of title training data compare to models from training on a constant number of full-texts. We evaluate this question on a large-scale dataset from the medical domain (PubMed) and from economics (EconBiz). In these datasets, the titles and annotations of millions of publications are available, and they outnumber the available full-texts by a factor of 20 and 15, respectively. To exploit these large amounts of data to their full potential, we develop three strong deep learning classifiers and evaluate their performance on the two datasets. The results are promising. On the EconBiz dataset, all three classifiers outperform their full-text counterparts by a large margin. The best title-based classifier outperforms the best full-text method by 9.4%. On the PubMed dataset, the best title-based method almost reaches the performance of the best full-text classifier, with a difference of only 2.9%.
  • Majid, A., Van Staden, M., & Enfield, N. J. (2004). The human body in cognition, brain, and typology. In K. Hovie (Ed.), Forum Handbook, 4th International Forum on Language, Brain, and Cognition - Cognition, Brain, and Typology: Toward a Synthesis (pp. 31-35). Sendai: Tohoku University.

    Abstract

    The human body is unique: it is both an object of perception and the source of human experience. Its universality makes it a perfect resource for asking questions about how cognition, brain and typology relate to one another. For example, we can ask how speakers of different languages segment and categorize the human body. A dominant view is that body parts are “given” by visual perceptual discontinuities, and that words are merely labels for these visually determined parts (e.g., Andersen, 1978; Brown, 1976; Lakoff, 1987). However, there are problems with this view. First, it ignores other perceptual information, such as somatosensory and motoric representations. By looking at the neural representations of sensory information, we can test how much of the categorization of the human body can be done through perception alone. Second, we can look at language typology to see how much universality and variation there is in body-part categories. A comparison of a range of typologically, genetically and areally diverse languages shows that the perceptual view has only limited applicability (Majid, Enfield & van Staden, in press). For example, using a “coloring-in” task, where speakers of seven different languages were given a line drawing of a human body and asked to color in various body parts, Majid & van Staden (in prep) show that languages vary substantially in body part segmentation. For example, Jahai (Mon-Khmer) makes a lexical distinction between upper arm, lower arm, and hand, but Lavukaleve (Papuan Isolate) has just one word to refer to arm, hand, and leg. This shows that body part categorization is not a straightforward mapping of words to visually determined perceptual parts.
  • Majid, A., & Enfield, N. J. (2017). Body. In H. Burkhardt, J. Seibt, G. Imaguire, & S. Gerogiorgakis (Eds.), Handbook of mereology (pp. 100-103). Munich: Philosophia.
  • Majid, A. (2018). Cultural factors shape olfactory language [Reprint]. In D. Howes (Ed.), Senses and Sensation: Critical and Primary Sources. Volume 3 (pp. 307-310). London: Bloomsbury Publishing.
  • Majid, A., & Bowerman, M. (Eds.). (2007). Cutting and breaking events: A crosslinguistic perspective [Special Issue]. Cognitive Linguistics, 18(2).

    Abstract

    This special issue of Cognitive Linguistics explores the linguistic encoding of events of cutting and breaking. In this article we first introduce the project on which it is based by motivating the selection of this conceptual domain, presenting the methods of data collection used by all the investigators, and characterizing the language sample. We then present a new approach to examining crosslinguistic similarities and differences in semantic categorization. Applying statistical modeling to the descriptions of cutting and breaking events elicited from speakers of all the languages, we show that although there is crosslinguistic variation in the number of distinctions made and in the placement of category boundaries, these differences take place within a strongly constrained semantic space: across languages, there is a surprising degree of consensus on the partitioning of events in this domain. In closing, we compare our statistical approach with more conventional semantic analyses, and show how an extensional semantic typological approach like the one illustrated here can help illuminate the intensional distinctions made by languages.
  • Majid, A., Van Staden, M., Boster, J. S., & Bowerman, M. (2004). Event categorization: A cross-linguistic perspective. In K. Forbus, D. Gentner, & T. Tegier (Eds.), Proceedings of the 26th Annual Meeting of the Cognitive Science Society (pp. 885-890). Mahwah, NJ: Erlbaum.

    Abstract

    Many studies in cognitive science address how people categorize objects, but there has been comparatively little research on event categorization. This study investigated the categorization of events involving material destruction, such as “cutting” and “breaking”. Speakers of 28 typologically, genetically, and areally diverse languages described events shown in a set of video-clips. There was considerable cross-linguistic agreement in the dimensions along which the events were distinguished, but there was variation in the number of categories and the placement of their boundaries.
  • Majid, A. (Ed.). (2007). Field manual volume 10. Nijmegen: Max Planck Institute for Psycholinguistics.
  • Majid, A. (Ed.). (2004). Field manual volume 9. Nijmegen: Max Planck Institute for Psycholinguistics.
  • Majid, A. (2018). Language and cognition. In H. Callan (Ed.), The International Encyclopedia of Anthropology. Hoboken: John Wiley & Sons Ltd.

    Abstract

    What is the relationship between the language we speak and the way we think? Researchers working at the interface of language and cognition hope to understand the complex interplay between linguistic structures and the way the mind works. This is thorny territory in anthropology and its closely allied disciplines, such as linguistics and psychology.

  • Majid, A., & Levinson, S. C. (2007). Language of perception: Overview of field tasks. In A. Majid (Ed.), Field Manual Volume 10 (pp. 8-9). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.492898.
  • Majid, A., Manko, P., & De Valk, J. (2017). Language of the senses. In S. Dekker (Ed.), Scientific breakthroughs in the classroom! (pp. 40-76). Nijmegen: Science Education Hub Radboud University.

    Abstract

    The project that we describe in this chapter has the theme ‘Language of the senses’. This theme is based on the research of Asifa Majid and her team regarding the influence of language and culture on sensory perception. The chapter consists of two sections. Section 2.1 describes how different sensory perceptions are spoken of in different languages. Teachers can use this section as substantive preparation before they launch this theme in the classroom. Section 2.2 describes how teachers can handle this theme in accordance with the seven phases of inquiry-based learning. Chapter 1, in which the general guideline of the seven phases is described, forms the basis for this. We therefore recommend the use of chapter 1 as the starting point for the execution of a project in the classroom. This chapter provides the thematic additions.

  • Majid, A., Manko, P., & de Valk, J. (2017). Taal der Zintuigen. In S. Dekker, & J. Van Baren-Nawrocka (Eds.), Wetenschappelijke doorbraken de klas in! Molecuulbotsingen, Stress en Taal der Zintuigen (pp. 128-166). Nijmegen: Wetenschapsknooppunt Radboud Universiteit.

    Abstract

    ‘Language of the senses’ is about the influence of language and culture on sensory perception. How do you describe what you see, feel, taste or smell? Some cultures have many different words for colour, while others have very few. Are we born with these different colour categories? And does how you talk about something also determine what you perceive?
  • Majid, A. (2007). Preface and priorities. In A. Majid (Ed.), Field manual volume 10 (p. 3). Nijmegen: Max Planck Institute for Psycholinguistics.
  • Majid, A., Senft, G., & Levinson, S. C. (2007). The language of olfaction. In A. Majid (Ed.), Field Manual Volume 10 (pp. 36-41). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.492910.
  • Majid, A., Senft, G., & Levinson, S. C. (2007). The language of touch. In A. Majid (Ed.), Field Manual Volume 10 (pp. 32-35). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.492907.
  • Majid, A., & Levinson, S. C. (2007). The language of vision I: colour. In A. Majid (Ed.), Field Manual Volume 10 (pp. 22-25). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.492901.
  • Malaisé, V., Gazendam, L., & Brugman, H. (2007). Disambiguating automatic semantic annotation based on a thesaurus structure. In Proceedings of TALN 2007.
  • Mamus, E., & Karadöller, D. Z. (2018). Anıları Zihinde Canlandırma [Imagery in autobiographical memories]. In S. Gülgöz, B. Ece, & S. Öner (Eds.), Hayatı Hatırlamak: Otobiyografik Belleğe Bilimsel Yaklaşımlar [Remembering Life: Scientific Approaches to Autobiographical Memory] (pp. 185-200). Istanbul, Turkey: Koç University Press.
  • Mani, N., Mishra, R. K., & Huettig, F. (Eds.). (2018). The interactive mind: Language, vision and attention. Chennai: Macmillan Publishers India.
  • Mani, N., Mishra, R. K., & Huettig, F. (2018). Introduction to 'The Interactive Mind: Language, Vision and Attention'. In N. Mani, R. K. Mishra, & F. Huettig (Eds.), The Interactive Mind: Language, Vision and Attention (pp. 1-2). Chennai: Macmillan Publishers India.
  • Maslowski, M., Meyer, A. S., & Bosker, H. R. (2017). Whether long-term tracking of speech rate affects perception depends on who is talking. In Proceedings of Interspeech 2017 (pp. 586-590). doi:10.21437/Interspeech.2017-1517.

    Abstract

    Speech rate is known to modulate perception of temporally ambiguous speech sounds. For instance, a vowel may be perceived as short when the immediate speech context is slow, but as long when the context is fast. Yet, effects of long-term tracking of speech rate are largely unexplored. Two experiments tested whether long-term tracking of rate influences perception of the temporal Dutch vowel contrast /ɑ/-/a:/. In Experiment 1, one low-rate group listened to 'neutral' rate speech from talker A and to slow speech from talker B. Another high-rate group was exposed to the same neutral speech from A, but to fast speech from B. Between-group comparison of the 'neutral' trials revealed that the low-rate group reported a higher proportion of /a:/ in A's 'neutral' speech, indicating that A sounded faster when B was slow. Experiment 2 tested whether one's own speech rate also contributes to effects of long-term tracking of rate. Here, talker B's speech was replaced by playback of participants' own fast or slow speech. No evidence was found that one's own voice affected perception of talker A in larger speech contexts. These results carry implications for our understanding of the mechanisms involved in rate-dependent speech perception and of dialogue.
  • Massaro, D. W., & Jesse, A. (2007). Audiovisual speech perception and word recognition. In M. G. Gaskell (Ed.), The Oxford handbook of psycholinguistics (pp. 19-35). Oxford: Oxford University Press.

    Abstract

    In most of our everyday conversations, we not only hear but also see each other talk. Our understanding of speech benefits from having the speaker's face present. This finding immediately necessitates the question of how the information from the different perceptual sources is used to reach the best overall decision. This need for processing of multiple sources of information also exists in auditory speech perception, however. Audiovisual speech simply shifts the focus from intramodal to intermodal sources but does not necessitate a qualitatively different form of processing. It is essential that a model of speech perception operationalizes the concept of processing multiple sources of information so that quantitative predictions can be made. This chapter gives an overview of the main research questions and findings unique to audiovisual speech perception and word recognition research as well as what general questions about speech perception and cognition the research in this field can answer. The main theoretical approaches to explain integration and audiovisual speech perception are introduced and critically discussed. The chapter also provides an overview of the role of visual speech as a language learning tool in multimodal training.
  • Matsuo, A. (2004). Young children's understanding of ongoing vs. completion in present and perfective participles. In J. v. Kampen, & S. Baauw (Eds.), Proceedings of GALA 2003 (pp. 305-316). Utrecht: Netherlands Graduate School of Linguistics (LOT).
  • McDonough, L., Choi, S., Bowerman, M., & Mandler, J. M. (1998). The use of preferential looking as a measure of semantic development. In C. Rovee-Collier, L. P. Lipsitt, & H. Hayne (Eds.), Advances in Infancy Research. Volume 12. (pp. 336-354). Stamford, CT: Ablex Publishing.
  • McQueen, J. M., & Cutler, A. (1997). Cognitive processes in speech perception. In W. J. Hardcastle, & J. D. Laver (Eds.), The handbook of phonetic sciences (pp. 556-585). Oxford: Blackwell.
  • McQueen, J. M. (2007). Eight questions about spoken-word recognition. In M. G. Gaskell (Ed.), The Oxford handbook of psycholinguistics (pp. 37-53). Oxford: Oxford University Press.

    Abstract

    This chapter is a review of the literature in experimental psycholinguistics on spoken word recognition. It is organized around eight questions. 1. Why are psycholinguists interested in spoken word recognition? 2. What information in the speech signal is used in word recognition? 3. Where are the words in the continuous speech stream? 4. Which words did the speaker intend? 5. When, as the speech signal unfolds over time, are the phonological forms of words recognized? 6. How are words recognized? 7. Whither spoken word recognition? 8. Who are the researchers in the field?
  • McQueen, J. M., & Cutler, A. (1998). Morphology in word recognition. In A. M. Zwicky, & A. Spencer (Eds.), The handbook of morphology (pp. 406-427). Oxford: Blackwell.
  • McQueen, J. M., & Cutler, A. (1998). Spotting (different kinds of) words in (different kinds of) context. In R. Mannell, & J. Robert-Ribes (Eds.), Proceedings of the Fifth International Conference on Spoken Language Processing: Vol. 6 (pp. 2791-2794). Sydney: ICSLP.

    Abstract

    The results of a word-spotting experiment are presented in which Dutch listeners tried to spot different types of bisyllabic Dutch words embedded in different types of nonsense contexts. Embedded verbs were not reliably harder to spot than embedded nouns; this suggests that nouns and verbs are recognised via the same basic processes. Iambic words were no harder to spot than trochaic words, suggesting that trochaic words are not in principle easier to recognise than iambic words. Words were harder to spot in consonantal contexts (i.e., contexts which themselves could not be words) than in longer contexts which contained at least one vowel (i.e., contexts which, though not words, were possible words of Dutch). A control experiment showed that this difference was not due to acoustic differences between the words in each context. The results support the claim that spoken-word recognition is sensitive to the viability of sound sequences as possible words.
  • Merkx, D., & Scharenborg, O. (2018). Articulatory feature classification using convolutional neural networks. In Proceedings of Interspeech 2018 (pp. 2142-2146). doi:10.21437/Interspeech.2018-2275.

    Abstract

    The ultimate goal of our research is to improve an existing speech-based computational model of human speech recognition on the task of simulating the role of fine-grained phonetic information in human speech processing. As part of this work we are investigating articulatory feature classifiers that are able to create reliable and accurate transcriptions of the articulatory behaviour encoded in the acoustic speech signal. Articulatory feature (AF) modelling of speech has received a considerable amount of attention in automatic speech recognition research. Different approaches have been used to build AF classifiers, most notably multi-layer perceptrons. Recently, deep neural networks have been applied to the task of AF classification. This paper aims to improve AF classification by investigating two approaches: 1) assessing the usefulness of a deep convolutional neural network (CNN) for AF classification; 2) integrating the Mel filtering operation into the CNN architecture. The results showed a remarkable improvement in classification accuracy of the CNNs over state-of-the-art AF classification results for Dutch, most notably in the minority classes. Integrating the Mel filtering operation into the CNN architecture did not further improve classification performance.
  • Meyer, A. S., Wheeldon, L. R., & Krott, A. (Eds.). (2007). Automaticity and control in language processing. Hove: Psychology Press.

    Abstract

    The use of language is a fundamental component of much of our day-to-day life. Language often co-occurs with other activities with which it must be coordinated. This raises the question of whether the cognitive processes involved in planning spoken utterances and in understanding them are autonomous or whether they are affected by, and perhaps affect, non-linguistic cognitive processes, with which they might share processing resources. This question is the central concern of Automaticity and Control in Language Processing. The chapters address key issues concerning the relationship between linguistic and non-linguistic processes, including:
    • How can the degree of automaticity of a component be defined?
    • Which linguistic processes are truly automatic, and which require processing capacity?
    • Through which mechanisms can control processes affect linguistic performance? How might these mechanisms be represented in the brain?
    • How do limitations in working memory and executive control capacity affect linguistic performance and language re-learning in persons with brain damage?
    This important collection from leading international researchers will be of great interest to researchers and students in the area.
  • Meyer, A. S. (2004). The use of eye tracking in studies of sentence generation. In J. M. Henderson, & F. Ferreira (Eds.), The interface of language, vision, and action: Eye movements and the visual world (pp. 191-212). Hove: Psychology Press.
  • Micklos, A., Macuch Silva, V., & Fay, N. (2018). The prevalence of repair in studies of language evolution. In C. Cuskley, M. Flaherty, H. Little, L. McCrohon, A. Ravignani, & T. Verhoef (Eds.), Proceedings of the 12th International Conference on the Evolution of Language (EVOLANG XII) (pp. 316-318). Toruń, Poland: NCU Press. doi:10.12775/3991-1.075.
  • Miedema, J., & Reesink, G. (2004). One head, many faces: New perspectives on the bird's head Peninsula of New Guinea. Leiden: KITLV Press.

    Abstract

    Wider knowledge of New Guinea's Bird's Head Peninsula, home to an indigenous population of 114,000 people who share more than twenty languages, was recently gained through an extensive interdisciplinary research project involving anthropologists, archaeologists, botanists, demographers, geologists, linguists, and specialists in public administration. Analyzing the findings of the project, this book provides a systematic comparison with earlier studies, addressing the geological past, the latest archaeological evidence of early human habitation (dating back at least 26,000 years), and the region's diversity of languages and cultures. The peninsula is an important transitional area between Southeast Asia and Oceania, and this book provides valuable new insights for specialists in both the social and natural sciences into processes of state formation and globalization in the Asia-Pacific zone.
  • Mitterer, H. (2007). Top-down effects on compensation for coarticulation are not replicable. In H. van Hamme, & R. van Son (Eds.), Proceedings of Interspeech 2007 (pp. 1601-1604). Adelaide: Causal Productions.

    Abstract

    Listeners use lexical knowledge to judge what speech sounds they heard. I investigated whether such lexical influences are truly top-down or just reflect a merging of perceptual and lexical constraints. This was achieved by testing whether the lexically determined identity of a phone exerts the appropriate context effects on surrounding phones. The current investigation focuses on compensation for coarticulation in vowel-fricative sequences, where the presence of a rounded vowel (/y/ rather than /i/) leads fricatives to be perceived as /s/ rather than /ʃ/. This result was consistently found in all three experiments. A vowel was also more likely to be perceived as rounded /y/ if that led listeners to perceive words rather than nonwords (Dutch: meny, English id., vs. the nonword meni). This lexical influence on the perception of the vowel had, however, no consistent influence on the perception of the following fricative.
  • Mitterer, H., & McQueen, J. M. (2007). Tracking perception of pronunciation variation by tracking looks to printed words: The case of word-final /t/. In J. Trouvain, & W. J. Barry (Eds.), Proceedings of the 16th International Congress of Phonetic Sciences (ICPhS 2007) (pp. 1929-1932). Dudweiler: Pirrot.

    Abstract

    We investigated perception of words with reduced word-final /t/ using an adapted eyetracking paradigm. Dutch listeners followed spoken instructions to click on printed words which were accompanied on a computer screen by simple shapes (e.g., a circle). Targets were either above or next to their shapes, and the shapes uniquely identified the targets when the spoken forms were ambiguous between words with or without final /t/ (e.g., bult, bump, vs. bul, diploma). Analysis of listeners’ eye-movements revealed, in contrast to earlier results, that listeners use the following segmental context when compensating for /t/-reduction. Reflecting that /t/-reduction is more likely to occur before bilabials, listeners were more likely to look at the /t/-final words if the next word’s first segment was bilabial. This result supports models of speech perception in which prelexical phonological processes use segmental context to modulate word recognition.
  • Mitterer, H. (2007). Behavior reflects the (degree of) reality of phonological features in the brain as well. In J. Trouvain, & W. J. Barry (Eds.), Proceedings of the 16th International Congress of Phonetic Sciences (ICPhS 2007) (pp. 127-130). Dudweiler: Pirrot.

    Abstract

    To assess the reality of phonological features in language processing (vs. language description), one needs to specify the distinctive claims of distinctive-feature theory. Two of the more far-reaching claims are compositionality and generalizability. I will argue that a recent behavioral paradigm provides some evidence for the first claim and evidence against the second. Highlighting the contribution of a behavioral paradigm also counterpoints the use of brain measures as the only way to elucidate what is "real for the brain". The contributions of the speakers exemplify how brain measures can help us to understand the reality of phonological features in language processing. The evidence is, however, not convincing for a) the claim of underspecification of phonological features, which has to deal with counterevidence from behavioral as well as brain measures, and b) the claim of position independence of phonological features.
  • Mitterer, H., Brouwer, S., & Huettig, F. (2018). How important is prediction for understanding spontaneous speech? In N. Mani, R. K. Mishra, & F. Huettig (Eds.), The Interactive Mind: Language, Vision and Attention (pp. 26-40). Chennai: Macmillan Publishers India.
  • Monaghan, P., Brand, J., Frost, R. L. A., & Taylor, G. (2017). Multiple variable cues in the environment promote accurate and robust word learning. In G. Gunzelman, A. Howes, T. Tenbrink, & E. Davelaar (Eds.), Proceedings of the 39th Annual Conference of the Cognitive Science Society (CogSci 2017) (pp. 817-822). Retrieved from https://mindmodeling.org/cogsci2017/papers/0164/index.html.

    Abstract

    Learning how words refer to aspects of the environment is a complex task, but one that is supported by numerous cues within the environment which constrain the possibilities for matching words to their intended referents. In this paper we tested the predictions of a computational model of multiple cue integration for word learning, that predicted variation in the presence of cues provides an optimal learning situation. In a cross-situational learning task with adult participants, we varied the reliability of presence of distributional, prosodic, and gestural cues. We found that the best learning occurred when cues were often present, but not always. The effect of variability increased the salience of individual cues for the learner, but resulted in robust learning that was not vulnerable to individual cues’ presence or absence. Thus, variability of multiple cues in the language-learning environment provided the optimal circumstances for word learning.
  • Monteiro, M., Rieger, S., Steinmüller, U., & Skiba, R. (1997). Deutsch als Fremdsprache: Fachsprache im Ingenieurstudium. Frankfurt am Main: IKO - Verlag für Interkulturelle Kommunikation.
  • Mulder, K., Ten Bosch, L., & Boves, L. (2018). Analyzing EEG Signals in Auditory Speech Comprehension Using Temporal Response Functions and Generalized Additive Models. In Proceedings of Interspeech 2018 (pp. 1452-1456). doi:10.21437/Interspeech.2018-1676.

    Abstract

    Analyzing EEG signals recorded while participants are listening to continuous speech with the purpose of testing linguistic hypotheses is complicated by the fact that the signals simultaneously reflect exogenous acoustic excitation and endogenous linguistic processing. This makes it difficult to trace subtle differences that occur in mid-sentence position. We apply an analysis based on multivariate temporal response functions to uncover subtle mid-sentence effects. This approach is based on a per-stimulus estimate of the response of the neural system to speech input. Analyzing EEG signals predicted on the basis of the response functions might then bring to light condition-specific differences in the filtered signals. We validate this approach by means of an analysis of EEG signals recorded with isolated word stimuli. Then, we apply the validated method to the analysis of the responses to the same words in the middle of meaningful sentences.
  • Narasimhan, B., Eisenbeiss, S., & Brown, P. (Eds.). (2007). The linguistic encoding of multiple-participant events [Special Issue]. Linguistics, 45(3).

    Abstract

    This issue investigates the linguistic encoding of events with three or more participants from the perspectives of language typology and acquisition. Such "multiple-participant events" include (but are not limited to) any scenario involving at least three participants, typically encoded using transactional verbs like 'give' and 'show', placement verbs like 'put', and benefactive and applicative constructions like 'do (something for someone)', among others. There is considerable crosslinguistic and within-language variation in how the participants (the Agent, Causer, Theme, Goal, Recipient, or Experiencer) and the subevents involved in multiple-participant situations are encoded, both at the lexical and the constructional levels.
  • Narasimhan, B., Bowerman, M., Brown, P., Eisenbeiss, S., & Slobin, D. I. (2004). "Putting things in places": Effekte linguisticher Typologie auf die Sprachentwicklung. In G. Plehn (Ed.), Jahrbuch der Max-Planck Gesellschaft (pp. 659-663). Göttingen: Vandenhoeck & Ruprecht.

  • Neijt, A., Schreuder, R., & Baayen, R. H. (2004). Seven years later: The effect of spelling on interpretation. In L. Cornips, & J. Doetjes (Eds.), Linguistics in the Netherlands 2004 (pp. 134-145). Amsterdam: Benjamins.
  • Noordman, L. G., & Vonk, W. (1998). Discourse comprehension. In A. D. Friederici (Ed.), Language comprehension: a biological perspective (pp. 229-262). Berlin: Springer.

    Abstract

    The human language processor is conceived as a system that consists of several interrelated subsystems. Each subsystem performs a specific task in the complex process of language comprehension and production. A subsystem receives a particular input, performs certain specific operations on this input, and yields a particular output. The subsystems can be characterized in terms of the transformations that relate the input representations to the output representations. An important issue in describing the language processing system is to identify the subsystems and to specify the relations between them. These relations can be conceived in two different ways. In one conception the subsystems are autonomous. They are related to each other only by the input-output channels. The operations in one subsystem are not affected by another system. The subsystems are modular, that is, they are independent. In the other conception, the different subsystems influence each other. A subsystem affects the processes in another subsystem. In this conception there is an interaction between the subsystems.
  • Noordman, L. G., & Vonk, W. (1997). The different functions of a conjunction in constructing a representation of the discourse. In J. Costermans, & M. Fayol (Eds.), Processing interclausal relationships: studies in the production and comprehension of text (pp. 75-94). Mahwah, NJ: Lawrence Erlbaum.
  • Norcliffe, E. (2018). Egophoricity and evidentiality in Guambiano (Nam Trik). In S. Floyd, E. Norcliffe, & L. San Roque (Eds.), Egophoricity (pp. 305-345). Amsterdam: Benjamins.

    Abstract

    Egophoric verbal marking is a typological feature common to Barbacoan languages, but otherwise unknown in the Andean sphere. The verbal systems of three out of the four living Barbacoan languages, Cha’palaa, Tsafiki and Awa Pit, have previously been shown to express egophoric contrasts. The status of Guambiano has, however, remained uncertain. In this chapter, I show that there are in fact two layers of egophoric or egophoric-like marking visible in Guambiano’s grammar. Guambiano patterns with certain other (non-Barbacoan) languages in having ego-categories which function within a broader evidential system. It is additionally possible to detect what is possibly a more archaic layer of egophoric marking in Guambiano’s verbal system. This marking may be inherited from a common Barbacoan system, thus pointing to a potential genealogical basis for the egophoric patterning common to these languages. The multiple formal expressions of egophoricity apparent both within and across the four languages reveal how egophoric contrasts are susceptible to structural renewal, suggesting a pan-Barbacoan preoccupation with the linguistic encoding of self-knowledge.
  • O'Connor, L. (2004). Going getting tired: Associated motion through space and time in Lowland Chontal. In M. Achard, & S. Kemmer (Eds.), Language, culture and mind (pp. 181-199). Stanford: CSLI.
  • O'Connor, L. (2007). Motion, transfer, and transformation: The grammar of change in Lowland Chontal. Amsterdam: Benjamins.

    Abstract

    Typologies are critical tools for linguists, but typologies, like grammars, are known to leak. This book addresses the question of typological overlap from the perspective of a single language. In Lowland Chontal of Oaxaca, a language of southern Mexico, change events are expressed with three types of predicates, and each predicate type corresponds to a different language type in the well-known typology of lexicalization patterns established by Talmy and elaborated by others. O’Connor evaluates the predictive powers of the typology by examining the consequences of each predicate type in a variety of contexts, using data from narrative discourse, stimulus response, and elicitation. This is the first detailed look at the lexical and grammatical resources of the verbal system in Chontal and their relation to semantics of change. The analysis of how and why Chontal speakers choose among these verbal resources to achieve particular communicative and social goals serves both as a documentation of an endangered language and a theoretical contribution towards a typology of language use.
  • Omar, R., Henley, S. M., Hailstone, J. C., Sauter, D., Scott, S. K., Fox, N. C., Rossor, M. N., & Warren, J. D. (2007). Recognition of emotions in faces, voices and music in frontotemporal lobar degeneration [Abstract]. Journal of Neurology, Neurosurgery & Psychiatry, 78(9), 1014.

    Abstract

    Frontotemporal lobar degeneration (FTLD) is a group of neurodegenerative conditions characterised by focal frontal and/or temporal lobe atrophy. Patients develop a range of cognitive and behavioural abnormalities, including prominent difficulties in comprehending and expressing emotions, with significant clinical and social consequences. Here we report a systematic prospective analysis of emotion processing in different input modalities in patients with FTLD. We examined recognition of happiness, sadness, fear and anger in facial expressions, non-verbal vocalisations and music in patients with FTLD and in healthy age matched controls. The FTLD group was significantly impaired in all modalities compared with controls, and this effect was most marked for music. Analysing each emotion separately, recognition of negative emotions was impaired in all three modalities in FTLD, and this effect was most marked for fear and anger. Recognition of happiness was deficient only with music. Our findings support the idea that FTLD causes impaired recognition of emotions across input channels, consistent with a common central representation of emotion concepts. Music may be a sensitive probe of emotional deficits in FTLD, perhaps because it requires a more abstract representation of emotion than do animate stimuli such as faces and voices.
  • O'Meara, C., & Majid, A. (2017). El léxico olfativo en la lengua seri. In A. L. M. D. Ruiz, & A. Z. Pérez (Eds.), La Dimensión Sensorial de la Cultura: Diez contribuciones al estudio de los sentidos en México. (pp. 101-118). Mexico City: Universidad Autónoma Metropolitana.
  • Ortega, G., Schiefner, A., & Ozyurek, A. (2017). Speakers’ gestures predict the meaning and perception of iconicity in signs. In G. Gunzelmann, A. Howe, & T. Tenbrink (Eds.), Proceedings of the 39th Annual Conference of the Cognitive Science Society (CogSci 2017) (pp. 889-894). Austin, TX: Cognitive Science Society.

    Abstract

    Sign languages stand out in that there is a high prevalence of conventionalised linguistic forms that map directly to their referent (i.e., iconic). Hearing adults show low performance when asked to guess the meaning of iconic signs, suggesting that their iconic features are largely inaccessible to them. However, it has not been investigated whether speakers’ gestures, which also share the property of iconicity, may assist non-signers in guessing the meaning of signs. Results from a pantomime generation task (Study 1) show that speakers’ gestures exhibit a high degree of systematicity, and share different degrees of form overlap with signs (full, partial, and no overlap). Study 2 shows that signs with full and partial overlap are more accurately guessed and are assigned higher iconicity ratings than signs with no overlap. Deaf and hearing adults converge in their iconic depictions for some concepts due to shared conceptual knowledge and the manual-visual modality.
  • Ozyurek, A. (2007). Processing of multi-modal semantic information: Insights from cross-linguistic comparisons and neurophysiological recordings. In T. Sakamoto (Ed.), Communicating skills of intention (pp. 131-142). Tokyo: Hituzi Syobo Publishing.
  • Ozyurek, A. (2018). Cross-linguistic variation in children’s multimodal utterances. In M. Hickmann, E. Veneziano, & H. Jisa (Eds.), Sources of variation in first language acquisition: Languages, contexts, and learners (pp. 123-138). Amsterdam: Benjamins.

    Abstract

    Our ability to use language is multimodal and requires tight coordination between what is expressed in speech and in gesture, such as pointing or iconic gestures that convey semantic, syntactic and pragmatic information related to speakers’ messages. Interestingly, what is expressed in gesture and how it is coordinated with speech differs in speakers of different languages. This paper discusses recent findings on the development of children’s multimodal expressions taking cross-linguistic variation into account. Although some aspects of speech-gesture development show language-specificity from an early age, it might still take children until nine years of age to exhibit fully adult patterns of cross-linguistic variation. These findings reveal insights about how children coordinate different levels of representations given that their development is constrained by patterns that are specific to their languages.
  • Ozyurek, A. (1998). An analysis of the basic meaning of Turkish demonstratives in face-to-face conversational interaction. In S. Santi, I. Guaitella, C. Cave, & G. Konopczynski (Eds.), Oralite et gestualite: Communication multimodale, interaction: actes du colloque ORAGE 98 (pp. 609-614). Paris: L'Harmattan.
  • Ozyurek, A. (2017). Function and processing of gesture in the context of language. In R. B. Church, M. W. Alibali, & S. D. Kelly (Eds.), Why gesture? How the hands function in speaking, thinking and communicating (pp. 39-58). Amsterdam: John Benjamins Publishing. doi:10.1075/gs.7.03ozy.

    Abstract

    Most research focuses on the function of gesture independently of its link to the speech it accompanies and the co-expressive functions it has together with speech. This chapter instead approaches gesture in terms of its communicative function in relation to speech, and demonstrates how it is shaped by the linguistic encoding of a speaker’s message. Drawing on crosslinguistic research on iconic and pointing gesture production with adults, children, and bilinguals, it shows that the specific language speakers use modulates the rate and the shape of iconic gestures depicting the same events. The findings challenge claims that aim to understand gesture’s function for “thinking only”, both in adults and during development.
  • Ozyurek, A., Kita, S., Allen, S., Furman, R., & Brown, A. (2007). How does linguistic framing of events influence co-speech gestures? Insights from crosslinguistic variations and similarities. In K. Liebal, C. Müller, & S. Pika (Eds.), Gestural communication in nonhuman and human primates (pp. 199-218). Amsterdam: Benjamins.

    Abstract

    What are the relations between linguistic encoding and gestural representations of events during online speaking? The few studies that have been conducted on this topic have yielded somewhat incompatible results with regard to whether and how gestural representations of events change with differences in the preferred semantic and syntactic encoding possibilities of languages. Here we provide large-scale semantic, syntactic and temporal analyses of speech-gesture pairs that depict 10 different motion events from 20 Turkish and 20 English speakers. We find that the gestural representations of the same events differ across languages when they are encoded by different syntactic frames (i.e., verb-framed or satellite-framed). However, where there are similarities across languages, such as omission of a certain element of the event in the linguistic encoding, gestural representations also look similar and omit the same content. The results are discussed in terms of what gestures reveal about the influence of language-specific encoding on on-line thinking patterns and the underlying interactions between speech and gesture during the speaking process.
  • Ozyurek, A. (2018). Role of gesture in language processing: Toward a unified account for production and comprehension. In S.-A. Rueschemeyer, & M. G. Gaskell (Eds.), Oxford Handbook of Psycholinguistics (2nd ed., pp. 592-607). Oxford: Oxford University Press. doi:10.1093/oxfordhb/9780198786825.013.25.

    Abstract

    Use of language in face-to-face context is multimodal. Production and perception of speech take place in the context of visual articulators such as lips, face, or hand gestures which convey relevant information to what is expressed in speech at different levels of language. While lips convey information at the phonological level, gestures contribute to semantic, pragmatic, and syntactic information, as well as to discourse cohesion. This chapter overviews recent findings showing that speech and gesture (e.g. a drinking gesture as someone says, “Would you like a drink?”) interact during production and comprehension of language at the behavioral, cognitive, and neural levels. Implications of these findings for current psycholinguistic theories and how they can be expanded to consider the multimodal context of language processing are discussed.
  • Pallier, C., Cutler, A., & Sebastian-Galles, N. (1997). Prosodic structure and phonetic processing: A cross-linguistic study. In Proceedings of EUROSPEECH 97 (pp. 2131-2134). Grenoble, France: ESCA.

    Abstract

    Dutch and Spanish differ in how predictable the stress pattern is as a function of the segmental content: it is correlated with syllable weight in Dutch but not in Spanish. In the present study, two experiments were run to compare the abilities of Dutch and Spanish speakers to separately process segmental and stress information. It was predicted that the Spanish speakers would have more difficulty focusing on the segments and ignoring the stress pattern than the Dutch speakers. The task was a speeded classification task on CVCV syllables, with blocks of trials in which the stress pattern could vary versus blocks in which it was fixed. First, we found interference due to stress variability in both languages, suggesting that the processing of segmental information cannot be performed independently of stress. Second, the effect was larger for Spanish than for Dutch, suggesting that the degree of interference from stress variation may be partially mitigated by the predictability of stress placement in the language.
  • Papafragou, A., & Ozturk, O. (2007). Children's acquisition of modality. In Proceedings of the 2nd Conference on Generative Approaches to Language Acquisition North America (GALANA 2) (pp. 320-327). Somerville, Mass.: Cascadilla Press.
  • Papafragou, A. (2007). On the acquisition of modality. In T. Scheffler, & L. Mayol (Eds.), Penn Working Papers in Linguistics. Proceedings of the 30th Annual Penn Linguistics Colloquium (pp. 281-293). Department of Linguistics, University of Pennsylvania.
  • Pawley, A., & Hammarström, H. (2018). The Trans New Guinea family. In B. Palmer (Ed.), Papuan Languages and Linguistics (pp. 21-196). Berlin: De Gruyter Mouton.
  • Perlman, M., Fusaroli, R., Fein, D., & Naigles, L. (2017). The use of iconic words in early child-parent interactions. In G. Gunzelmann, A. Howes, T. Tenbrink, & E. Davelaar (Eds.), Proceedings of the 39th Annual Conference of the Cognitive Science Society (CogSci 2017) (pp. 913-918). Austin, TX: Cognitive Science Society.

    Abstract

    This paper examines the use of iconic words in early conversations between children and caregivers. The longitudinal data include a span of six observations of 35 child-parent dyads in the same semi-structured activity. Our findings show that children’s speech initially has a high proportion of iconic words, and over time, these words become diluted by an increase in arbitrary words. Parents’ speech is also initially high in iconic words, with a decrease in the proportion of iconic words over time, in this case driven by the use of fewer iconic words. The level and development of iconicity are related to individual differences in the children’s cognitive skills. Our findings fit with the hypothesis that iconicity facilitates early word learning and may play an important role in learning to produce new words.
  • Perniss, P. M., Pfau, R., & Steinbach, M. (Eds.). (2007). Visible variation: Cross-linguistic studies in sign language structure. Berlin: Mouton de Gruyter.

    Abstract

    It has been argued that properties of the visual-gestural modality impose a homogenizing effect on sign languages, leading to less structural variation in sign language structure as compared to spoken language structure. However, until recently, research on sign languages was limited to a number of (Western) sign languages. Before we can truly answer the question of whether modality effects do indeed cause less structural variation, it is necessary to investigate the similarities and differences that exist between sign languages in more detail and, especially, to include in this investigation less studied sign languages. The current research climate is testimony to a surge of interest in the study of a geographically more diverse range of sign languages. The volume reflects that climate and brings together work by scholars engaging in comparative sign linguistics research. The 11 articles discuss data from many different signed and spoken languages and cover a wide range of topics from different areas of grammar including phonology (word pictures), morphology (pronouns, negation, and auxiliaries), syntax (word order, interrogative clauses, auxiliaries, negation, and referential shift) and pragmatics (modal meaning and referential shift). In addition to this, the contributions address psycholinguistic issues, aspects of language change, and issues concerning data collection in sign languages, thereby providing methodological guidelines for further research. Although some papers use a specific theoretical framework for analyzing the data, the volume clearly focuses on empirical and descriptive aspects of sign language variation.
  • Perniss, P. M., Pfau, R., & Steinbach, M. (2007). Can't you see the difference? Sources of variation in sign language structure. In P. M. Perniss, R. Pfau, & M. Steinbach (Eds.), Visible variation: Cross-linguistic studies in sign language structure (pp. 1-34). Berlin: Mouton de Gruyter.
  • Perniss, P. M. (2007). Locative functions of simultaneous perspective constructions in German sign language narrative. In M. Vermeerbergen, L. Leeson, & O. Crasborn (Eds.), Simultaneity in signed language: Form and function (pp. 27-54). Amsterdam: Benjamins.
