Publications

  • Kuntay, A., & Ozyurek, A. (2002). Joint attention and the development of the use of demonstrative pronouns in Turkish. In B. Skarabela, S. Fish, & A. H. Do (Eds.), Proceedings of the 26th annual Boston University Conference on Language Development (pp. 336-347). Somerville, MA: Cascadilla Press.
  • Kuzla, C., & Ernestus, M. (2007). Prosodic conditioning of phonetic detail of German plosives. In J. Trouvain, & W. J. Barry (Eds.), Proceedings of the 16th International Congress of Phonetic Sciences (ICPhS 2007) (pp. 461-464). Dudweiler: Pirrot.

    Abstract

    The present study investigates the influence of prosodic structure on the fine-grained phonetic details of German plosives which also cue the phonological fortis-lenis contrast. Closure durations were found to be longer at higher prosodic boundaries. There was also less glottal vibration in lenis plosives at higher prosodic boundaries. Voice onset time in lenis plosives was not affected by prosody. In contrast, for the fortis plosives VOT decreased at higher boundaries, as did the maximal intensity of the release. These results demonstrate that the effects of prosody on different phonetic cues can go in opposite directions, but are overall constrained by the need to maintain phonological contrasts. While prosodic effects on some cues are compatible with a ‘fortition’ account of prosodic strengthening or with a general feature enhancement explanation, the effects on others enhance paradigmatic contrasts only within a given prosodic position.
  • Kuzla, C., Ernestus, M., & Mitterer, H. (2010). Compensation for assimilatory devoicing and prosodic structure in German fricative perception. In C. Fougeron, B. Kühnert, M. D'Imperio, & N. Vallée (Eds.), Laboratory Phonology 10 (pp. 731-757). Berlin: De Gruyter.
  • Ladd, D. R., & Cutler, A. (1983). Models and measurements in the study of prosody. In A. Cutler, & D. R. Ladd (Eds.), Prosody: Models and measurements (pp. 1-10). Heidelberg: Springer.
  • Lai, V. T., Chang, M., Duffield, C., Hwang, J., Xue, N., & Palmer, M. (2007). Defining a methodology for mapping Chinese and English sense inventories. In Proceedings of the 8th Chinese Lexical Semantics Workshop 2007 (CLSW 2007). The Hong Kong Polytechnic University, Hong Kong, May 21-23 (pp. 59-65).

    Abstract

    In this study, we explored methods for linking Chinese and English sense inventories using two opposing approaches: creating links (1) bottom-up: by starting at the finer-grained sense level then proceeding to the verb subcategorization frames and (2) top-down: by starting directly with the more coarse-grained frame levels. The sense inventories for linking include pre-existing corpora, such as English Propbank (Palmer, Gildea, and Kingsbury, 2005), Chinese Propbank (Xue and Palmer, 2004) and English WordNet (Fellbaum, 1998) and newly created corpora, the English and Chinese Sense Inventories from DARPA-GALE OntoNotes. In the linking task, we selected a group of highly frequent and polysemous communication verbs, including say, ask, talk, and speak in English, and shuo, biao-shi, jiang, and wen in Chinese. We found that with the bottom-up method, although speakers of both languages agreed on the links between senses, the subcategorization frames of the corresponding senses did not match consistently. With the top-down method, if the verb frames match in both languages, their senses line up with each other more readily. The results indicate that the top-down method is more promising in linking English and Chinese sense inventories.
  • Lai, J., & Poletiek, F. H. (2010). The impact of starting small on the learnability of recursion. In S. Ohlsson, & R. Catrambone (Eds.), Proceedings of the 32nd Annual Conference of the Cognitive Science Society (CogSci 2010) (pp. 1387-1392). Austin, TX, USA: Cognitive Science Society.
  • Lansner, A., Sandberg, A., Petersson, K. M., & Ingvar, M. (2000). On forgetful attractor network memories. In H. Malmgren, M. Borga, & L. Niklasson (Eds.), Artificial neural networks in medicine and biology: Proceedings of the ANNIMAB-1 Conference, Göteborg, Sweden, 13-16 May 2000 (pp. 54-62). Heidelberg: Springer Verlag.

    Abstract

    A recurrently connected attractor neural network with a Hebbian learning rule is currently our best ANN analogy for a piece of cortex. Functionally, biological memory operates on a spectrum of time scales with regard to induction and retention, and it is modulated in complex ways by sub-cortical neuromodulatory systems. Moreover, biological memory networks are commonly believed to be highly distributed and engage many co-operating cortical areas. Here we focus on the temporal aspects of induction and retention of memory in a connectionist type attractor memory model of a piece of cortex. A continuous time, forgetful Bayesian-Hebbian learning rule is described and compared to the characteristics of LTP and LTD seen experimentally. More generally, an attractor network implementing this learning rule can operate as a long-term, intermediate-term, or short-term memory. Modulation of the print-now signal of the learning rule replicates some experimental memory phenomena, such as the von Restorff effect.
  • Lattenkamp, E. Z., Vernes, S. C., & Wiegrebe, L. (2018). Mammalian models for the study of vocal learning: A new paradigm in bats. In C. Cuskley, M. Flaherty, H. Little, L. McCrohon, A. Ravignani, & T. Verhoef (Eds.), Proceedings of the 12th International Conference on the Evolution of Language (EVOLANG XII) (pp. 235-237). Toruń, Poland: NCU Press. doi:10.12775/3991-1.056.
  • Lauscher, A., Eckert, K., Galke, L., Scherp, A., Rizvi, S. T. R., Ahmed, S., Dengel, A., Zumstein, P., & Klein, A. (2018). Linked open citation database: Enabling libraries to contribute to an open and interconnected citation graph. In J. Chen, M. A. Gonçalves, J. M. Allen, E. A. Fox, M.-Y. Kan, & V. Petras (Eds.), JCDL '18: Proceedings of the 18th ACM/IEEE on Joint Conference on Digital Libraries (pp. 109-118). New York: ACM. doi:10.1145/3197026.3197050.

    Abstract

    Citations play a crucial role in the scientific discourse, in information retrieval, and in bibliometrics. Many initiatives are currently promoting the idea of having free and open citation data. Creation of citation data, however, is not part of the cataloging workflow in libraries nowadays.
    In this paper, we present our project Linked Open Citation Database, in which we design distributed processes and a system infrastructure based on linked data technology. The goal is to show that efficiently cataloging citations in libraries using a semi-automatic approach is possible. We specifically describe the current state of the workflow and its implementation. We show that we could significantly improve the automatic reference extraction that is crucial for the subsequent data curation. We further give insights on the curation and linking process and provide evaluation results that not only direct the further development of the project, but also allow us to discuss its overall feasibility.
  • Lecumberri, M. L. G., Cooke, M., & Cutler, A. (Eds.). (2010). Non-native speech perception in adverse conditions [Special Issue]. Speech Communication, 52(11/12).
  • Lefever, E., Hendrickx, I., Croijmans, I., Van den Bosch, A., & Majid, A. (2018). Discovering the language of wine reviews: A text mining account. In N. Calzolari, K. Choukri, C. Cieri, T. Declerck, S. Goggi, K. Hasida, H. Isahara, B. Maegaard, J. Mariani, H. Mazo, A. Moreno, J. Odijk, S. Piperidis, & T. Tokunaga (Eds.), Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018) (pp. 3297-3302). Paris: LREC.

    Abstract

    It is widely held that smells and flavors are impossible to put into words. In this paper we test this claim by seeking predictive patterns in wine reviews, which ostensibly aim to provide guides to perceptual content. Wine reviews have previously been critiqued as random and meaningless. We collected an English corpus of wine reviews with their structured metadata, and applied machine learning techniques to automatically predict the wine's color, grape variety, and country of origin. To train the three supervised classifiers, three different information sources were incorporated: lexical bag-of-words features, domain-specific terminology features, and semantic word embedding features. In addition, using regression analysis we investigated basic review properties, i.e., review length, average word length, and their relationship to the scalar values of price and review score. Our results show that wine experts do share a common vocabulary to describe wines and they use this in a consistent way, which makes it possible to automatically predict wine characteristics based on the review text alone. This means that odors and flavors may be more expressible in language than typically acknowledged.
  • Levelt, W. J. M. (2002). Phonological encoding in speech production: Comments on Jurafsky et al., Schiller et al., and van Heuven & Haan. In C. Gussenhoven, & N. Warner (Eds.), Laboratory phonology VII (pp. 87-99). Berlin: Mouton de Gruyter.
  • Levelt, W. J. M., & Ruijssenaars, A. (1995). Levensbericht Johan Joseph Dumont. In Jaarboek Koninklijke Nederlandse Akademie van Wetenschappen (pp. 31-36).
  • Levelt, W. J. M., Roelofs, A., & Meyer, A. S. (2002). A theory of lexical access in speech production. In G. T. Altmann (Ed.), Psycholinguistics: critical concepts in psychology (pp. 278-377). London: Routledge.
  • Levelt, W. J. M. (1995). Chapters of psychology: An interview with Wilhelm Wundt. In R. L. Solso, & D. W. Massaro (Eds.), The science of mind: 2001 and beyond (pp. 184-202). Oxford University Press.
  • Levelt, W. J. M. (1984). Geesteswetenschappelijke theorie als kompas voor de gangbare mening. In S. Dresden, & D. Van de Kaa (Eds.), Wetenschap ten goede en ten kwade (pp. 42-52). Amsterdam: North Holland.
  • Levelt, W. J. M. (2007). Levensbericht Detlev W. Ploog. In Levensberichten en herdenkingen 2007 (pp. 60-63). Amsterdam: Koninklijke Nederlandse Akademie van Wetenschappen.
  • Levelt, W. J. M. (2000). Introduction Section VII: Language. In M. S. Gazzaniga (Ed.), The new cognitive neurosciences; 2nd ed. (pp. 843-844). Cambridge: MIT Press.
  • Levelt, W. J. M. (1995). Psycholinguistics. In C. C. French, & A. M. Colman (Eds.), Cognitive psychology (reprint, pp. 39-57). London: Longman.
  • Levelt, W. J. M. (2000). Psychology of language. In K. Pawlik, & M. R. Rosenzweig (Eds.), International handbook of psychology (pp. 151-167). London: SAGE publications.
  • Levelt, W. J. M. (1984). Some perceptual limitations on talking about space. In A. J. Van Doorn, W. A. Van de Grind, & J. J. Koenderink (Eds.), Limits in perception (pp. 323-358). Utrecht: VNU Science Press.
  • Levelt, W. J. M. (2000). Speech production. In A. E. Kazdin (Ed.), Encyclopedia of psychology (pp. 432-433). Oxford University Press.
  • Levelt, W. J. M. (1984). Spontaneous self-repairs in speech: Processes and representations. In M. P. R. Van den Broecke, & A. Cohen (Eds.), Proceedings of the 10th International Congress of Phonetic Sciences (pp. 105-117). Dordrecht: Foris.
  • Levelt, W. J. M. (1983). The speaker's organization of discourse. In Proceedings of the XIIIth International Congress of Linguists (pp. 278-290).
  • Levelt, W. J. M., & Indefrey, P. (2000). The speaking mind/brain: Where do spoken words come from? In A. Marantz, Y. Miyashita, & W. O'Neil (Eds.), Image, language, brain: Papers from the First Mind Articulation Project Symposium (pp. 77-94). Cambridge, Mass.: MIT Press.
  • Levinson, S. C. (2007). Optimizing person reference - perspectives from usage on Rossel Island. In N. Enfield, & T. Stivers (Eds.), Person reference in interaction: Linguistic, cultural, and social perspectives (pp. 29-72). Cambridge: Cambridge University Press.

    Abstract

    This chapter explicates the requirement in person–reference for balancing demands for recognition, minimalization, explicitness and indirection. This is illustrated with reference to data from repair of failures of person–reference within a particular linguistic/cultural context, namely casual interaction among Rossel Islanders. Rossel Island (PNG) offers a ‘natural experiment’ for studying aspects of person reference, because of a number of special properties: 1. It is a closed universe of 4000 souls, sharing one kinship network, so in principle anyone could be recognizable from a reference. As a result no (complex) descriptions (cf. ‘the author of Waverley’) are employed. 2. Names, however, are never uniquely referring, since they are drawn from a fixed pool. They are only used for about 25% of initial references, another 25% of initial references being done by kinship triangulation (‘that man’s father–in–law’). Nearly 50% of initial references are semantically underspecified or vague (e.g. ‘that girl’). 3. There are systematic motivations for oblique reference, e.g. kinship–based taboos and other constraints, which partly account for the underspecified references. The ‘natural experiment’ thus reveals some general lessons about how person–reference requires optimizing multiple conflicting constraints. Comparison with Sacks and Schegloff’s (1979) treatment of English person reference suggests a way to tease apart the universal and the culturally–particular.
  • Levinson, S. C. (1995). 'Logical' Connectives in Natural Language: A First Questionnaire. In D. Wilkins (Ed.), Extensions of space and beyond: manual for field elicitation for the 1995 field season (pp. 61-69). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.3513476.

    Abstract

    It has been hypothesised that human reasoning has a non-linguistic foundation, but is nevertheless influenced by the formal means available in a language. For example, Western logic is transparently related to European sentential connectives (e.g., and, if … then, or, not), some of which cannot be unambiguously expressed in other languages. The questionnaire explores reasoning tools and practices through investigating translation equivalents of English sentential connectives and collecting examples of “reasoned arguments”.
  • Levinson, S. C., Senft, G., & Majid, A. (2007). Emotion categories in language and thought. In A. Majid (Ed.), Field Manual Volume 10 (pp. 46-52). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.492892.
  • Levinson, S. C. (2002). Appendix to the 2002 Supplement, version 1, for the “Manual” for the field season 2001. In S. Kita (Ed.), 2002 Supplement (version 3) for the “Manual” for the field season 2001 (pp. 62-64). Nijmegen: Max Planck Institute for Psycholinguistics.
  • Levinson, S. C. (2010). Generalized conversational implicature. In L. Cummings (Ed.), The pragmatics encyclopedia (pp. 201-203). London: Routledge.
  • Levinson, S. C. (2002). Landscape terms and place names in Yélî Dnye, the language of Rossel Island, PNG. In S. Kita (Ed.), 2002 Supplement (version 3) for the “Manual” for the field season 2001 (pp. 8-13). Nijmegen: Max Planck Institute for Psycholinguistics.
  • Levinson, S. C. (2000). Language as nature and language as art. In J. Mittelstrass, & W. Singer (Eds.), Proceedings of the Symposium on ‘Changing concepts of nature and the turn of the Millennium’ (pp. 257-287). Vatican City: Pontificae Academiae Scientiarium Scripta Varia.
  • Levinson, S. C., Majid, A., & Enfield, N. J. (2007). Language of perception: The view from language and culture. In A. Majid (Ed.), Field Manual Volume 10 (pp. 10-21). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.468738.
  • Levinson, S. C. (2000). H.P. Grice on location on Rossel Island. In S. S. Chang, L. Liaw, & J. Ruppenhofer (Eds.), Proceedings of the 25th Annual Meeting of the Berkeley Linguistic Society (pp. 210-224). Berkeley: Berkeley Linguistic Society.
  • Levinson, S. C. (1995). Interactional biases in human thinking. In E. N. Goody (Ed.), Social intelligence and interaction (pp. 221-260). Cambridge: Cambridge University Press.
  • Levinson, S. C. (2018). Introduction: Demonstratives: Patterns in diversity. In S. C. Levinson, S. Cutfield, M. Dunn, N. J. Enfield, & S. Meira (Eds.), Demonstratives in cross-linguistic perspective (pp. 1-42). Cambridge: Cambridge University Press.
  • Levinson, S. C., & Majid, A. (2007). The language of sound. In A. Majid (Ed.), Field Manual Volume 10 (pp. 29-31). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.468735.
  • Levinson, S. C., & Majid, A. (2007). The language of vision II: Shape. In A. Majid (Ed.), Field Manual Volume 10 (pp. 26-28). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.468732.
  • Levinson, S. C. (1995). Three levels of meaning. In F. Palmer (Ed.), Grammar and meaning: Essays in honour of Sir John Lyons (pp. 90-115). Cambridge University Press.
  • Levinson, S. C. (2018). Yélî Dnye: Demonstratives in the language of Rossel Island, Papua New Guinea. In S. C. Levinson, S. Cutfield, M. Dunn, N. J. Enfield, & S. Meira (Eds.), Demonstratives in cross-linguistic perspective (pp. 318-342). Cambridge: Cambridge University Press.
  • Lindström, E., Terrill, A., Reesink, G., & Dunn, M. (2007). The languages of Island Melanesia. In J. S. Friedlaender (Ed.), Genes, language, and culture history in the Southwest Pacific (pp. 118-140). Oxford: Oxford University Press.

    Abstract

    This chapter provides an overview of the Papuan and the Oceanic languages (a branch of Austronesian) in Northern Island Melanesia, as well as phenomena arising through contact between these groups. It shows how linguistics can contribute to the understanding of the history of languages and speakers, and what the findings of those methods have been. The location of the homeland of speakers of Proto-Oceanic is indicated (in northeast New Britain); many facets of the lives of those speakers are shown; and the patterns of their subsequent spread across Island Melanesia and beyond into Remote Oceania are indicated, followed by a second wave overlaying the first into New Guinea and as far as halfway through the Solomon Islands. Regarding the Papuan languages of this region, at least some are older than the 6,000-10,000 year ceiling of the Comparative Method, and their relations are explored with the aid of a database of 125 non-lexical structural features. The results reflect archipelago-based clustering with the Central Solomons Papuan languages forming a clade either with the Bismarcks or with Bougainville languages. Papuan languages in Bougainville are less influenced by Oceanic languages than those in the Bismarcks and the Solomons. The chapter considers a variety of scenarios to account for these findings, concluding that the results are compatible with multiple pre-Oceanic waves of arrivals into the area after initial settlement.
  • Liszkowski, U. (2010). Before L1: A differentiated perspective on infant gestures. In M. Gullberg, & K. De Bot (Eds.), Gestures in language development (pp. 35-51). Amsterdam: Benjamins.
  • Liszkowski, U. (2007). Human twelve-month-olds point cooperatively to share interest with and helpfully provide information for a communicative partner. In K. Liebal, C. Müller, & S. Pika (Eds.), Gestural communication in nonhuman and human primates (pp. 124-140). Amsterdam: Benjamins.

    Abstract

    This paper investigates infant pointing at 12 months. Three recent experimental studies from our lab are reported and contrasted with existing accounts on infant communicative and social-cognitive abilities. The new results show that infant pointing at 12 months already is a communicative act which involves the intentional transmission of information to share interest with, or provide information for other persons. It is argued that infant pointing is an inherently social and cooperative act which is used to share psychological relations between interlocutors and environment, repairs misunderstandings in proto-conversational turn-taking, and helps others by providing information. Infant pointing builds on an understanding of others as persons with attentional states and attitudes. Findings do not support lean accounts on early infant pointing which posit that it is initially non-communicative, does not serve the function of indicating, or is purely self-centered. It is suggested to investigate the emergence of reference and the motivation to jointly engage with others also before pointing has emerged.
  • Liszkowski, U., & Brown, P. (2007). Infant pointing (9-15 months) in different cultures. In A. Majid (Ed.), Field Manual Volume 10 (pp. 82-88). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.492895.

    Abstract

    There are two tasks for conducting systematic observation of child-caregiver joint attention interactions. Task 1 – a “decorated room” designed to elicit infant and caregiver pointing. Task 2 – videotaped interviews about infant pointing behaviour. The goal of this task is to document the ontogenetic emergence of referential communication in caregiver infant interaction in different cultures, during the critical age of 8-15 months when children come to understand and share others’ intentions. This is of interest to all students of interaction and human communication; it does not require specialist knowledge of children.
  • Lopopolo, A., Frank, S. L., Van den Bosch, A., Nijhof, A., & Willems, R. M. (2018). The Narrative Brain Dataset (NBD), an fMRI dataset for the study of natural language processing in the brain. In B. Devereux, E. Shutova, & C.-R. Huang (Eds.), Proceedings of LREC 2018 Workshop "Linguistic and Neuro-Cognitive Resources (LiNCR)" (pp. 8-11). Paris: LREC.

    Abstract

    We present the Narrative Brain Dataset, an fMRI dataset that was collected during spoken presentation of short excerpts of three stories in Dutch. Together with the brain imaging data, the dataset contains the written versions of the stimulation texts. The texts are accompanied with stochastic (perplexity and entropy) and semantic computational linguistic measures. The richness and unconstrained nature of the data allows the study of language processing in the brain in a more naturalistic setting than is common for fMRI studies. We hope that by making NBD available we serve the double purpose of providing useful neural data to researchers interested in natural language processing in the brain and to further stimulate data sharing in the field of neuroscience of language.
  • Lupyan, G., Wendorf, A., Berscia, L. M., & Paul, J. (2018). Core knowledge or language-augmented cognition? The case of geometric reasoning. In C. Cuskley, M. Flaherty, H. Little, L. McCrohon, A. Ravignani, & T. Verhoef (Eds.), Proceedings of the 12th International Conference on the Evolution of Language (EVOLANG XII) (pp. 252-254). Toruń, Poland: NCU Press. doi:10.12775/3991-1.062.
  • Mai, F., Galke, L., & Scherp, A. (2018). Using deep learning for title-based semantic subject indexing to reach competitive performance to full-text. In J. Chen, M. A. Gonçalves, J. M. Allen, E. A. Fox, M.-Y. Kan, & V. Petras (Eds.), JCDL '18: Proceedings of the 18th ACM/IEEE on Joint Conference on Digital Libraries (pp. 169-178). New York: ACM.

    Abstract

    For (semi-)automated subject indexing systems in digital libraries, it is often more practical to use metadata such as the title of a publication instead of the full-text or the abstract. Therefore, it is desirable to have good text mining and text classification algorithms that operate well already on the title of a publication. So far, the classification performance on titles is not competitive with the performance on the full-texts if the same number of training samples is used for training. However, it is much easier to obtain title data in large quantities and to use it for training than full-text data. In this paper, we investigate the question how models obtained from training on increasing amounts of title training data compare to models from training on a constant number of full-texts. We evaluate this question on a large-scale dataset from the medical domain (PubMed) and from economics (EconBiz). In these datasets, the titles and annotations of millions of publications are available, and they outnumber the available full-texts by a factor of 20 and 15, respectively. To exploit these large amounts of data to their full potential, we develop three strong deep learning classifiers and evaluate their performance on the two datasets. The results are promising. On the EconBiz dataset, all three classifiers outperform their full-text counterparts by a large margin. The best title-based classifier outperforms the best full-text method by 9.4%. On the PubMed dataset, the best title-based method almost reaches the performance of the best full-text classifier, with a difference of only 2.9%.
  • Majid, A. (2018). Cultural factors shape olfactory language [Reprint]. In D. Howes (Ed.), Senses and Sensation: Critical and Primary Sources. Volume 3 (pp. 307-310). London: Bloomsbury Publishing.
  • Majid, A., & Bowerman, M. (Eds.). (2007). Cutting and breaking events: A crosslinguistic perspective [Special Issue]. Cognitive Linguistics, 18(2).

    Abstract

    This special issue of Cognitive Linguistics explores the linguistic encoding of events of cutting and breaking. In this article we first introduce the project on which it is based by motivating the selection of this conceptual domain, presenting the methods of data collection used by all the investigators, and characterizing the language sample. We then present a new approach to examining crosslinguistic similarities and differences in semantic categorization. Applying statistical modeling to the descriptions of cutting and breaking events elicited from speakers of all the languages, we show that although there is crosslinguistic variation in the number of distinctions made and in the placement of category boundaries, these differences take place within a strongly constrained semantic space: across languages, there is a surprising degree of consensus on the partitioning of events in this domain. In closing, we compare our statistical approach with more conventional semantic analyses, and show how an extensional semantic typological approach like the one illustrated here can help illuminate the intensional distinctions made by languages.
  • Majid, A. (2018). Language and cognition. In H. Callan (Ed.), The International Encyclopedia of Anthropology. Hoboken: John Wiley & Sons Ltd.

    Abstract

    What is the relationship between the language we speak and the way we think? Researchers working at the interface of language and cognition hope to understand the complex interplay between linguistic structures and the way the mind works. This is thorny territory in anthropology and its closely allied disciplines, such as linguistics and psychology.
  • Majid, A., & Levinson, S. C. (2007). Language of perception: Overview of field tasks. In A. Majid (Ed.), Field Manual Volume 10 (pp. 8-9). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.492898.
  • Majid, A. (2007). Preface and priorities. In A. Majid (Ed.), Field manual volume 10 (p. 3). Nijmegen: Max Planck Institute for Psycholinguistics.
  • Majid, A., Senft, G., & Levinson, S. C. (2007). The language of olfaction. In A. Majid (Ed.), Field Manual Volume 10 (pp. 36-41). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.492910.
  • Majid, A., Senft, G., & Levinson, S. C. (2007). The language of touch. In A. Majid (Ed.), Field Manual Volume 10 (pp. 32-35). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.492907.
  • Majid, A., & Levinson, S. C. (2007). The language of vision I: colour. In A. Majid (Ed.), Field Manual Volume 10 (pp. 22-25). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.492901.
  • Majid, A. (2010). Words for parts of the body. In B. C. Malt, & P. Wolff (Eds.), Words and the Mind: How words capture human experience (pp. 58-71). New York: Oxford University Press.
  • Malaisé, V., Gazendam, L., & Brugman, H. (2007). Disambiguating automatic semantic annotation based on a thesaurus structure. In Proceedings of TALN 2007.
  • Mamus, E., & Karadöller, D. Z. (2018). Anıları Zihinde Canlandırma [Imagery in autobiographical memories]. In S. Gülgöz, B. Ece, & S. Öner (Eds.), Hayatı Hatırlamak: Otobiyografik Belleğe Bilimsel Yaklaşımlar [Remembering Life: Scientific Approaches to Autobiographical Memory] (pp. 185-200). Istanbul, Turkey: Koç University Press.
  • Mani, N., Mishra, R. K., & Huettig, F. (2018). Introduction to 'The Interactive Mind: Language, Vision and Attention'. In N. Mani, R. K. Mishra, & F. Huettig (Eds.), The Interactive Mind: Language, Vision and Attention (pp. 1-2). Chennai: Macmillan Publishers India.
  • Martin, A., & Van Turennout, M. (2002). Searching for the neural correlates of object priming. In L. R. Squire, & D. L. Schacter (Eds.), The Neuropsychology of Memory (pp. 239-247). New York: Guilford Press.
  • Massaro, D. W., & Jesse, A. (2007). Audiovisual speech perception and word recognition. In M. G. Gaskell (Ed.), The Oxford handbook of psycholinguistics (pp. 19-35). Oxford: Oxford University Press.

    Abstract

    In most of our everyday conversations, we not only hear but also see each other talk. Our understanding of speech benefits from having the speaker's face present. This finding immediately necessitates the question of how the information from the different perceptual sources is used to reach the best overall decision. This need for processing of multiple sources of information also exists in auditory speech perception, however. Audiovisual speech simply shifts the focus from intramodal to intermodal sources but does not necessitate a qualitatively different form of processing. It is essential that a model of speech perception operationalizes the concept of processing multiple sources of information so that quantitative predictions can be made. This chapter gives an overview of the main research questions and findings unique to audiovisual speech perception and word recognition research as well as what general questions about speech perception and cognition the research in this field can answer. The main theoretical approaches to explain integration and audiovisual speech perception are introduced and critically discussed. The chapter also provides an overview of the role of visual speech as a language learning tool in multimodal training.
  • Matic, D. (2010). Discourse and syntax in linguistic change: Decline of postverbal topical subjects in Serbo-Croat. In G. Ferraresi, & R. Lühr (Eds.), Diachronic studies on information structure: Language acquisition and change (pp. 117-142). Berlin: Mouton de Gruyter.
  • Matsuo, A., & Duffield, N. (2002). Assessing the generality of knowledge about English ellipsis in SLA. In J. Costa, & M. J. Freitas (Eds.), Proceedings of the GALA 2001 Conference on Language Acquisition (pp. 49-53). Lisboa: Associacao Portuguesa de Linguistica.
  • Matsuo, A., & Duffield, N. (2002). Finiteness and parallelism: Assessing the generality of knowledge about English ellipsis in SLA. In B. Skarabela, S. Fish, & A.-H.-J. Do (Eds.), Proceedings of the 26th Boston University Conference on Language Development (pp. 197-207). Somerville, Massachusetts: Cascadilla Press.
  • Mauner, G., Koenig, J.-P., Melinger, A., & Bienvenue, B. (2002). The lexical source of unexpressed participants and their role in sentence and discourse understanding. In P. Merlo, & S. Stevenson (Eds.), The Lexical Basis of Sentence Processing: Formal, Computational and Experimental Issues (pp. 233-254). Amsterdam: John Benjamins.
  • Mazzone, M., & Campisi, E. (2010). Embodiment, metafore, comunicazione [Embodiment, metaphors, communication]. In G. P. Storari, & E. Gola (Eds.), Forme e formalizzazioni. Atti del XVI congresso nazionale [Forms and formalizations: Proceedings of the XVI national congress]. Cagliari: CUEC.
  • Mazzone, M., & Campisi, E. (2010). Are there communicative intentions? In L. A. Pérez Miranda, & A. I. Madariaga (Eds.), Advances in cognitive science. IWCogSc-10. Proceedings of the ILCLI International Workshop on Cognitive Science (pp. 307-322). Bilbao, Spain: The University of the Basque Country.

    Abstract

    Grice in pragmatics and Levelt in psycholinguistics have proposed models of human communication where the starting point of communicative action is an individual intention. This assumption, though, has to face serious objections with regard to the alleged existence of explicit representations of the communicative goals to be pursued. Here evidence is surveyed which shows that in fact speaking may ordinarily be a quite automatic activity prompted by contextual cues and driven by behavioural schemata abstracted away from social regularities. On the one hand, this means that there could exist no intentions in the sense of explicit representations of communicative goals, following from deliberate reasoning and triggering the communicative action. On the other hand, however, there are reasons to allow for a weaker notion of intention than this, according to which communication is an intentional affair, after all. Communicative action is said to be intentional in this weaker sense to the extent that it is subject to a double mechanism of control, with respect both to present-directed and future-directed intentions.
  • McQueen, J. M., & Cutler, A. (2010). Cognitive processes in speech perception. In W. J. Hardcastle, J. Laver, & F. E. Gibbon (Eds.), The handbook of phonetic sciences (2nd ed., pp. 489-520). Oxford: Blackwell.
  • McQueen, J. M. (2007). Eight questions about spoken-word recognition. In M. G. Gaskell (Ed.), The Oxford handbook of psycholinguistics (pp. 37-53). Oxford: Oxford University Press.

    Abstract

    This chapter is a review of the literature in experimental psycholinguistics on spoken word recognition. It is organized around eight questions. 1. Why are psycholinguists interested in spoken word recognition? 2. What information in the speech signal is used in word recognition? 3. Where are the words in the continuous speech stream? 4. Which words did the speaker intend? 5. When, as the speech signal unfolds over time, are the phonological forms of words recognized? 6. How are words recognized? 7. Whither spoken word recognition? 8. Who are the researchers in the field?
  • McQueen, J. M., Cutler, A., & Norris, D. (2000). Positive and negative influences of the lexicon on phonemic decision-making. In B. Yuan, T. Huang, & X. Tang (Eds.), Proceedings of the Sixth International Conference on Spoken Language Processing: Vol. 3 (pp. 778-781). Beijing: China Military Friendship Publish.

    Abstract

    Lexical knowledge influences how human listeners make decisions about speech sounds. Positive lexical effects (faster responses to target sounds in words than in nonwords) are robust across several laboratory tasks, while negative effects (slower responses to targets in more word-like nonwords than in less word-like nonwords) have been found in phonetic decision tasks but not phoneme monitoring tasks. The present experiments tested whether negative lexical effects are therefore a task-specific consequence of the forced choice required in phonetic decision. We compared phoneme monitoring and phonetic decision performance using the same Dutch materials in each task. In both experiments there were positive lexical effects, but no negative lexical effects. We observe that in all studies showing negative lexical effects, the materials were made by cross-splicing, which meant that they contained perceptual evidence supporting the lexically-consistent phonemes. Lexical knowledge seems to influence phonemic decision-making only when there is evidence for the lexically-consistent phoneme in the speech signal.
  • McQueen, J. M., Cutler, A., & Norris, D. (2000). Why Merge really is autonomous and parsimonious. In A. Cutler, J. M. McQueen, & R. Zondervan (Eds.), Proceedings of SWAP (Workshop on Spoken Word Access Processes) (pp. 47-50). Nijmegen: Max-Planck-Institute for Psycholinguistics.

    Abstract

    We briefly describe the Merge model of phonemic decision-making, and, in the light of general arguments about the possible role of feedback in spoken-word recognition, defend Merge's feedforward structure. Merge not only accounts adequately for the data, without invoking feedback connections, but does so in a parsimonious manner.
  • Merkx, D., & Scharenborg, O. (2018). Articulatory feature classification using convolutional neural networks. In Proceedings of Interspeech 2018 (pp. 2142-2146). doi:10.21437/Interspeech.2018-2275.

    Abstract

    The ultimate goal of our research is to improve an existing speech-based computational model of human speech recognition on the task of simulating the role of fine-grained phonetic information in human speech processing. As part of this work we are investigating articulatory feature classifiers that are able to create reliable and accurate transcriptions of the articulatory behaviour encoded in the acoustic speech signal. Articulatory feature (AF) modelling of speech has received a considerable amount of attention in automatic speech recognition research. Different approaches have been used to build AF classifiers, most notably multi-layer perceptrons. Recently, deep neural networks have been applied to the task of AF classification. This paper aims to improve AF classification by investigating two different approaches: 1) investigating the usefulness of a deep convolutional neural network (CNN) for AF classification; 2) integrating the Mel filtering operation into the CNN architecture. The results showed a remarkable improvement in classification accuracy of the CNNs over state-of-the-art AF classification results for Dutch, most notably in the minority classes. Integrating the Mel filtering operation into the CNN architecture did not further improve classification performance.
  • Micklos, A., Macuch Silva, V., & Fay, N. (2018). The prevalence of repair in studies of language evolution. In C. Cuskley, M. Flaherty, H. Little, L. McCrohon, A. Ravignani, & T. Verhoef (Eds.), Proceedings of the 12th International Conference on the Evolution of Language (EVOLANG XII) (pp. 316-318). Toruń, Poland: NCU Press. doi:10.12775/3991-1.075.
  • Mitterer, H. (2007). Top-down effects on compensation for coarticulation are not replicable. In H. van Hamme, & R. van Son (Eds.), Proceedings of Interspeech 2007 (pp. 1601-1604). Adelaide: Causal Productions.

    Abstract

    Listeners use lexical knowledge to judge what speech sounds they heard. I investigated whether such lexical influences are truly top-down or just reflect a merging of perceptual and lexical constraints. This is achieved by testing whether the lexically determined identity of a phone exerts the appropriate context effects on surrounding phones. The current investigation focuses on compensation for coarticulation in vowel-fricative sequences, where the presence of a rounded vowel (/y/ rather than /i/) leads fricatives to be perceived as /s/ rather than /ʃ/. This result was consistently found in all three experiments. A vowel was also more likely to be perceived as rounded /y/ if that led listeners to perceive words rather than nonwords (Dutch: meny, English id. vs. meni nonword). This lexical influence on the perception of the vowel had, however, no consistent influence on the perception of the following fricative.
  • Mitterer, H., & McQueen, J. M. (2007). Tracking perception of pronunciation variation by tracking looks to printed words: The case of word-final /t/. In J. Trouvain, & W. J. Barry (Eds.), Proceedings of the 16th International Congress of Phonetic Sciences (ICPhS 2007) (pp. 1929-1932). Dudweiler: Pirrot.

    Abstract

    We investigated perception of words with reduced word-final /t/ using an adapted eyetracking paradigm. Dutch listeners followed spoken instructions to click on printed words which were accompanied on a computer screen by simple shapes (e.g., a circle). Targets were either above or next to their shapes, and the shapes uniquely identified the targets when the spoken forms were ambiguous between words with or without final /t/ (e.g., bult, bump, vs. bul, diploma). Analysis of listeners’ eye-movements revealed, in contrast to earlier results, that listeners use the following segmental context when compensating for /t/-reduction. Reflecting that /t/-reduction is more likely to occur before bilabials, listeners were more likely to look at the /t/-final words if the next word’s first segment was bilabial. This result supports models of speech perception in which prelexical phonological processes use segmental context to modulate word recognition.
  • Mitterer, H. (2007). Behavior reflects the (degree of) reality of phonological features in the brain as well. In J. Trouvain, & W. J. Barry (Eds.), Proceedings of the 16th International Congress of Phonetic Sciences (ICPhS 2007) (pp. 127-130). Dudweiler: Pirrot.

    Abstract

    To assess the reality of phonological features in language processing (vs. language description), one needs to specify the distinctive claims of distinctive-feature theory. Two of the more far-reaching claims are compositionality and generalizability. I will argue that there is some evidence for the first and evidence against the second claim from a recent behavioral paradigm. Highlighting the contribution of a behavioral paradigm also counterpoints the use of brain measures as the only way to elucidate what is "real for the brain". The contributions of the speakers exemplify how brain measures can help us to understand the reality of phonological features in language processing. The evidence is, however, not convincing for a) the claim for underspecification of phonological features—which has to deal with counterevidence from behavioral as well as brain measures—and b) the claim of position independence of phonological features.
  • Mitterer, H., Brouwer, S., & Huettig, F. (2018). How important is prediction for understanding spontaneous speech? In N. Mani, R. K. Mishra, & F. Huettig (Eds.), The Interactive Mind: Language, Vision and Attention (pp. 26-40). Chennai: Macmillan Publishers India.
  • Mulder, K., Ten Bosch, L., & Boves, L. (2018). Analyzing EEG Signals in Auditory Speech Comprehension Using Temporal Response Functions and Generalized Additive Models. In Proceedings of Interspeech 2018 (pp. 1452-1456). doi:10.21437/Interspeech.2018-1676.

    Abstract

    Analyzing EEG signals recorded while participants are listening to continuous speech with the purpose of testing linguistic hypotheses is complicated by the fact that the signals simultaneously reflect exogenous acoustic excitation and endogenous linguistic processing. This makes it difficult to trace subtle differences that occur in mid-sentence position. We apply an analysis based on multivariate temporal response functions to uncover subtle mid-sentence effects. This approach is based on a per-stimulus estimate of the response of the neural system to speech input. Analyzing EEG signals predicted on the basis of the response functions might then bring to light condition-specific differences in the filtered signals. We validate this approach by means of an analysis of EEG signals recorded with isolated word stimuli. Then, we apply the validated method to the analysis of the responses to the same words in the middle of meaningful sentences.
  • Munro, R., Bethard, S., Kuperman, V., Lai, V. T., Melnick, R., Potts, C., Schnoebelen, T., & Tily, H. (2010). Crowdsourcing and language studies: The new generation of linguistic data. In Workshop on Creating Speech and Language Data with Amazon’s Mechanical Turk. Proceedings of the Workshop (pp. 122-130). Stroudsburg, PA: Association for Computational Linguistics.
  • Naffah, N., Kempen, G., Rohmer, J., Steels, L., Tsichritzis, D., & White, G. (1985). Intelligent Workstation in the office: State of the art and future perspectives. In J. Roukens, & J. Renuart (Eds.), Esprit '84: Status report of ongoing work (pp. 365-378). Amsterdam: Elsevier Science Publishers.
  • Narasimhan, B., Eisenbeiss, S., & Brown, P. (Eds.). (2007). The linguistic encoding of multiple-participant events [Special Issue]. Linguistics, 45(3).

    Abstract

    This issue investigates the linguistic encoding of events with three or more participants from the perspectives of language typology and acquisition. Such “multiple-participant events” include (but are not limited to) any scenario involving at least three participants, typically encoded using transactional verbs like 'give' and 'show', placement verbs like 'put', and benefactive and applicative constructions like 'do (something for someone)', among others. There is considerable crosslinguistic and within-language variation in how the participants (the Agent, Causer, Theme, Goal, Recipient, or Experiencer) and the subevents involved in multiple-participant situations are encoded, both at the lexical and the constructional levels.
  • Nas, G., Kempen, G., & Hudson, P. (1984). De rol van spelling en klank bij woordherkenning tijdens het lezen. In A. Thomassen, L. Noordman, & P. Elling (Eds.), Het leesproces. Lisse: Swets & Zeitlinger.
  • Norcliffe, E. (2018). Egophoricity and evidentiality in Guambiano (Nam Trik). In S. Floyd, E. Norcliffe, & L. San Roque (Eds.), Egophoricity (pp. 305-345). Amsterdam: Benjamins.

    Abstract

    Egophoric verbal marking is a typological feature common to Barbacoan languages, but otherwise unknown in the Andean sphere. The verbal systems of three out of the four living Barbacoan languages, Cha’palaa, Tsafiki and Awa Pit, have previously been shown to express egophoric contrasts. The status of Guambiano has, however, remained uncertain. In this chapter, I show that there are in fact two layers of egophoric or egophoric-like marking visible in Guambiano’s grammar. Guambiano patterns with certain other (non-Barbacoan) languages in having ego-categories which function within a broader evidential system. It is additionally possible to detect what is possibly a more archaic layer of egophoric marking in Guambiano’s verbal system. This marking may be inherited from a common Barbacoan system, thus pointing to a potential genealogical basis for the egophoric patterning common to these languages. The multiple formal expressions of egophoricity apparent both within and across the four languages reveal how egophoric contrasts are susceptible to structural renewal, suggesting a pan-Barbacoan preoccupation with the linguistic encoding of self-knowledge.
  • Norcliffe, E., Enfield, N. J., Majid, A., & Levinson, S. C. (2010). The grammar of perception. In E. Norcliffe, & N. J. Enfield (Eds.), Field manual volume 13 (pp. 7-16). Nijmegen: Max Planck Institute for Psycholinguistics.
  • Norris, D., Cutler, A., McQueen, J. M., Butterfield, S., & Kearns, R. K. (2000). Language-universal constraints on the segmentation of English. In A. Cutler, J. M. McQueen, & R. Zondervan (Eds.), Proceedings of SWAP (Workshop on Spoken Word Access Processes) (pp. 43-46). Nijmegen: Max-Planck-Institute for Psycholinguistics.

    Abstract

    Two word-spotting experiments are reported that examine whether the Possible-Word Constraint (PWC) [1] is a language-specific or language-universal strategy for the segmentation of continuous speech. The PWC disfavours parses which leave an impossible residue between the end of a candidate word and a known boundary. The experiments examined cases where the residue was either a CV syllable with a lax vowel, or a CVC syllable with a schwa. Although neither syllable context is a possible word in English, word-spotting in both contexts was easier than with a context consisting of a single consonant. The PWC appears to be language-universal rather than language-specific.
  • Norris, D., Cutler, A., & McQueen, J. M. (2000). The optimal architecture for simulating spoken-word recognition. In C. Davis, T. Van Gelder, & R. Wales (Eds.), Cognitive Science in Australia, 2000: Proceedings of the Fifth Biennial Conference of the Australasian Cognitive Science Society. Adelaide: Causal Productions.

    Abstract

    Simulations explored the inability of the TRACE model of spoken-word recognition to model the effects on human listening of subcategorical mismatch in word forms. The source of TRACE's failure lay not in interactive connectivity, not in the presence of inter-word competition, and not in the use of phonemic representations, but in the need for continuously optimised interpretation of the input. When an analogue of TRACE was allowed to cycle to asymptote on every slice of input, an acceptable simulation of the subcategorical mismatch data was achieved. Even then, however, the simulation was not as close as that produced by the Merge model, which has inter-word competition, phonemic representations and continuous optimisation (but no interactive connectivity).
  • Omar, R., Henley, S. M., Hailstone, J. C., Sauter, D., Scott, S. K., Fox, N. C., Rossor, M. N., & Warren, J. D. (2007). Recognition of emotions in faces, voices and music in frontotemporal lobar degeneration [Abstract]. Journal of Neurology, Neurosurgery & Psychiatry, 78(9), 1014.

    Abstract

    Frontotemporal lobar degeneration (FTLD) is a group of neurodegenerative conditions characterised by focal frontal and/or temporal lobe atrophy. Patients develop a range of cognitive and behavioural abnormalities, including prominent difficulties in comprehending and expressing emotions, with significant clinical and social consequences. Here we report a systematic prospective analysis of emotion processing in different input modalities in patients with FTLD. We examined recognition of happiness, sadness, fear and anger in facial expressions, non-verbal vocalisations and music in patients with FTLD and in healthy age matched controls. The FTLD group was significantly impaired in all modalities compared with controls, and this effect was most marked for music. Analysing each emotion separately, recognition of negative emotions was impaired in all three modalities in FTLD, and this effect was most marked for fear and anger. Recognition of happiness was deficient only with music. Our findings support the idea that FTLD causes impaired recognition of emotions across input channels, consistent with a common central representation of emotion concepts. Music may be a sensitive probe of emotional deficits in FTLD, perhaps because it requires a more abstract representation of emotion than do animate stimuli such as faces and voices.
  • Oostdijk, N., Goedertier, W., Van Eynde, F., Boves, L., Martens, J.-P., Moortgat, M., & Baayen, R. H. (2002). Experiences from the Spoken Dutch Corpus Project. In Third international conference on language resources and evaluation (pp. 340-347). Paris: European Language Resources Association.
  • Otake, T., & Cutler, A. (2000). A set of Japanese word cohorts rated for relative familiarity. In B. Yuan, T. Huang, & X. Tang (Eds.), Proceedings of the Sixth International Conference on Spoken Language Processing: Vol. 3 (pp. 766-769). Beijing: China Military Friendship Publish.

    Abstract

    A database is presented of relative familiarity ratings for 24 sets of Japanese words, each set comprising words overlapping in the initial portions. These ratings are useful for the generation of material sets for research in the recognition of spoken words.
  • Otake, T., McQueen, J. M., & Cutler, A. (2010). Competition in the perception of spoken Japanese words. In Proceedings of the 11th Annual Conference of the International Speech Communication Association (Interspeech 2010), Makuhari, Japan (pp. 114-117).

    Abstract

    Japanese listeners detected Japanese words embedded at the end of nonsense sequences (e.g., kaba 'hippopotamus' in gyachikaba). When the final portion of the preceding context together with the initial portion of the word (e.g., here, the sequence chika) was compatible with many lexical competitors, recognition of the embedded word was more difficult than when such a sequence was compatible with few competitors. This clear effect of competition, established here for preceding context in Japanese, joins similar demonstrations, in other languages and for following contexts, to underline that the functional architecture of the human spoken-word recognition system is a universal one.
  • Otake, T., Davis, S. M., & Cutler, A. (1995). Listeners’ representations of within-word structure: A cross-linguistic and cross-dialectal investigation. In J. Pardo (Ed.), Proceedings of EUROSPEECH 95: Vol. 3 (pp. 1703-1706). Madrid: European Speech Communication Association.

    Abstract

    Japanese, British English and American English listeners were presented with spoken words in their native language, and asked to mark on a written transcript of each word the first natural division point in the word. The results showed clear and strong patterns of consensus, indicating that listeners have available to them conscious representations of within-word structure. Orthography did not play a strongly deciding role in the results. The patterns of response were at variance with results from on-line studies of speech segmentation, suggesting that the present task taps not those representations used in on-line listening, but levels of representation which may involve much richer knowledge of word-internal structure.
  • Ozyurek, A. (2007). Processing of multi-modal semantic information: Insights from cross-linguistic comparisons and neurophysiological recordings. In T. Sakamoto (Ed.), Communicating skills of intention (pp. 131-142). Tokyo: Hituzi Syobo Publishing.
  • Ozyurek, A. (2018). Cross-linguistic variation in children’s multimodal utterances. In M. Hickmann, E. Veneziano, & H. Jisa (Eds.), Sources of variation in first language acquisition: Languages, contexts, and learners (pp. 123-138). Amsterdam: Benjamins.

    Abstract

    Our ability to use language is multimodal and requires tight coordination between what is expressed in speech and in gesture, such as pointing or iconic gestures that convey semantic, syntactic and pragmatic information related to speakers’ messages. Interestingly, what is expressed in gesture and how it is coordinated with speech differs in speakers of different languages. This paper discusses recent findings on the development of children’s multimodal expressions taking cross-linguistic variation into account. Although some aspects of speech-gesture development show language-specificity from an early age, it might still take children until nine years of age to exhibit fully adult patterns of cross-linguistic variation. These findings reveal insights about how children coordinate different levels of representations given that their development is constrained by patterns that are specific to their languages.
  • Ozyurek, A. (2000). Differences in spatial conceptualization in Turkish and English discourse: Evidence from both speech and gesture. In A. Goksel, & C. Kerslake (Eds.), Studies on Turkish and Turkic languages (pp. 263-272). Wiesbaden: Harrassowitz.
  • Ozyurek, A., Kita, S., Allen, S., Furman, R., & Brown, A. (2007). How does linguistic framing of events influence co-speech gestures? Insights from crosslinguistic variations and similarities. In K. Liebal, C. Müller, & S. Pika (Eds.), Gestural communication in nonhuman and human primates (pp. 199-218). Amsterdam: Benjamins.

    Abstract

    What are the relations between linguistic encoding and gestural representations of events during online speaking? The few studies that have been conducted on this topic have yielded somewhat incompatible results with regard to whether and how gestural representations of events change with differences in the preferred semantic and syntactic encoding possibilities of languages. Here we provide large-scale semantic, syntactic and temporal analyses of speech-gesture pairs that depict 10 different motion events from 20 Turkish and 20 English speakers. We find that the gestural representations of the same events differ across languages when they are encoded by different syntactic frames (i.e., verb-framed or satellite-framed). However, where there are similarities across languages, such as omission of a certain element of the event in the linguistic encoding, gestural representations also look similar and omit the same content. The results are discussed in terms of what gestures reveal about the influence of language-specific encoding on on-line thinking patterns and the underlying interactions between speech and gesture during the speaking process.
  • Ozyurek, A., & Ozcaliskan, S. (2000). How do children learn to conflate manner and path in their speech and gestures? Differences in English and Turkish. In E. V. Clark (Ed.), The proceedings of the Thirtieth Child Language Research Forum (pp. 77-85). Stanford: CSLI Publications.
  • Ozyurek, A. (2002). Speech-gesture relationship across languages and in second language learners: Implications for spatial thinking and speaking. In B. Skarabela, S. Fish, & A. H. Do (Eds.), Proceedings of the 26th annual Boston University Conference on Language Development (pp. 500-509). Somerville, MA: Cascadilla Press.
  • Ozyurek, A. (2018). Role of gesture in language processing: Toward a unified account for production and comprehension. In S.-A. Rueschemeyer, & M. G. Gaskell (Eds.), Oxford Handbook of Psycholinguistics (2nd ed., pp. 592-607). Oxford: Oxford University Press. doi:10.1093/oxfordhb/9780198786825.013.25.

    Abstract

    Use of language in face-to-face context is multimodal. Production and perception of speech take place in the context of visual articulators such as lips, face, or hand gestures which convey relevant information to what is expressed in speech at different levels of language. While lips convey information at the phonological level, gestures contribute to semantic, pragmatic, and syntactic information, as well as to discourse cohesion. This chapter overviews recent findings showing that speech and gesture (e.g. a drinking gesture as someone says, “Would you like a drink?”) interact during production and comprehension of language at the behavioral, cognitive, and neural levels. Implications of these findings for current psycholinguistic theories and how they can be expanded to consider the multimodal context of language processing are discussed.
  • Ozyurek, A. (2010). The role of iconic gestures in production and comprehension of language: Evidence from brain and behavior. In S. Kopp, & I. Wachsmuth (Eds.), Gesture in embodied communication and human-computer interaction: 8th International Gesture Workshop, GW 2009, Bielefeld, Germany, February 25-27 2009. Revised selected papers (pp. 1-10). Berlin: Springer.
  • Ozyurek, A. (2000). The influence of addressee location on spatial language and representational gestures of direction. In D. McNeill (Ed.), Language and gesture (pp. 64-83). Cambridge: Cambridge University Press.
