Publications

  • Wilkins, D., Pederson, E., & Levinson, S. C. (1995). Background questions for the "enter"/"exit" research. In D. Wilkins (Ed.), Extensions of space and beyond: manual for field elicitation for the 1995 field season (pp. 14-16). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.3003935.

    Abstract

    How do languages encode different kinds of movement, and what features do people pay attention to when describing motion events? This document outlines topics concerning the investigation of “enter” and “exit” events. It helps contextualise research tasks that examine this domain (see 'Motion Elicitation' and 'Enter/Exit animation') and gives some pointers about what other questions can be explored.
  • Wilkins, D. (1995). Motion elicitation: "moving 'in(to)'" and "moving 'out (of)'". In D. Wilkins (Ed.), Extensions of space and beyond: manual for field elicitation for the 1995 field season (pp. 4-12). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.3003391.

    Abstract

    How do languages encode different kinds of movement, and what features do people pay attention to when describing motion events? This task investigates the expression of “enter” and “exit” activities, that is, events involving motion in(to) and motion out (of) container-like items. The researcher first uses particular stimuli (a ball, a cup, rice, etc.) to elicit descriptions of enter/exit events from one consultant, and then asks another consultant to demonstrate the event based on these descriptions. See also the related entries Enter/Exit Animation and Background Questions for Enter/Exit Research.
  • Wittenburg, P., Trilsbeek, P., & Wittenburg, F. (2014). Corpus archiving and dissemination. In J. Durand, U. Gut, & G. Kristoffersen (Eds.), The Oxford Handbook of Corpus Phonology (pp. 133-149). Oxford: Oxford University Press.
  • Zeshan, U. (2004). Basic English course taught in Indian Sign Language (Ali Yavar Jung National Institute for the Hearing Handicapped, Ed.). Mumbai: National Institute for the Hearing Handicapped.
  • Zhang, Y., Chen, C.-h., & Yu, C. (2019). Mechanisms of cross-situational learning: Behavioral and computational evidence. In Advances in Child Development and Behavior (Vol. 56, pp. 37-63).

    Abstract

    Word learning happens in everyday contexts with many words and many potential referents for those words in view at the same time. It is challenging for young learners to find the correct referent for an unknown word at the moment it is heard. This problem of referential uncertainty has been deemed the crux of early word learning (Quine, 1960). Recent empirical and computational studies have found support for a statistical solution to the problem termed cross-situational learning. Cross-situational learning allows learners to acquire word meanings across multiple exposures, even though each individual exposure is referentially uncertain. Recent empirical research shows that infants, children, and adults rely on cross-situational learning to learn new words (Smith & Yu, 2008; Suanda, Mugwanya, & Namy, 2014; Yu & Smith, 2007). However, researchers have found evidence supporting two very different theoretical accounts of the underlying learning mechanisms: Hypothesis Testing (Gleitman, Cassidy, Nappa, Papafragou, & Trueswell, 2005; Markman, 1992) and Associative Learning (Frank, Goodman, & Tenenbaum, 2009; Yu & Smith, 2007). Hypothesis Testing is generally characterized as a form of learning in which a coherent hypothesis regarding a specific word-object mapping is formed, often in conceptually constrained ways; the hypothesis is then either accepted or rejected in light of additional evidence. Proponents of the Associative Learning framework, in contrast, characterize learning as the aggregation of information over time through implicit associative mechanisms: a learner acquires the meaning of a word when the association between the word and its referent becomes relatively strong. In this chapter, we consider these two psychological theories in the context of cross-situational word-referent learning. By reviewing recent empirical and cognitive modeling studies, we aim to deepen understanding of the underlying word learning mechanisms by examining and comparing the two theoretical accounts.
  • Zuidema, W., & Fitz, H. (2019). Key issues and future directions: Models of human language and speech processing. In P. Hagoort (Ed.), Human language: From genes and brain to behavior (pp. 353-358). Cambridge, MA: MIT Press.
  • Zwitserlood, I. (2014). Meaning at the feature level in sign languages. The case of name signs in Sign Language of the Netherlands (NGT). In R. Kager (Ed.), Where the Principles Fail. A Festschrift for Wim Zonneveld on the occasion of his 64th birthday (pp. 241-251). Utrecht: Utrecht Institute of Linguistics OTS.
  • Zwitserlood, I. (2003). Word formation below and above little x: Evidence from Sign Language of the Netherlands. In Proceedings of SCL 19. Nordlyd: Tromsø University Working Papers on Language and Linguistics (pp. 488-502).

    Abstract

    Although in many respects sign languages have a similar structure to that of spoken languages, the different modalities in which both types of languages are expressed cause differences in structure as well. One of the most striking differences between spoken and sign languages is the influence of the interface between grammar and PF on the surface form of utterances. Spoken language words and phrases are in general characterized by sequential strings of sounds, morphemes, and words, while in sign languages we find that many phonemes, morphemes, and even words are expressed simultaneously. A linguistic model should be able to account for the structures that occur in both spoken and sign languages. In this paper, I will discuss the morphological/morphosyntactic structure of signs in Nederlandse Gebarentaal (Sign Language of the Netherlands, henceforth NGT), with special focus on the components ‘place of articulation’ and ‘handshape’. I will focus on their multiple functions in the grammar of NGT and argue that the framework of Distributed Morphology (DM), which accounts for word formation in spoken languages, is also suited to account for the formation of structures in sign languages. First I will introduce the phonological and morphological structure of NGT signs. Then, I will briefly outline the major characteristics of the DM framework. Finally, I will account for signs that have the same surface form but have a different morphological structure by means of that framework.
