Language in the visual modality: Co-speech Gesture and Sign
As humans, our ability to communicate and use language is instantiated not only in the vocal modality but also in the visual modality. The main examples of this are sign languages and the (co-speech) gestures that accompany spoken languages. Sign languages, the natural languages of Deaf communities, use systematic and conventionalized movements of the hands, face, and body for linguistic expression. Co-speech gestures, though non-linguistic, are produced and perceived in tight semantic and temporal integration with speech. Thus, language, in its primary face-to-face context (both phylogenetically and ontogenetically), is a multimodal phenomenon. In fact, when we consider both deaf and hearing individuals, the visual modality appears to be a more common channel of communication than speech. Most research on language, however, has focused on spoken/written language and has rarely considered the visual context in which it is embedded in order to understand our linguistic capacity. This talk gives a brief review of what we know so far about the visual expressive resources of language in both spoken and sign languages and their role in communication and cognition, broadening our scope of language. We will argue, based on these recent findings, that our models of language need to take visual modes of communication into account, and we will provide a unified framework for how the semiotic and expressive resources of the visual modality are recruited for both spoken and sign languages and their consequences for processing, also considering their neural underpinnings.
Publication type
Talk
Publication date
2016