Dingemanse, M., Blythe, J., & Dirksmeyer, T. (2018). Formats for other-initiation of repair across languages: An exercise in pragmatic typology. In I. Nikolaeva (Ed.), Linguistic Typology: Critical Concepts in Linguistics. Vol. 4 (pp. 322-357). London: Routledge.
Abstract: In conversation, people regularly deal with problems of speaking, hearing, and understanding. We report on a cross-linguistic investigation of the conversational structure of other-initiated repair (also known as collaborative repair, feedback, requests for clarification, or grounding sequences). We take stock of formats for initiating repair across languages (comparable to English huh?, who?, y’mean X?, etc.) and find that different languages make available a wide but remarkably similar range of linguistic resources for this function. We exploit the patterned variation as evidence for several underlying concerns addressed by repair initiation: characterising trouble, managing responsibility, and handling knowledge. The concerns do not always point in the same direction and thus provide participants in interaction with alternative principles for selecting one format over possible others. By comparing conversational structures across languages, this paper contributes to pragmatic typology: the typology of systems of language use and the principles that shape them.
Ozyurek, A. (2018). Cross-linguistic variation in children’s multimodal utterances. In M. Hickmann, E. Veneziano, & H. Jisa (Eds.), Sources of variation in first language acquisition: Languages, contexts, and learners (pp. 123-138). Amsterdam: Benjamins.
Abstract: Our ability to use language is multimodal and requires tight coordination between what is expressed in speech and in gesture, such as pointing or iconic gestures that convey semantic, syntactic and pragmatic information related to speakers’ messages. Interestingly, what is expressed in gesture and how it is coordinated with speech differs in speakers of different languages. This paper discusses recent findings on the development of children’s multimodal expressions taking cross-linguistic variation into account. Although some aspects of speech-gesture development show language-specificity from an early age, it might still take children until nine years of age to exhibit fully adult patterns of cross-linguistic variation. These findings reveal insights about how children coordinate different levels of representations given that their development is constrained by patterns that are specific to their languages.
Ozyurek, A. (2018). Role of gesture in language processing: Toward a unified account for production and comprehension. In S.-A. Rueschemeyer, & M. G. Gaskell (Eds.), Oxford Handbook of Psycholinguistics (2nd ed., pp. 592-607). Oxford: Oxford University Press. doi:10.1093/oxfordhb/9780198786825.013.25.
Abstract: Use of language in face-to-face context is multimodal. Production and perception of speech take place in the context of visual articulators such as lips, face, or hand gestures which convey information relevant to what is expressed in speech at different levels of language. While lips convey information at the phonological level, gestures contribute to semantic, pragmatic, and syntactic information, as well as to discourse cohesion. This chapter overviews recent findings showing that speech and gesture (e.g. a drinking gesture as someone says, “Would you like a drink?”) interact during production and comprehension of language at the behavioral, cognitive, and neural levels. Implications of these findings for current psycholinguistic theories and how they can be expanded to consider the multimodal context of language processing are discussed.
Ozyurek, A. (2017). Function and processing of gesture in the context of language. In R. B. Church, M. W. Alibali, & S. D. Kelly (Eds.), Why gesture? How the hands function in speaking, thinking and communicating (pp. 39-58). Amsterdam: John Benjamins Publishing.
Abstract: Most research focuses on the function of gesture independent of its link to the speech it accompanies and the co-expressive functions it has together with speech. This chapter instead approaches gesture in terms of its communicative function in relation to speech, and demonstrates how it is shaped by the linguistic encoding of a speaker’s message. Drawing on cross-linguistic research on iconic/pointing gesture production with adults, children, and bilinguals, it shows that the specific language speakers use modulates the rate and the shape of iconic gestures produced for the same events. The findings challenge claims that seek to understand gesture’s function as serving “thinking only”, both in adults and during development.
Sumer, B., Perniss, P. M., & Ozyurek, A. (2017). A first study on the development of spatial viewpoint in sign language acquisition: The case of Turkish Sign Language. In F. N. Ketrez, A. C. Kuntay, S. Ozcalıskan, & A. Ozyurek (Eds.), Social Environment and Cognition in Language Development: Studies in Honor of Ayhan Aksu-Koc (pp. 223-240). Amsterdam: John Benjamins. doi:10.1075/tilar.21.14sum.
Abstract: The current study examines, for the first time, the viewpoint preferences of signing children in expressing spatial relations that require imposing a viewpoint (left-right, front-behind). We elicited spatial descriptions from deaf children (4–9 years of age) acquiring Turkish Sign Language (TİD) natively from their deaf parents and from adult native signers of TİD. Adults produced these spatial descriptions from their own viewpoint and from that of their addressee depending on whether the objects were located on the lateral or the sagittal axis. TİD-acquiring children, on the other hand, described all spatial configurations from their own viewpoint. Differences were also found between children and adults in the type of linguistic devices and how they are used to express such spatial relations.
Emmorey, K., & Ozyurek, A. (2014). Language in our hands: Neural underpinnings of sign language and co-speech gesture. In M. S. Gazzaniga, & G. R. Mangun (Eds.), The cognitive neurosciences (5th ed., pp. 657-666). Cambridge, Mass: MIT Press.
Ozyurek, A. (2012). Gesture. In R. Pfau, M. Steinbach, & B. Woll (Eds.), Sign language: An international handbook (pp. 626-646). Berlin: Mouton.
Abstract: Gestures are meaningful movements of the body, the hands, and the face during communication, which accompany the production of both spoken and signed utterances. Recent research has shown that gestures are an integral part of language and that they contribute semantic, syntactic, and pragmatic information to the linguistic utterance. Furthermore, they reveal internal representations of the language user during communication in ways that might not be encoded in the verbal part of the utterance. Firstly, this chapter summarizes research on the role of gesture in spoken languages. Subsequently, it gives an overview of how gestural components might manifest themselves in sign languages, that is, in a situation in which both gesture and sign are expressed by the same articulators. Current studies are discussed that address the question of whether gestural components are the same or different in the two language modalities from a semiotic as well as from a cognitive and processing viewpoint. Understanding the role of gesture in both sign and spoken language contributes to our knowledge of the human language faculty as a multimodal communication system.