Azar, Z., Backus, A., & Ozyurek, A. (2016). Pragmatic relativity: Gender and context affect the use of personal pronouns in discourse differentially across languages. In A. Papafragou, D. Grodner, D. Mirman, & J. Trueswell (Eds.), Proceedings of the 38th Annual Meeting of the Cognitive Science Society (CogSci 2016) (pp. 1295-1300). Austin, TX: Cognitive Science Society.
Abstract: Speakers use different referring expressions (REs) in pragmatically appropriate ways to produce coherent narratives. Languages, however, differ in a) whether REs as arguments can be dropped and b) whether personal pronouns encode gender. We examine two languages that differ from each other in these two aspects and ask whether the co-reference context and the gender encoding options affect the use of REs differentially. We elicited narratives from Dutch and Turkish speakers about two types of three-person events, one involving people of the same gender and the other of mixed gender. Speakers re-introduced referents into the discourse with fuller forms (NPs) and maintained them with reduced forms (overt or null pronoun). Turkish speakers used pronouns mainly to mark emphasis, and only Dutch speakers used pronouns differentially across the two types of videos. We argue that the linguistic possibilities available in languages tune speakers into taking different principles into account to produce pragmatically coherent narratives.
Ortega, G., & Ozyurek, A. (2016). Generalisable patterns of gesture distinguish semantic categories in communication without language. In A. Papafragou, D. Grodner, D. Mirman, & J. Trueswell (Eds.), Proceedings of the 38th Annual Meeting of the Cognitive Science Society (CogSci 2016) (pp. 1182-1187). Austin, TX: Cognitive Science Society.
Abstract: There is a long-standing assumption that gestural forms are shaped by a set of modes of representation (acting, representing, drawing, moulding), with each technique expressing the speaker's focus of attention on specific aspects of referents (Müller, 2013). Beyond different taxonomies describing the modes of representation, it remains unclear what factors motivate certain depicting techniques over others. Results from a pantomime generation task show that pantomimes are not entirely idiosyncratic but rather follow generalisable patterns constrained by their semantic category. We show a) that specific modes of representation are preferred for certain objects (acting for manipulable objects and drawing for non-manipulable objects); and b) that the use and ordering of deictics and modes of representation operate in tandem to distinguish between semantically related concepts (e.g., "to drink" vs. "mug"). This study provides yet more evidence that our ability to communicate through silent gesture reveals systematic ways to describe events and objects around us.
Sumer, B., & Ozyurek, A. (2016). İşitme Engelli Çocukların Dil Edinimi [Sign language acquisition by deaf children]. In C. Aydin, T. Goksun, A. Kuntay, & D. Tahiroglu (Eds.), Aklın Çocuk Hali: Zihin Gelişimi Araştırmaları [Research on Cognitive Development] (pp. 365-388). Istanbul: Koc University Press.
Sumer, B., Perniss, P. M., & Ozyurek, A. (2016). Viewpoint preferences in signing children's spatial descriptions. In J. Scott, & D. Waughtal (Eds.), Proceedings of the 40th Annual Boston University Conference on Language Development (BUCLD 40) (pp. 360-374). Boston, MA: Cascadilla Press.
Sumer, B., Zwitserlood, I., Perniss, P., & Ozyurek, A. (2016). Yer Bildiren İfadelerin Türkçe ve Türk İşaret Dili'nde (TİD) Çocuklar Tarafından Edinimi [The acquisition of spatial relations by children in Turkish and Turkish Sign Language (TID)]. In E. Arik (Ed.), Ellerle Konuşmak: Türk İşaret Dili Araştırmaları [Speaking with hands: Studies on Turkish Sign Language] (pp. 157-182). Istanbul: Koç University Press.
Peeters, D., Snijders, T. M., Hagoort, P., & Ozyurek, A. (2015). The role of left inferior frontal gyrus in the integration of pointing gestures and speech. In G. Ferré, & M. Tutton (Eds.), Proceedings of the 4th GESPIN - Gesture & Speech in Interaction Conference. Nantes: Université de Nantes.
Abstract: Comprehension of pointing gestures is fundamental to human communication. However, the neural mechanisms that subserve the integration of pointing gestures and speech in visual contexts in comprehension are unclear. Here we present the results of an fMRI study in which participants watched images of an actor pointing at an object while they listened to her referential speech. The use of a mismatch paradigm revealed that the semantic unification of pointing gesture and speech in a triadic context recruits left inferior frontal gyrus. Complementing previous findings, this suggests that left inferior frontal gyrus semantically integrates information across modalities and semiotic domains.
Schubotz, L., Holler, J., & Ozyurek, A. (2015). Age-related differences in multi-modal audience design: Young, but not old speakers, adapt speech and gestures to their addressee's knowledge. In G. Ferré, & M. Tutton (Eds.), Proceedings of the 4th GESPIN - Gesture & Speech in Interaction Conference (pp. 211-216). Nantes: Université de Nantes.
Abstract: Speakers can adapt their speech and co-speech gestures for addressees. Here, we investigate whether this ability is modulated by age. Younger and older adults participated in a comic narration task in which one participant (the speaker) narrated six short comic stories to another participant (the addressee). One half of each story was known to both participants, the other half only to the speaker. Younger but not older speakers used more words and gestures when narrating novel story content as opposed to known content. We discuss cognitive and pragmatic explanations of these findings and relate them to theories of gesture production.
Emmorey, K., & Ozyurek, A. (2014). Language in our hands: Neural underpinnings of sign language and co-speech gesture. In M. S. Gazzaniga, & G. R. Mangun (Eds.), The cognitive neurosciences (5th ed., pp. 657-666). Cambridge, MA: MIT Press.
Ortega, G., Sumer, B., & Ozyurek, A. (2014). Type of iconicity matters: Bias for action-based signs in sign language acquisition. In P. Bello, M. Guarini, M. McShane, & B. Scassellati (Eds.), Proceedings of the 36th Annual Meeting of the Cognitive Science Society (CogSci 2014) (pp. 1114-1119). Austin, TX: Cognitive Science Society.
Abstract: Early studies investigating sign language acquisition claimed that signs whose structures are motivated by the form of their referent (iconic) are not favoured in language development. However, recent work has shown that the first signs in deaf children's lexicon are iconic. In this paper we go a step further and ask whether different types of iconicity modulate learning sign-referent links. Results from a picture description task indicate that children and adults used signs with two possible variants differentially. While children signing to adults favoured variants that map onto actions associated with a referent (action signs), adults signing to another adult produced variants that map onto objects' perceptual features (perceptual signs). Parents interacting with children used more action variants than signers in adult-adult interactions. These results are in line with claims that language development is tightly linked to motor experience and that iconicity can be a communicative strategy in parental input.
Peeters, D., Azar, Z., & Ozyurek, A. (2014). The interplay between joint attention, physical proximity, and pointing gesture in demonstrative choice. In P. Bello, M. Guarini, M. McShane, & B. Scassellati (Eds.), Proceedings of the 36th Annual Meeting of the Cognitive Science Society (CogSci 2014) (pp. 1144-1149). Austin, TX: Cognitive Science Society.
Sumer, B., Perniss, P., Zwitserlood, I., & Ozyurek, A. (2014). Learning to express "left-right" & "front-behind" in a sign versus spoken language. In P. Bello, M. Guarini, M. McShane, & B. Scassellati (Eds.), Proceedings of the 36th Annual Meeting of the Cognitive Science Society (CogSci 2014) (pp. 1550-1555). Austin, TX: Cognitive Science Society.
Abstract: Developmental studies show that it takes longer for children learning spoken languages to acquire viewpoint-dependent spatial relations (e.g., left-right, front-behind) compared to ones that are not viewpoint-dependent (e.g., in, on, under). The current study investigates how children learn to express viewpoint-dependent relations in a sign language, where depicted spatial relations can be communicated in an analogue manner in the space in front of the body or by using body-anchored signs (e.g., tapping the right and left hand/arm to mean left and right). Our results indicate that the visual-spatial modality might have a facilitating effect on learning to express these spatial relations (especially in the encoding of left-right) in a sign language (i.e., Turkish Sign Language) compared to a spoken language (i.e., Turkish).
Holler, J., Kelly, S., Hagoort, P., & Ozyurek, A. (2012). When gestures catch the eye: The influence of gaze direction on co-speech gesture comprehension in triadic communication. In N. Miyake, D. Peebles, & R. P. Cooper (Eds.), Proceedings of the 34th Annual Meeting of the Cognitive Science Society (CogSci 2012) (pp. 467-472). Austin, TX: Cognitive Science Society. Retrieved from http://mindmodeling.org/cogsci2012/papers/0092/index.html.
Abstract: Co-speech gestures are an integral part of human face-to-face communication, but little is known about how pragmatic factors influence our comprehension of those gestures. The present study investigates how different types of recipients process iconic gestures in a triadic communicative situation. Participants (N = 32) took on the role of one of two recipients in a triad and were presented with 160 video clips of an actor speaking, or speaking and gesturing. Crucially, the actor's eye gaze was manipulated in that she alternated her gaze between the two recipients. Participants thus perceived some messages in the role of addressed recipient and some in the role of unaddressed recipient. In these roles, participants were asked to make judgements concerning the speaker's messages. Their reaction times showed that unaddressed recipients comprehended the speaker's gestures differently from addressees. The findings are discussed with respect to automatic and controlled processes involved in gesture comprehension.
Ozyurek, A. (2012). Gesture. In R. Pfau, M. Steinbach, & B. Woll (Eds.), Sign language: An international handbook (pp. 626-646). Berlin: Mouton.
Abstract: Gestures are meaningful movements of the body, the hands, and the face during communication, which accompany the production of both spoken and signed utterances. Recent research has shown that gestures are an integral part of language and that they contribute semantic, syntactic, and pragmatic information to the linguistic utterance. Furthermore, they reveal internal representations of the language user during communication in ways that might not be encoded in the verbal part of the utterance. Firstly, this chapter summarizes research on the role of gesture in spoken languages. Subsequently, it gives an overview of how gestural components might manifest themselves in sign languages, that is, in a situation in which both gesture and sign are expressed by the same articulators. Current studies are discussed that address the question of whether gestural components are the same or different in the two language modalities from a semiotic as well as from a cognitive and processing viewpoint. Understanding the role of gesture in both sign and spoken language contributes to our knowledge of the human language faculty as a multimodal communication system.
Sumer, B., Zwitserlood, I., Perniss, P. M., & Ozyurek, A. (2012). Development of locative expressions by Turkish deaf and hearing children: Are there modality effects? In A. K. Biller, E. Y. Chung, & A. E. Kimball (Eds.), Proceedings of the 36th Annual Boston University Conference on Language Development (BUCLD 36) (pp. 568-580). Boston: Cascadilla Press.