Asli Ozyurek

Presentations

  • Akamine, S., Dingemanse, M., Meyer, A. S., & Ozyurek, A. (2023). Contextual influences on multimodal alignment in Zoom interaction. Talk presented at the 1st International Multimodal Communication Symposium (MMSYM 2023). Barcelona, Spain. 2023-04-26 - 2023-04-28.
  • Ariño-Bizarro, A., Özyürek, A., & Ibarretxe-Antuñano, I. (2023). What do gestures reveal about the coding of causality in Spanish? Talk presented at the 8th Gesture and Speech in Interaction (GESPIN 2023). Nijmegen, The Netherlands. 2023-09-13 - 2023-09-15.
  • Mamus, E., Speed, L. J., Ortega, G., Majid, A., & Ozyurek, A. (2023). Differences in gestural representations of concepts in blind and sighted individuals. Talk presented at the 1st International Multimodal Communication Symposium (MMSYM 2023). Barcelona, Spain. 2023-04-26 - 2023-04-28.
  • Özyürek, A. (2023). Multimodality as a design feature of human language: Insights from brain, behavior and diversity [keynote]. Talk presented at the 15th Annual Meeting of the Society for the Neurobiology of Language (SNL 2023). Marseille, France. 2023-10-24 - 2023-10-26.
  • Slonimska, A., Özyürek, A., & Capirci, O. (2023). Communicative efficiency in sign languages: The role of the visual modality-specific properties. Talk presented at the 16th International Cognitive Linguistics Conference (ICLC 16). Düsseldorf, Germany. 2023-08-07 - 2023-08-11.
  • Kan, U., Gökgöz, K., Sumer, B., Tamyürek, E., & Özyürek, A. (2022). Emergence of negation in a Turkish homesign system: Insights from the family context. Talk presented at the Joint Conference on Language Evolution (JCoLE). Kanazawa, Japan. 2022-09-05 - 2022-09-08.
  • Karadöller, D. Z., Manhardt, F., Peeters, D., Özyürek, A., & Ortega, G. (2022). Beyond cognates: Both iconicity and gestures pave the way for speakers in learning signs in L2 at first exposure. Talk presented at the 9th International Society for Gesture Studies conference (ISGS 2022). Chicago, IL, USA. 2022-07-12 - 2022-07-15.
  • Karadöller, D. Z., Manhardt, F., Peeters, D., Özyürek, A., & Ortega, G. (2022). Beyond cognates: Both iconicity and gestures pave the way for speakers in learning signs in L2 at first exposure. Talk presented at the International Conference on Sign Language Acquisition (ICSLA 4). online. 2022-06-23 - 2022-06-25.
  • Karadöller, D. Z., Sumer, B., Ünal, E., & Özyürek, A. (2022). Relationship between spatial language experience and spatial memory: Evidence from deaf children with late sign language exposure. Talk presented at the International Conference on Sign Language Acquisition (ICSLA 4). online. 2022-06-23 - 2022-06-25.
  • Karadöller, D. Z., Sumer, B., Ünal, E., & Özyürek, A. (2022). Geç işaret dilini ediniminin uzamsal dil ve bellek ilişkisine etkileri [Effect of late sign language acquisition on the relationship between spatial language and memory]. Talk presented at 3. Gelişim Psikolojisi Sempozyumu [3rd Symposium on Developmental Psychology]. Istanbul, Turkey. 2022-07-08 - 2022-07-09.
  • Mamus, E., Speed, L., Özyürek, A., & Majid, A. (2022). Sensory modality influences the encoding of motion events in speech but not co-speech gestures. Talk presented at the 9th International Society for Gesture Studies conference (ISGS 2022). Chicago, IL, USA. 2022-07-12 - 2022-07-15.
  • Mamus, E., Speed, L., Rissman, L., Majid, A., & Özyürek, A. (2022). Visual experience affects motion event descriptions in speech and gesture. Talk presented at the 9th International Society for Gesture Studies conference (ISGS 2022). Chicago, IL, USA. 2022-07-12 - 2022-07-15.
  • Özyürek, A., Ünal, E., Manhardt, F., & Brouwer, S. (2022). Modality specific differences in speech, gesture and sign modulate visual attention differentially during message preparation. Talk presented at the 9th International Society for Gesture Studies conference (ISGS 2022). Chicago, IL, USA. 2022-07-12 - 2022-07-15.
  • Özyürek, A. (2022). Multimodality as design feature of human language capacity [keynote]. Talk presented at Institute on Multimodality 2022: Minds, Media, Technology. Bielefeld, Germany. 2022-08-28 - 2022-09-06.
  • Sekine, K., & Özyürek, A. (2022). Gestures give a hand to children's understanding of degraded speech. Talk presented at the 9th International Society for Gesture Studies conference (ISGS 2022). Chicago, IL, USA. 2022-07-12 - 2022-07-15.
  • Slonimska, A., Özyürek, A., & Capirci, O. (2022). Simultaneity as an emergent property of sign languages. Talk presented at the Joint Conference on Language Evolution (JCoLE). Kanazawa, Japan. 2022-09-05 - 2022-09-08.
  • Sumer, B., & Özyürek, A. (2022). Language use in deaf children with early-signing versus late-signing deaf parents. Talk presented at the International Conference on Sign Language Acquisition (ICSLA 4). online. 2022-06-23 - 2022-06-25.
  • Karadöller, D. Z., Sumer, B., Ozyurek, A., & Ünal, E. (2021). Producing informative expressions of Left-Right relations: Differences between children and adults in using multimodal encoding strategies. Talk presented at the 15th International Congress for the Study of Child Language (IASCL 2021). online. 2021-07-15 - 2021-07-23.
  • Karadöller, D. Z., Sumer, B., Ünal, E., & Ozyurek, A. (2021). Spatial language use predicts spatial memory of children: Evidence from sign, speech, and speech-plus-gesture. Talk presented at the 43rd Annual Meeting of the Cognitive Science Society (CogSci 2021). online. 2021-07-24 - 2021-07-26.
  • Mamus, E., Speed, L. J., Ozyurek, A., & Majid, A. (2021). Sensory modality of input influences encoding of motion events in speech but not co-speech gestures. Talk presented at the 43rd Annual Meeting of the Cognitive Science Society (CogSci 2021). online. 2021-07-26 - 2021-07-29.
  • Ozyurek, A. (2021). Not only the past but also the future of language is likely to be multimodal [plenary talk]. Talk presented at Protolang 7. (virtual conference). 2021-09-06 - 2021-09-08.
  • Ozyurek, A. (2021). Multimodal approaches to cross-linguistic differences in language structures, processing and acquisition [keynote]. Talk presented at Crosslinguistic Perspectives on Processing and Learning (X-PPL 2021). online. 2021-09-16 - 2021-09-17.
  • Rasenberg, M., Ozyurek, A., Pouw, W., & Dingemanse, M. (2021). The use of multimodal resources for joint meaning-making in conversational repair sequences. Talk presented at the Embodied Cognitive Science (ECogS) Seminar Series. Virtual meeting. 2021-12-10.
  • Rasenberg, M., Ozyurek, A., & Dingemanse, M. (2021). The use of multimodal resources for joint meaning-making in conversational repair sequences. Talk presented at the 5th International Conference on Interactivity, Language & Cognition. Virtual meeting. 2021-09-15 - 2021-09-19.
  • Karadöller, D. Z., Sumer, B., Ünal, E., & Ozyurek, A. (2020). Sign advantage for children: Signing children’s spatial expressions are more informative than speaking children’s speech and gestures combined. Talk presented at the 45th Annual Boston University (Virtual) Conference on Language Development (BUCLD 45). Boston, MA, USA. 2020-11-05 - 2020-11-08.
  • Özer, D., Karadöller, D. Z., Türkmen, I., Ozyurek, A., & Göksun, T. (2020). Informativeness of gestures in speech context guides visual attention during comprehension of spatial language. Talk presented at the 7th Gesture and Speech in Interaction (GESPIN 2020). Stockholm, Sweden. 2020-09-07 - 2020-09-09.
  • Ozyurek, A. (2020). From hands to brains: How does human body talk, think and interact in face-to-face language use? [keynote]. Talk presented at the 22nd ACM International (Virtual) Conference on Multimodal Interaction (ICMI 2020). Utrecht, The Netherlands. 2020-10-25 - 2020-10-29.
  • Manhardt, F., Brouwer, S., & Ozyurek, A. (2019). Cross-modal transfer in bimodal bilinguals: Implications for a multimodal language production model. Talk presented at the Language Division colloquium. Radboud University, Nijmegen, The Netherlands. 2019-03.
  • Manhardt, F., Brouwer, S., & Ozyurek, A. (2019). Sign influences spatial encoding in speech in bimodal bilinguals. Talk presented at the 13th conference of Theoretical Issues in Sign Language Research (TISLR 13). Hamburg, Germany. 2019-09-26 - 2019-09-28.
  • Rasenberg, M., Dingemanse, M., & Ozyurek, A. (2019). Lexical and gestural alignment in collaborative referring. Talk presented at the 6th European and 9th Nordic Symposium on Multimodal Communication (MMSYM). Leuven, Belgium. 2019-09-09 - 2019-09-10.
  • Slonimska, A., Ozyurek, A., & Capirci, O. (2019). The role of iconicity and simultaneity for efficient information encoding in signed languages: A case of Italian Sign Language (LIS). Talk presented at the 17th workshop on Iconicity (ILL17). Lund, Sweden. 2019-05-03.
  • Blokpoel, M., Dingemanse, M., Kachergis, G., Bögels, S., Drijvers, L., Eijk, L., Ernestus, M., De Haas, N., Holler, J., Levinson, S. C., Lui, R., Milivojevic, B., Neville, D., Ozyurek, A., Rasenberg, M., Schriefers, H., Trujillo, J. P., Winner, T., Toni, I., & Van Rooij, I. (2018). Ambiguity helps higher-order pragmatic reasoners communicate. Talk presented at the 14th biennial conference of the German Society for Cognitive Science, GK (KOGWIS 2018). Darmstadt, Germany. 2018-09-03 - 2018-09-06.
  • Capirci, O., Slonimska, A., & Ozyurek, A. (2018). Constructed representation of transitive actions in Italian Sign Language: Agent's or patient's perspective? Talk presented at the Sign-Café workshop. Birmingham, UK. 2018-07-30.
  • Karadöller, D. Z., Sumer, B., & Ozyurek, A. (2018). Delayed sign language acquisition: How does it impact spatial language use? Talk presented at Sign Pop Up Meetings 2018. Nijmegen, The Netherlands. 2018-04-03 - 2018-02-28.
  • Karadöller, D. Z., Sumer, B., & Ozyurek, A. (2018). Delayed sign language acquisition: How does it impact spatial language use? Talk presented at the Center for Language Colloquium Series 2018. Nijmegen, The Netherlands. 2018-03-14 - 2018-02-28.
  • Manhardt, F., Brouwer, S., Sumer, B., & Ozyurek, A. (2018). Iconicity matters: Signers and speakers view spatial relations differently prior to linguistic production. Talk presented at the seventh meeting of the Formal and Experimental Advances in Sign language Theory (FEAST 2018). Venice, Italy. 2018-06-18 - 2018-06-20.
  • Slonimska, A., Ozyurek, A., & Capirci, O. (2018). L'uso della simultaneità per trasmettere messaggi densi di informazioni in lingua dei segni italiana (LIS) [The use of simultaneity to convey information-dense messages in Italian Sign Language (LIS)]. Talk presented at the 4° Convegno Nazionale LIS. Rome, Italy. 2018-11-10.
  • Slonimska, A., Ozyurek, A., & Capirci, O. (2018). Simultaneous information encoding in Italian Sign Language. Talk presented at the fifth Attentive Listener in the Visual World (AttLis 2018). Trondheim, Norway. 2018-08-29 - 2018-08-30.
  • Azar, Z., Backus, A., & Ozyurek, A. (2017). Bidirectional contact effects in proficient heritage speakers: Subject reference in Turkish and Dutch. Talk presented at the 11th International Symposium on Bilingualism (ISB11). University of Limerick, Limerick, Ireland. 2017-06-11 - 2017-06-15.
  • Azar, Z., Backus, A., & Ozyurek, A. (2017). Highly proficient bilinguals maintain language-specific pragmatic constraints on pronouns: Evidence from speech and gesture. Talk presented at the 39th Annual Conference of the Cognitive Science Society (CogSci 2017). London, UK. 2017-07-26 - 2017-07-29.

    Abstract

    The use of subject pronouns by bilingual speakers using both a pro-drop and a non-pro-drop language (e.g. Spanish heritage speakers in the USA) is a well-studied topic in research on cross-linguistic influence in language contact situations. Previous studies looking at bilinguals with different proficiency levels have yielded conflicting results on whether there is transfer from the non-pro-drop patterns to the pro-drop language. Additionally, previous research has focused on speech patterns only. In this paper, we study the two modalities of language, speech and gesture, and ask whether and how they reveal cross-linguistic influence on the use of subject pronouns in discourse. We focus on elicited narratives from heritage speakers of Turkish in the Netherlands, in both Turkish (pro-drop) and Dutch (non-pro-drop), as well as from monolingual control groups. The use of pronouns was not very common in monolingual Turkish narratives and was constrained by the pragmatic contexts, unlike in Dutch. Furthermore, Turkish pronouns were more likely to be accompanied by localized gestures than Dutch pronouns, presumably because pronouns in Turkish are pragmatically marked forms. We did not find any cross-linguistic influence in bilingual speech or gesture patterns, in line with studies (speech only) of highly proficient bilinguals. We therefore suggest that speech and gesture parallel each other not only in monolingual but also in bilingual production. Highly proficient heritage speakers who have been exposed to diverse linguistic and gestural patterns of each language from early on maintain monolingual patterns of pragmatic constraints on the use of pronouns multimodally.
  • Azar, Z., Backus, A., & Ozyurek, A. (2017). Reference tracking in Turkish and Dutch narratives: Effect of co-reference context and gender on the choice of referring expressions. Talk presented at the Grammar and Cognition Colloquium. Radboud University, Nijmegen, The Netherlands. 2017-05-12.
  • Karadöller, D. Z., Sumer, B., & Ozyurek, A. (2017). Effects of delayed sign language exposure on spatial language acquisition. Talk presented at the Spatial Language and Spatial Cognition Workshop. Trondheim, Norway. 2017-12-06 - 2017-12-07.
  • Azar, Z., Backus, A., & Ozyurek, A. (2016). Influence of culture and language on bilinguals’ speech and gesture: Evidence from Turkish-Dutch bilinguals. Talk presented at CLS Lunch Colloquium. Radboud University, Nijmegen, The Netherlands. 2016-04-19.
  • Azar, Z., Backus, A., & Ozyurek, A. (2016). Pragmatic relativity: Gender and context affect the use of personal pronouns in discourse differentially across languages. Talk presented at the 38th Annual Meeting of the Cognitive Science Society (CogSci 2016). Philadelphia, PA, US. 2016-08-11 - 2016-08-13.

    Abstract

    Speakers use differential referring expressions (REs) in pragmatically appropriate ways to produce coherent narratives. Languages, however, differ in a) whether REs as arguments can be dropped and b) whether personal pronouns encode gender. We examine two languages that differ from each other in these two aspects and ask whether the co-reference context and the gender encoding options affect the use of REs differentially. We elicited narratives from Dutch and Turkish speakers about two types of three-person events, one including people of the same gender and the other of mixed gender. Speakers re-introduced referents into the discourse with fuller forms (NPs) and maintained them with reduced forms (overt or null pronouns). Turkish speakers used pronouns mainly to mark emphasis, and only Dutch speakers used pronouns differentially across the two types of videos. We argue that the linguistic possibilities available in each language tune speakers into taking different principles into account to produce pragmatically coherent narratives.
  • Drijvers, L., Ozyurek, A., & Jensen, O. (2016). Gestural enhancement of degraded speech comprehension engages the language network, motor and visual cortex as reflected by a decrease in the alpha and beta band. Talk presented at Sensorimotor Speech Processing Symposium. London, UK. 2016-08-16.
  • Drijvers, L., Ozyurek, A., & Jensen, O. (2016). Gestural enhancement of degraded speech comprehension engages the language network, motor cortex and visual cortex. Talk presented at the 2nd Workshop on Psycholinguistic Approaches to Speech Recognition in Adverse Conditions (PASRAC). Nijmegen, The Netherlands. 2016-10-31 - 2016-11-01.
  • Drijvers, L., Ozyurek, A., & Jensen, O. (2016). Oscillatory and temporal dynamics show engagement of the language network, motor system and visual cortex during gestural enhancement of degraded speech. Talk presented at the Donders Discussions 2016. Nijmegen, The Netherlands. 2016-11-23 - 2016-11-24.
  • Drijvers, L., & Ozyurek, A. (2016). Visible speech enhanced: What do iconic gestures and lip movements contribute to degraded speech comprehension? Talk presented at the 7th Conference of the International Society for Gesture Studies (ISGS7). Paris, France. 2016-07-18 - 2016-07-22.

    Abstract

    Natural, face-to-face communication consists of an audiovisual binding that integrates speech and visual information, such as iconic co-speech gestures and lip movements. Especially in adverse listening conditions, such as in noise, this visual information can enhance speech comprehension. However, the contributions of lip movements and iconic gestures to understanding speech in noise have mostly been studied separately. Here, we investigated the contribution of iconic gestures and lip movements to degraded speech comprehension in a joint context. In a free-recall task, participants watched short videos of an actress uttering an action verb. The verb could be presented in clear speech, severely degraded speech (2-band noise-vocoding) or moderately degraded speech (6-band noise-vocoding), and participants could view the actress with her lips blocked, with her lips visible, or with her lips visible and making an iconic co-speech gesture. Additionally, we presented these clips without audio, with just the lip movements present or with lip movements and gestures present, to investigate how much information listeners could get from visual input alone. Our results reveal that when listeners perceive degraded speech in a visual context, they benefit more from gestural information than from lip movements alone. This benefit is larger at moderate noise levels, where auditory cues are still moderately reliable, than at severe noise levels, where auditory cues are no longer reliable. As a result, listeners are only able to benefit from the additive effect of 'double' multimodal enhancement of iconic gestures and lip movements when there are enough auditory cues present to map lip movements to the phonological information in the speech signal.
  • Ortega, G., & Ozyurek, A. (2016). Generalisable patterns of gesture distinguish semantic categories in communication without language: Evidence from pantomime. Talk presented at the 7th Conference of the International Society for Gesture Studies (ISGS7). Paris, France. 2016-07-18 - 2016-07-22.

    Abstract

    There is a long-standing assumption that gestural forms are shaped by a set of modes of representation (acting, representing, drawing, moulding), with each technique expressing speakers' focus of attention on specific aspects of a referent (Müller, 2013). However, only recently has the relationship between gestural forms and mode of representation been linked to 1) the semantic categories they represent (i.e., objects, actions) and 2) the affordances of the referents. Here we investigate these relations when speakers are asked to communicate about different types of referents in pantomime. This mode of communication has revealed generalisable ordering of constituents of events across speakers of different languages (Goldin-Meadow, So, Özyürek, & Mylander, 2008), but it remains an empirical question whether it also draws on systematic patterns to distinguish different semantic categories. Twenty speakers of Dutch participated in a pantomime generation task. They had to produce a gesture that conveyed the same meaning as a word on a computer screen, without speaking. Participants saw 10 words from three semantic categories: actions with objects (e.g., to drink), manipulable objects (e.g., mug), and non-manipulable objects (e.g., building). Pantomimes were categorised according to their mode of representation and the use of deictics (pointing, showing or eye gaze). Further, the ordering of different representations was noted when more than one gesture was produced. Actions with objects elicited mainly individual gestures (mean: 1.1, range: 1-2), while manipulable objects (mean: 1.8, range: 1-4) and non-manipulable objects (mean: 1.6, range: 1-4) primarily elicited more than one pantomime, as sequences of interrelated gestures. Actions with objects were mostly represented with one gesture, through re-enactment of the action (e.g., raising a closed fist to the mouth for 'to drink'), while manipulable objects were mostly represented through an acting gesture followed by a deictic (e.g., raising a closed fist to the mouth and then pointing at the fist). Non-manipulable objects, however, were represented through a drawing gesture followed by an acting one (e.g., tracing a rectangle and then pretending to walk through a door). In the absence of language, the form of gestures is constrained by objects' affordances (i.e., manipulable or not) and the communicative need to discriminate across semantic categories (i.e., objects or actions). Gestures adopt an acting or drawing mode of representation depending on the affordances of the referent, which echoes patterns observed in the forms of co-speech gestures (Masson-Carro, Goudbeek, & Krahmer, 2015). We also show for the first time that the use and ordering of deictics and the different modes of representation operate in tandem to distinguish between semantically related concepts (e.g., to drink and mug). When forced to communicate without language, participants show consistent patterns in their strategies to distinguish different semantic categories.
  • Ozyurek, A., & Ortega, G. (2016). Language in the visual modality: Co-speech Gesture and Sign. Talk presented at the Language in Interaction Summerschool on Human Language: From Genes and Brains to Behavior. Berg en Dal, The Netherlands. 2016-07-03 - 2016-07-14.

    Abstract

    As humans, our ability to communicate and use language is instantiated not only in the vocal modality but also in the visual modality. The main examples of this are sign languages and (co-speech) gestures used in spoken languages. Sign languages, the natural languages of Deaf communities, use systematic and conventionalized movements of the hands, face, and body for linguistic expression. Co-speech gestures, though non-linguistic, are produced and perceived in tight semantic and temporal integration with speech. Thus, language, in its primary face-to-face context (both phylogenetically and ontogenetically), is a multimodal phenomenon. In fact, the visual modality seems to be a more common way of communication than speech when we consider both deaf and hearing individuals. Most research on language, however, has focused on spoken/written language and has rarely considered the visual context it is embedded in to understand our linguistic capacity. This talk gives a brief review of what we know so far about what the visual expressive resources of language look like in both spoken and sign languages and their role in communication and cognition, broadening our scope of language. We will argue, based on these recent findings, that our models of language need to take visual modes of communication into account and provide a unified framework for how semiotic and expressive resources of the visual modality are recruited for both spoken and sign languages, with consequences for processing, also considering their neural underpinnings.
  • Slonimska, A., Ozyurek, A., & Campisi, E. (2016). The role of addressee’s age in use of ostensive signals to gestures and their effectiveness. Talk presented at the 3rd Attentive Listener in the Visual World (AttLis 2016) workshop. Potsdam, Germany. 2016-03-10 - 2016-03-11.
  • Slonimska, A., Ozyurek, A., & Campisi, E. (2016). Markers of communicative intent through ostensive signals and their effectiveness in multimodal demonstrations to adults and children. Talk presented at the 7th Conference of the International Society for Gesture Studies (ISGS7). Paris, France. 2016-07-18 - 2016-07-22.

    Abstract

    In face-to-face interaction people adapt their multimodal message to fit their addressees' informational needs. In doing so they are likely to mark their communicative intent by accentuating the relevant information provided by both speech and gesture. In the present study we were interested in the strategies by which speakers highlight their gestures (by means of ostensive signals like eye gaze and/or ostensive speech) for children in comparison to adults in a multimodal demonstration task. Moreover, we investigated the effectiveness of the ostensive signals to gestures and asked whether addressees shift their attention to the gestures highlighted by the speakers through different ostensive signals. Previous research has identified some of these ostensive signals (Streeck 1993; Gullberg & Kita 2009), but has not investigated how often they occur and whether they are designed for and attended to by different types of addressees. 48 Italians, born and raised in Sicily, participated in the study. 16 Italian adult participants (12 female, 7 male, age range 20-30) were assigned the role of the speakers, while the other 16 adults and 16 children (age range 9-10) took the role of the addressees. The task of the speaker was to describe the rules of a children's game, which consists of using wooden blocks of different shapes to make a path without gaps. Speakers' descriptions were coded for words and representational gestures, as well as for three types of ostensive signals highlighting the gestures: 1) eye gaze, 2) ostensive speech, and 3) a combination of eye gaze and ostensive speech to gesture. Addressees' eye gaze to speakers' gestures was coded, and we annotated whether gaze was directed at highlighted or non-highlighted gestures. Overall, eye gaze was the most common signal, followed by ostensive speech and multimodal signals. We found that speakers were likely to highlight more gestures with children than with adults when all three types of signals were considered together. However, when treated separately, results revealed that speakers used more combined ostensive signals for children than for adults, but they were also likely to use more eye gaze towards their gestures with other adults than with children. Furthermore, both groups of addressees gazed more at gestures highlighted by the speakers than at gestures that were not highlighted at all. The present study provides the first quantitative insights into how speakers highlight their gestures and whether the age of the addressee influences the effectiveness of the ostensive signals. Speakers mark the communicative relevance of their gestures with different types of ostensive signals and by taking different types of addressees into account. In turn, addressees, not only adults but also children, take advantage of the signals provided to these gestures.
  • Azar, Z., Backus, A., & Ozyurek, A. (2015). Multimodal reference tracking in monolingual and bilingual discourse. Talk presented at the Nijmegen-Tilburg Multimodality Workshop. Tilburg, The Netherlands. 2015-10-22.
  • Drijvers, L., & Ozyurek, A. (2015). Visible speech enhanced: What do gestures and lips contribute to speech comprehension in noise? Talk presented at the Nijmegen-Tilburg Multi-modality workshop. Tilburg, The Netherlands. 2015-10-22.
  • Ozyurek, A. (2015). The role of gesture in language evolution: Beyond the gesture-first hypotheses. Talk presented at the SMART Cognitive Science: the Amsterdam Conference – Workshop, Evolution of Language: The co-evolution of biology and culture. Amsterdam, the Netherlands. 2015-03-25 - 2015-03-26.

    Abstract

    It has been a popular view to propose that gesture preceded and paved the way for the evolution of (spoken) language (e.g., Corballis, Tomasello, Arbib). However, these views do not take into account recent findings on the neural and cognitive infrastructure of how modern humans (adults and children) use gestures in various communicative contexts. Based on this current knowledge, I will revisit gesture-first theories of language evolution and discuss alternatives more compatible with the multimodal nature of modern human language.
  • Peeters, D., Snijders, T. M., Hagoort, P., & Ozyurek, A. (2015). The role of left inferior frontal gyrus in the integration of pointing gestures and speech. Talk presented at the 4th GESPIN - Gesture & Speech in Interaction Conference. Nantes, France. 2015-09-02 - 2015-09-04.
  • Schubotz, L., Holler, J., & Ozyurek, A. (2015). Age-related differences in multi-modal audience design: Young, but not old speakers, adapt speech and gestures to their addressee's knowledge. Talk presented at the 4th GESPIN - Gesture & Speech in Interaction Conference. Nantes, France. 2015-09-02 - 2015-09-05.
  • Schubotz, L., Drijvers, L., Holler, J., & Ozyurek, A. (2015). The cocktail party effect revisited in older and younger adults: When do iconic co-speech gestures help?. Talk presented at Donders Discussions 2015. Nijmegen, The Netherlands. 2015-11-05.
  • Slonimska, A., Ozyurek, A., & Campisi, E. (2015). Markers of communicative relevance of gesture. Talk presented at the Nijmegen-Tilburg Multi-modality workshop. Tilburg, The Netherlands. 2015-10-24.
  • Slonimska, A., Ozyurek, A., & Campisi, E. (2015). Ostensive signals: Markers of communicative relevance of gesture during demonstration to adults and children. Talk presented at the 4th GESPIN - Gesture & Speech in Interaction Conference. Nantes, France. 2015-09-02 - 2015-09-04.
  • Azar, Z., Backus, A., & Ozyurek, A. (2014). Discourse management: Reference tracking of subject referents in speech and gesture in Turkish narratives. Talk presented at the 17th International Conference on Turkish Linguistics. Rouen, France. 2014-09-03 - 2014-09-05.
  • Kelly, S., Healey, M., Ozyurek, A., & Holler, J. (2014). The integration of gestures and actions with speech: Should we welcome the empty-handed to language comprehension? Talk presented at the 6th Conference of the International Society for Gesture Studies (ISGS 6). San Diego, CA, USA. 2014-07-08 - 2014-07-11.

    Abstract

    Background: Gesture and speech are theorized to form a single integrated system of meaning during language production (McNeill, 1992), and evidence is mounting that this integration applies to language comprehension as well (Kelly, Ozyurek & Maris, 2010). However, it is unknown whether gesture is uniquely integrated with speech or is processed like any other manual action. To explore this issue, we compared the extent to which speech is integrated with hand gestures versus actual actions on objects during comprehension. Method: The present study employed a priming paradigm in two experiments. In Experiment 1, subjects watched multimodal videos that presented auditory (words) and visual (gestures and actions on objects) information. Half the subjects related the audio information to a written prime presented before the video, and the other half related the visual information to the written prime. For half of the multimodal video stimuli, the audio and visual information was congruent, and for the other half, incongruent. The task was to press one button if the written prime was the same as the visual (31 subjects) or audio (31 subjects) information in the target video or another button if different. RT and accuracy were recorded. Results: In Experiment 2, we reversed the priming sequence with a different set of 18 subjects. Now the video became the prime and the written verb followed as the target, but the task was the same with one difference: to indicate whether the written target was related or unrelated to only the audio information (speech) in the preceding video prime. ERPs were recorded to the written targets. In Experiment 1, subjects in both the audio and visual target tasks were less accurate when processing stimuli in which gestures and actions were incongruent versus congruent with speech, F(1, 60) = 22.90, p < .001, but this effect was less prominent for speech-action than for speech-gesture stimuli. However, subjects were more accurate when identifying actions versus gestures, F(1, 60) = 8.03, p = .006. In Experiment 2, there were two early ERP effects. When primed with gesture, incongruent primes produced a larger P1, t(17) = 3.75, p = 0.002, and P2, t(17) = 3.02, p = 0.008, to the target words than the congruent condition in the grand-averaged ERPs (reflecting early perceptual and attentional processes). However, there were no significant differences between congruent and incongruent conditions when primed with action. Discussion: The incongruency effect replicates and extends previous work by Kelly et al. (2010) by showing not only a bi-directional influence of gesture and speech, but also of action and speech. In addition, the results show that while actions are easier to process than gestures (Exp. 1), gestures may be more tightly tied to the processing of accompanying speech (Exps. 1 & 2). These results suggest that even though gestures are perceptually less informative than actions, they may be treated as communicatively more informative in relation to the accompanying speech. In this way, the two types of visual information might have different status in language comprehension.
  • Peeters, D., Chu, M., Holler, J., Hagoort, P., & Ozyurek, A. (2014). Behavioral and neurophysiological correlates of communicative intent in the production of pointing gestures. Talk presented at the 6th Conference of the International Society for Gesture Studies (ISGS6). San Diego, CA, USA. 2014-07-08 - 2014-07-11.
  • Peeters, D., Azar, Z., & Ozyurek, A. (2014). The interplay between joint attention, physical proximity, and pointing gesture in demonstrative choice. Talk presented at the 36th Annual Meeting of the Cognitive Science Society (CogSci2014). Québec City, Canada. 2014-07-23 - 2014-07-26.
  • Holler, J., Schubotz, L., Kelly, S., Hagoort, P., & Ozyurek, A. (2013). Multi-modal language comprehension as a joint activity: The influence of eye gaze on the processing of speech and co-speech gesture in multi-party communication. Talk presented at the 5th Joint Action Meeting. Berlin. 2013-07-26 - 2013-07-29.

    Abstract

    Traditionally, language comprehension has been studied as a solitary and unimodal activity. Here, we investigate language comprehension as a joint activity, i.e., in a dynamic social context involving multiple participants in different roles with different perspectives, while taking into account the multimodal nature of face-to-face communication. We simulated a triadic communication context involving a speaker alternating her gaze between two different recipients, conveying information not only via speech but gesture as well. Participants thus viewed video-recorded speech-only or speech+gesture utterances referencing objects (e.g., "he likes the laptop" + TYPING-ON-LAPTOP gesture) when being addressed (direct gaze) or unaddressed (averted gaze). The video clips were followed by two object images (laptop, towel). Participants' task was to choose the object that matched the speaker's message (i.e., laptop). Unaddressed recipients responded significantly slower than addressees for speech-only utterances. However, perceiving the same speech accompanied by gestures sped them up to levels identical to that of addressees. Thus, when speech processing suffers due to being unaddressed, gestures become more prominent and boost comprehension of a speaker's spoken message. Our findings illuminate how participants process multimodal language and how this process is influenced by eye gaze, an important social cue facilitating coordination in the joint activity of conversation.
  • Holler, J., Kelly, S., Hagoort, P., Schubotz, L., & Ozyurek, A. (2013). Speakers' social eye gaze modulates addressed and unaddressed recipients' comprehension of gesture and speech in multi-party communication. Talk presented at the 5th Biennial Conference of Experimental Pragmatics (XPRAG 2013). Utrecht, The Netherlands. 2013-09-04 - 2013-09-06.
  • Ortega, G., & Ozyurek, A. (2013). Gesture-sign interface in hearing non-signers' first exposure to sign. Talk presented at the Tilburg Gesture Research Meeting [TiGeR 2013]. Tilburg, the Netherlands. 2013-06-19 - 2013-06-21.

    Abstract

    Natural sign languages and gestures are complex communicative systems that allow the incorporation of features of a referent into their structure. They differ, however, in that signs are more conventionalised because they consist of meaningless phonological parameters. There is some evidence that, despite non-signers finding iconic signs more memorable, they have more difficulty articulating their exact phonological components. In the present study, hearing non-signers took part in a sign repetition task in which they had to imitate as accurately as possible a set of iconic and arbitrary signs. Their renditions showed that iconic signs were articulated significantly less accurately than arbitrary signs. Participants were recalled six months later to take part in a sign generation task. In this task, participants were shown the English translation of the iconic signs they had imitated six months prior. For each word, participants were asked to generate a sign (i.e., an iconic gesture). The handshapes produced in the sign repetition and sign generation tasks were compared to detect instances in which both renditions presented the same configuration. There was a significant correlation between articulation accuracy in the sign repetition task and handshape overlap. These results suggest some form of gestural interference in the production of iconic signs by hearing non-signers. We also suggest that in some instances non-signers may deploy their own conventionalised gesture when producing some iconic signs. These findings are interpreted as evidence that non-signers process iconic signs as gestures and that, in production, only when sign and gesture have overlapping features will they be capable of producing the phonological components of signs accurately.
  • Peeters, D., Chu, M., Holler, J., Ozyurek, A., & Hagoort, P. (2013). Getting to the point: The influence of communicative intent on the form of pointing gestures. Talk presented at the 35th Annual Meeting of the Cognitive Science Society (CogSci 2013). Berlin, Germany. 2013-08-01 - 2013-08-03.
  • Holler, J., Kelly, S., Hagoort, P., & Ozyurek, A. (2012). The influence of gaze direction on the comprehension of speech and gesture in triadic communication. Talk presented at the 18th Annual Conference on Architectures and Mechanisms for Language Processing (AMLaP 2012). Riva del Garda, Italy. 2012-09-06 - 2012-09-08.

    Abstract

    Human face-to-face communication is a multi-modal activity. Recent research has shown that, during comprehension, recipients integrate information from speech with that contained in co-speech gestures (e.g., Kelly et al., 2010). The current studies take this research one step further by investigating the influence of another modality, namely eye gaze, on speech and gesture comprehension, to advance our understanding of language processing in more situated contexts. In spite of the large body of literature on processing of eye gaze, very few studies have investigated its processing in the context of communication (but see, e.g., Staudte & Crocker, 2011 for an exception). In two studies we simulated a triadic communication context in which a speaker alternated their gaze between our participant and another (alleged) participant. Participants thus viewed speech-only or speech + gesture utterances either in the role of addressee (direct gaze) or in the role of unaddressed recipient (averted gaze). In Study 1, participants (N = 32) viewed video-clips of a speaker producing speech-only (e.g. “she trained the horse”) or speech+gesture utterances conveying complementary information (e.g. “she trained the horse”+WHIPPING gesture). Participants were asked to judge whether a word displayed on screen after each video-clip matched what the speaker said or not. In half of the cases, the word matched a previously uttered word, requiring a “yes” answer. In all other cases, the word matched the meaning of the gesture the actor had performed, thus requiring a ‘no’ answer.
  • Holler, J., Kelly, S., Hagoort, P., & Ozyurek, A. (2012). When gestures catch the eye: The influence of gaze direction on co-speech gesture comprehension in triadic communication. Talk presented at the 5th Conference of the International Society for Gesture Studies (ISGS 5). Lund, Sweden. 2012-07-24 - 2012-07-27.
  • Holler, J., Kelly, S., Hagoort, P., & Ozyurek, A. (2012). When gestures catch the eye: The influence of gaze direction on co-speech gesture comprehension in triadic communication. Talk presented at the 34th Annual Meeting of the Cognitive Science Society (CogSci 2012). Sapporo, Japan. 2012-08-01 - 2012-08-04.
  • Kelly, S., Ozyurek, A., Healey, M., & Holler, J. (2012). The communicative influence of gesture and action during speech comprehension: Gestures have the upper hand. Talk presented at the Acoustics 2012 Hong Kong Conference and Exhibition. Hong Kong. 2012-05-13 - 2012-05-18.
  • Peeters, D., & Ozyurek, A. (2012). The role of contextual factors in the use of demonstratives: Differences between Turkish and Dutch. Talk presented at the 6th Lodz Symposium: New Developments in Linguistic Pragmatics. Lodz, Poland. 2012-05-26 - 2012-05-28.

    Abstract

    An important feature of language is that it enables human beings to refer to entities, actions and events in the external world. In everyday interaction, one can refer to concrete entities in the extra-linguistic physical environment of a conversation by using demonstratives such as this and that. Traditionally, the choice of which demonstrative to use has been explained in terms of the distance of the referent [1]. In contrast, recent observational studies in different languages have suggested that factors such as joint attention also play an important role in demonstrative choice [2][3]. These claims have never been tested in a controlled setting and across different languages. Therefore, we tested demonstrative choice in a controlled elicitation task in two languages that previously have only been studied observationally: Turkish and Dutch. In our study, twenty-nine Turkish and twenty-four Dutch participants were presented with pictures including a speaker, an addressee and an object (the referent). They were asked which demonstrative they would use in the depicted situations. Besides the distance of the referent, we manipulated the addressee's focus of visual attention, the presence of a pointing gesture, and the sentence type. A repeated measures analysis of variance showed that, in addition to the distance of the referent, the focus of attention of the addressee on the referent and the type of sentence in which a demonstrative was used influenced demonstrative choice in Turkish. In Dutch, only the distance of the referent and the sentence type influenced demonstrative choice. Our cross-linguistic findings show that in different languages, people take into account both similar and different aspects of triadic situations to select a demonstrative. These findings reject descriptions of demonstrative systems that explain demonstrative choice in terms of one single variable, such as distance. The controlled study of referring acts in triadic situations is a valuable extension to observational research, in that it gives us the possibility to look more specifically into the interplay between language, attention, and other contextual factors influencing how people refer to entities in the world. References: [1] Levinson, S. C. (1983). Pragmatics. Cambridge: Cambridge University Press. [2] Diessel, H. (2006). Demonstratives, joint attention and the emergence of grammar. Cognitive Linguistics 17:4. 463–89. [3] Küntay, A. C. & Özyürek, A. (2006). Learning to use demonstratives in conversation: what do language specific strategies in Turkish reveal? Journal of Child Language 33. 303–320.
  • Ozyurek, A. (2011). Language in our hands: The role of the body in language, cognition and communication [Inaugural lecture]. Talk presented at The Radboud University Nijmegen. Nijmegen, The Netherlands. 2011-05-26.
  • Nyst, V., De Vos, C., Perniss, P. M., & Ozyurek, A. (2007). The typology of space in sign languages: Developing a descriptive format for cross-linguistic comparison. Talk presented at Cross-Linguistic Research on Sign Languages 2. Max Planck Institute for Psycholinguistics, Nijmegen. 2007-04-13.
