Asli Ozyurek

Presentations

  • Akamine, S., Dingemanse, M., Meyer, A. S., & Ozyurek, A. (2023). Contextual influences on multimodal alignment in Zoom interaction. Talk presented at the 1st International Multimodal Communication Symposium (MMSYM 2023). Barcelona, Spain. 2023-04-26 - 2023-04-28.
  • Ariño-Bizarro, A., Özyürek, A., & Ibarretxe-Antuñano, I. (2023). What do gestures reveal about the coding of causality in Spanish?. Talk presented at the 8th Gesture and Speech in Interaction (GESPIN 2023). Nijmegen, The Netherlands. 2023-09-13 - 2023-09-15.
  • Campisi, E., Slonimska, A., & Özyürek, A. (2023). Cross-linguistic differences in the use of iconicity as a communicative strategy. Poster presented at the 8th Gesture and Speech in Interaction (GESPIN 2023), Nijmegen, The Netherlands.
  • Chen, X., Hu, J., Huettig, F., & Özyürek, A. (2023). The effect of iconic gestures on linguistic prediction in Mandarin Chinese: a visual world paradigm study. Poster presented at the 29th Architectures and Mechanisms for Language Processing Conference (AMLaP 2023), Donostia–San Sebastián, Spain.
  • Long, M., Özyürek, A., & Rubio-Fernández, P. (2023). Psychological proximity guides multimodal communication. Poster presented at the 8th Gesture and Speech in Interaction (GESPIN 2023), Nijmegen, The Netherlands.
  • Long, M., Özyürek, A., & Rubio-Fernandez, P. (2023). The role of pointing and joint attention on demonstrative use in Turkish. Poster presented at the 1st International Multimodal Communication Symposium (MMSYM 2023), Barcelona, Spain.
  • Mamus, E., Speed, L. J., Ortega, G., Majid, A., & Ozyurek, A. (2023). Differences in gestural representations of concepts in blind and sighted individuals. Talk presented at the 1st International Multimodal Communication Symposium (MMSYM 2023). Barcelona, Spain. 2023-04-26 - 2023-04-28.
  • Mamus, E., Speed, L. J., Ortega, G., Majid, A., & Özyürek, A. (2023). Visual experience influences silent gesture productions across semantic categories. Poster presented at the 8th Gesture and Speech in Interaction (GESPIN 2023), Nijmegen, The Netherlands.
  • Mamus, E., Speed, L. J., Ortega, G., Majid, A., & Özyürek, A. (2023). Lack of visual experience influences silent gesture productions across semantic categories. Poster presented at the 45th Annual Meeting of the Cognitive Science Society (CogSci 2023), Sydney, Australia.
  • Mamus, E., Speed, L. J., Ortega, G., Majid, A., & Özyürek, A. (2023). Gestural representations of semantic concepts differ between blind and sighted individuals. Poster presented at the 29th Architectures and Mechanisms for Language Processing Conference (AMLaP 2023), Donostia–San Sebastián, Spain.
  • Özyürek, A. (2023). Multimodality as a design feature of human language: Insights from brain, behavior and diversity [keynote]. Talk presented at the 15th Annual Meeting of the Society for the Neurobiology of Language (SNL 2023). Marseille, France. 2023-10-24 - 2023-10-26.
  • Slonimska, A., Özyürek, A., & Capirci, O. (2023). Communicative efficiency in sign languages: The role of the visual modality-specific properties. Talk presented at the 16th International Cognitive Linguistics Conference (ICLC 16). Düsseldorf, Germany. 2023-08-07 - 2023-08-11.
  • Kan, U., Gökgöz, K., Sumer, B., Tamyürek, E., & Özyürek, A. (2022). Emergence of negation in a Turkish homesign system: Insights from the family context. Talk presented at the Joint Conference on Language Evolution (JCoLE). Kanazawa, Japan. 2022-09-05 - 2022-09-08.
  • Karadöller, D. Z., Manhardt, F., Peeters, D., Özyürek, A., & Ortega, G. (2022). Beyond cognates: Both iconicity and gestures pave the way for speakers in learning signs in L2 at first exposure. Talk presented at the 9th International Society for Gesture Studies conference (ISGS 2022). Chicago, IL, USA. 2022-07-12 - 2022-07-15.
  • Karadöller, D. Z., Manhardt, F., Peeters, D., Özyürek, A., & Ortega, G. (2022). Beyond cognates: Both iconicity and gestures pave the way for speakers in learning signs in L2 at first exposure. Talk presented at the International Conference on Sign Language Acquisition (ICSLA 4). online. 2022-06-23 - 2022-06-25.
  • Karadöller, D. Z., Sumer, B., Ünal, E., & Özyürek, A. (2022). Relationship between spatial language experience and spatial memory: Evidence from deaf children with late sign language exposure. Talk presented at the International Conference on Sign Language Acquisition (ICSLA 4). online. 2022-06-23 - 2022-06-25.
  • Karadöller, D. Z., Sumer, B., Ünal, E., & Özyürek, A. (2022). Geç işaret dilini ediniminin uzamsal dil ve bellek ilişkisine etkileri [Effect of late sign language acquisition on the relationship between spatial language and memory]. Talk presented at 3. Gelişim Psikolojisi Sempozyumu [3rd Symposium on Developmental Psychology]. Istanbul, Turkey. 2022-07-08 - 2022-07-09.
  • Kırbaşoğlu, K., Ünal, E., Karadöller, D. Z., Sumer, B., & Özyürek, A. (2022). Konuşma ve jestlerde uzamsal ifadelerin gelişimi [Development of spatial expressions in speech and gesture]. Poster presented at 3. Gelişim Psikolojisi Sempozyumu [3rd Symposium on Developmental Psychology], Istanbul, Turkey.
  • Mamus, E., Speed, L., Özyürek, A., & Majid, A. (2022). Sensory modality influences the encoding of motion events in speech but not co-speech gestures. Talk presented at the 9th International Society for Gesture Studies conference (ISGS 2022). Chicago, IL, USA. 2022-07-12 - 2022-07-15.
  • Mamus, E., Speed, L., Rissman, L., Majid, A., & Özyürek, A. (2022). Visual experience affects motion event descriptions in speech and gesture. Talk presented at the 9th International Society for Gesture Studies conference (ISGS 2022). Chicago, IL, USA. 2022-07-12 - 2022-07-15.
  • Özyürek, A., Ünal, E., Manhardt, F., & Brouwer, S. (2022). Modality specific differences in speech, gesture and sign modulate visual attention differentially during message preparation. Talk presented at the 9th International Society for Gesture Studies conference (ISGS 2022). Chicago, IL, USA. 2022-07-12 - 2022-07-15.
  • Özyürek, A. (2022). Multimodality as design feature of human language capacity [keynote]. Talk presented at Institute on Multimodality 2022: Minds, Media, Technology. Bielefeld, Germany. 2022-08-28 - 2022-09-06.
  • Sekine, K., & Özyürek, A. (2022). Gestures give a hand to children's understanding of degraded speech. Talk presented at the 9th International Society for Gesture Studies conference (ISGS 2022). Chicago, IL, USA. 2022-07-12 - 2022-07-15.
  • Slonimska, A., Özyürek, A., & Capirci, O. (2022). Simultaneity as an emergent property of sign languages. Talk presented at the Joint Conference on Language Evolution (JCoLE). Kanazawa, Japan. 2022-09-05 - 2022-09-08.
  • Sumer, B., & Özyürek, A. (2022). Language use in deaf children with early-signing versus late-signing deaf parents. Talk presented at the International Conference on Sign Language Acquisition (ICSLA 4). online. 2022-06-23 - 2022-06-25.
  • Ünal, E., Kırbaşoğlu, K., Karadöller, D. Z., Sumer, B., & Özyürek, A. (2022). Children's multimodal spatial expressions vary across the complexity of relations. Poster presented at the 8th International Symposium on Brain and Cognitive Science, online.
  • Karadöller, D. Z., Sumer, B., Ozyurek, A., & Ünal, E. (2021). Producing informative expressions of Left-Right relations: Differences between children and adults in using multimodal encoding strategies. Talk presented at the 15th International Congress for the Study of Child Language (IASCL 2021). online. 2021-07-15 - 2021-07-23.
  • Karadöller, D. Z., Sumer, B., Ünal, E., & Ozyurek, A. (2021). Spatial language use predicts spatial memory of children: Evidence from sign, speech, and speech-plus-gesture. Talk presented at the 43rd Annual Meeting of the Cognitive Science Society (CogSci 2021). online. 2021-07-26 - 2021-07-29.
  • Mamus, E., Speed, L. J., Ozyurek, A., & Majid, A. (2021). Sensory modality of input influences encoding of motion events in speech but not co-speech gestures. Talk presented at the 43rd Annual Meeting of the Cognitive Science Society (CogSci 2021). online. 2021-07-26 - 2021-07-29.
  • Ozyurek, A. (2021). Not only the past but also the future of language is likely to be multimodal [plenary talk]. Talk presented at Protolang 7 (virtual conference). 2021-09-06 - 2021-09-08.
  • Ozyurek, A. (2021). Multimodal approaches to cross-linguistic differences in language structures, processing and acquisition [keynote]. Talk presented at Crosslinguistic Perspectives on Processing and Learning (X-PPL 2021). online. 2021-09-16 - 2021-09-17.
  • Rasenberg, M., Ozyurek, A., Pouw, W., & Dingemanse, M. (2021). The use of multimodal resources for joint meaning-making in conversational repair sequences. Talk presented at the Embodied Cognitive Science (ECogS) Seminar Series. Virtual meeting. 2021-12-10.
  • Rasenberg, M., Ozyurek, A., & Dingemanse, M. (2021). The use of multimodal resources for joint meaning-making in conversational repair sequences. Talk presented at the 5th International Conference on Interactivity, Language & Cognition. Virtual meeting. 2021-09-15 - 2021-09-19.
  • Karadöller, D. Z., Sumer, B., Ünal, E., & Ozyurek, A. (2020). Sign advantage for children: Signing children’s spatial expressions are more informative than speaking children’s speech and gestures combined. Talk presented at the 45th Annual Boston University (Virtual) Conference on Language Development (BUCLD 45). Boston, MA, USA. 2020-11-05 - 2020-11-08.
  • Özer, D., Karadöller, D. Z., Türkmen, I., Ozyurek, A., & Göksun, T. (2020). Informativeness of gestures in speech context guides visual attention during comprehension of spatial language. Talk presented at the 7th Gesture and Speech in Interaction (GESPIN 2020). Stockholm, Sweden. 2020-09-07 - 2020-09-09.
  • Ozyurek, A. (2020). From hands to brains: How does human body talk, think and interact in face-to-face language use? [keynote]. Talk presented at the 22nd ACM International (Virtual) Conference on Multimodal Interaction (ICMI 2020). Utrecht, The Netherlands. 2020-10-25 - 2020-10-29.
  • Drijvers, L., Spaak, E., Herring, J., Ozyurek, A., & Jensen, O. (2019). Selective routing and integration of speech and gestural information studied by rapid invisible frequency tagging. Poster presented at Crossing the Boundaries: Language in Interaction Symposium, Nijmegen, The Netherlands.
  • Karadöller, D. Z., Ünal, E., Sumer, B., & Ozyurek, A. (2019). Children but not adults use both speech and gesture to produce informative expressions of Left-Right relations. Poster presented at the Donders Poster Sessions 2019, Nijmegen, The Netherlands.
  • Karadöller, D. Z., Ünal, E., Sumer, B., Göksun, T., Özer, D., & Ozyurek, A. (2019). Children but not adults use both speech and gesture to produce informative expressions of Left-Right relations. Poster presented at the 44th Annual Boston University Conference on Language Development (BUCLD 44), Boston, MA, USA.
  • Mamus, E., Rissman, L., Majid, A., & Ozyurek, A. (2019). Effects of blindfolding on verbal and gestural expression of path in auditory motion events. Poster presented at the 41st Annual Meeting of the Cognitive Science Society (CogSci 2019), Montreal, Canada.
  • Manhardt, F., Brouwer, S., Sumer, B., & Ozyurek, A. (2019). Cross-modal conceptual transfer in bimodal bilinguals. Poster presented at the LingCologne 2019 conference, Cologne, Germany.
  • Manhardt, F., Brouwer, S., & Ozyurek, A. (2019). Cross-modal transfer in bimodal bilinguals: Implications for a multimodal language production model. Talk presented at the Language Division colloquium. Radboud University, Nijmegen, The Netherlands. 2019-03.
  • Manhardt, F., Brouwer, S., & Ozyurek, A. (2019). Sign influences spatial encoding in speech in bimodal bilinguals. Talk presented at the 13th conference of Theoretical Issues in Sign Language Research (TISLR 13). Hamburg, Germany. 2019-09-26 - 2019-09-28.
  • Rasenberg, M., Dingemanse, M., & Ozyurek, A. (2019). Lexical and gestural alignment in collaborative referring. Talk presented at the 6th European and 9th Nordic Symposium on Multimodal Communication (MMSYM). Leuven, Belgium. 2019-09-09 - 2019-09-10.
  • Slonimska, A., Ozyurek, A., & Capirci, O. (2019). Dismantling the notion of constructed action as a metalinguistic tool: Efficient information encoding through direct representation. Poster presented at the 13th conference of Theoretical Issues in Sign Language Research (TISLR 13), Hamburg, Germany.
  • Slonimska, A., Ozyurek, A., & Capirci, O. (2019). The role of iconicity and simultaneity for efficient information encoding in signed languages: A case of Italian Sign Language (LIS). Talk presented at the 17th workshop on Iconicity (ILL17). Lund, Sweden. 2019-05-03.
  • Sumer, B., Schoon, V., & Ozyurek, A. (2019). Child-directed spatial language input in sign language: Modality specific and general patterns. Poster presented at the 13th conference of Theoretical Issues in Sign Language Research (TISLR 13), Hamburg, Germany.
  • Blokpoel, M., Dingemanse, M., Kachergis, G., Bögels, S., Drijvers, L., Eijk, L., Ernestus, M., De Haas, N., Holler, J., Levinson, S. C., Lui, R., Milivojevic, B., Neville, D., Ozyurek, A., Rasenberg, M., Schriefers, H., Trujillo, J. P., Winner, T., Toni, I., & Van Rooij, I. (2018). Ambiguity helps higher-order pragmatic reasoners communicate. Talk presented at the 14th biennial conference of the German Society for Cognitive Science, GK (KOGWIS 2018). Darmstadt, Germany. 2018-09-03 - 2018-09-06.
  • Bögels, S., Milivojevic, B., De Haas, N., Döller, C., Rasenberg, M., Ozyurek, A., Dingemanse, M., Eijk, L., Ernestus, M., Schriefers, H., Blokpoel, M., Van Rooij, I., Levinson, S. C., & Toni, I. (2018). Creating shared conceptual representations. Poster presented at the 10th Dubrovnik Conference on Cognitive Science, Dubrovnik, Croatia.
  • Capirci, O., Slonimska, A., & Ozyurek, A. (2018). Constructed representation of transitive actions in Italian Sign Language: Agent’s or patient’s perspective?. Talk presented at the Sign-Café workshop. Birmingham, UK. 2018-07-30.
  • Drijvers, L., Spaak, E., Herring, J., Ozyurek, A., & Jensen, O. (2018). Selective routing and integration of speech and gestural information studied by rapid invisible frequency tagging. Poster presented at the Attention to Sound Meeting, Chicheley, UK.
  • Karadöller, D. Z., Sumer, B., & Ozyurek, A. (2018). Delayed sign language acquisition: How does it impact spatial language use?. Talk presented at Sign Pop Up Meetings 2018. Nijmegen, The Netherlands. 2018-04-03.
  • Karadöller, D. Z., Sumer, B., & Ozyurek, A. (2018). Delayed sign language acquisition: How does it impact spatial language use?. Talk presented at the Center for Language Colloquium Series 2018. Nijmegen, The Netherlands. 2018-03-14.
  • Karadöller, D. Z., Sumer, B., & Ozyurek, A. (2018). Effects of delayed sign language exposure on acquisition of static spatial relations. Poster presented at the Nijmegen Lectures 2018, Nijmegen, The Netherlands.
  • Karadöller, D. Z., Sumer, B., & Ozyurek, A. (2018). Effects of delayed sign language exposure on spatial language acquisition by deaf children and adults. Poster presented at the 3rd International Conference on Sign Language Acquisition (ICSLA 2018), Istanbul, Turkey.
  • Manhardt, F., Sumer, B., Brouwer, S., & Ozyurek, A. (2018). Iconicity matters: Signers and speakers view spatial relations differently prior to linguistic production. Poster presented at the International Workshop on Language Production (IWLP 2018), Nijmegen, The Netherlands.
  • Manhardt, F., Brouwer, S., Sumer, B., & Ozyurek, A. (2018). Iconicity matters: Signers and speakers view spatial relations differently prior to linguistic production. Talk presented at the seventh meeting of the Formal and Experimental Advances in Sign language Theory (FEAST 2018). Venice, Italy. 2018-06-18 - 2018-06-20.
  • Schubotz, L., Ozyurek, A., & Holler, J. (2018). Age-related differences in multimodal recipient design. Poster presented at the 10th Dubrovnik Conference on Cognitive Science, Dubrovnik, Croatia.
  • Slonimska, A., Ozyurek, A., & Capirci, O. (2018). Elicitation task for simultaneous encoding in signed languages. Poster presented at the Sign Language Acquisition and Assessment conference (SLAAC), Haifa, Israel.
  • Slonimska, A., Ozyurek, A., & Capirci, O. (2018). L’uso della simultaneità per trasmettere messaggi densi di informazioni in lingua dei segni italiana (LIS) [The use of simultaneity to convey information-dense messages in Italian Sign Language (LIS)]. Talk presented at the 4° Convegno Nazionale LIS. Rome, Italy. 2018-11-10.
  • Slonimska, A., Ozyurek, A., & Capirci, O. (2018). Simultaneous information encoding in Italian Sign Language. Talk presented at the fifth Attentive Listener in the Visual World (AttLis 2018). Trondheim, Norway. 2018-08-29 - 2018-08-30.
  • Slonimska, A., Ozyurek, A., & Capirci, O. (2018). Simultaneous information encoding in Italian Sign Language LIS: Methodology and preliminary results. Poster presented at the IMPRS Conference on Interdisciplinary Approaches in the Language Sciences, Nijmegen, The Netherlands.
  • Azar, Z., Backus, A., & Ozyurek, A. (2017). Bidirectional contact effects in proficient heritage speakers: Subject reference in Turkish and Dutch. Talk presented at the 11th International Symposium on Bilingualism (ISB11). University of Limerick, Limerick, Ireland. 2017-06-11 - 2017-06-15.
  • Azar, Z., Backus, A., & Ozyurek, A. (2017). Gender effect on the choice of referring expressions: The influence of language typology and bilingualism. Poster presented at DETEC 2017: Discourse Expectations: Theoretical, Experimental and Computational perspectives, Nijmegen, The Netherlands.
  • Azar, Z., Backus, A., & Ozyurek, A. (2017). Highly proficient bilinguals maintain language-specific pragmatic constraints on pronouns: Evidence from speech and gesture. Talk presented at the 39th Annual Conference of the Cognitive Science Society (CogSci 2017). London, UK. 2017-07-26 - 2017-07-29.

    Abstract

    The use of subject pronouns by bilingual speakers using both a pro-drop and a non-pro-drop language (e.g. Spanish heritage speakers in the USA) is a well-studied topic in research on cross-linguistic influence in language contact situations. Previous studies looking at bilinguals with different proficiency levels have yielded conflicting results on whether there is transfer from the non-pro-drop patterns to the pro-drop language. Additionally, previous research has focused on speech patterns only. In this paper, we study the two modalities of language, speech and gesture, and ask whether and how they reveal cross-linguistic influence on the use of subject pronouns in discourse. We focus on elicited narratives from heritage speakers of Turkish in the Netherlands, in both Turkish (pro-drop) and Dutch (non-pro-drop), as well as from monolingual control groups. The use of pronouns was not very common in monolingual Turkish narratives and was constrained by the pragmatic contexts, unlike in Dutch. Furthermore, Turkish pronouns were more likely to be accompanied by localized gestures than Dutch pronouns, presumably because pronouns in Turkish are pragmatically marked forms. We did not find any cross-linguistic influence in bilingual speech or gesture patterns, in line with studies (speech only) of highly proficient bilinguals. We therefore suggest that speech and gesture parallel each other not only in monolingual but also in bilingual production. Highly proficient heritage speakers who have been exposed to diverse linguistic and gestural patterns of each language from early on maintain monolingual patterns of pragmatic constraints on the use of pronouns multimodally.
  • Azar, Z., Backus, A., & Ozyurek, A. (2017). Reference tracking in Turkish and Dutch narratives: Effect of co-reference context and gender on the choice of referring expressions. Talk presented at the Grammar and Cognition Colloquium. Radboud University, Nijmegen, The Netherlands. 2017-05-12.
  • Drijvers, L., Ozyurek, A., & Jensen, O. (2017). Alpha and beta oscillations in the language network, motor and visual cortex index semantic congruency between speech and gestures in clear and degraded speech. Poster presented at the 47th Annual Meeting of the Society for Neuroscience (SfN), Washington, DC, USA.
  • Drijvers, L., Ozyurek, A., & Jensen, O. (2017). Alpha and beta oscillations in the language network, motor and visual cortex index the semantic integration of speech and gestures in clear and degraded speech. Poster presented at the Ninth Annual Meeting of the Society for the Neurobiology of Language (SNL 2017), Baltimore, MD, USA.
  • Drijvers, L., Ozyurek, A., & Jensen, O. (2017). Low- and high-frequency oscillations predict the semantic integration of speech and gestures in clear and degraded speech. Poster presented at the Neural Oscillations in Speech and Language Processing symposium, Berlin, Germany.
  • Karadöller, D. Z., Sumer, B., & Ozyurek, A. (2017). Effects of delayed language exposure on spatial language acquisition by signing children and adults. Poster presented at the 39th Annual Conference of the Cognitive Science Society (CogSci 2017), London, UK.
  • Karadöller, D. Z., Sumer, B., & Ozyurek, A. (2017). Effects of delayed sign language exposure on acquisition of spatial event descriptions. Poster presented at the workshop 'Event Representations in Brain, Language & Development' (EvRep), Nijmegen, The Netherlands.
  • Karadöller, D. Z., Sumer, B., & Ozyurek, A. (2017). Effects of delayed sign language exposure on acquisition of static spatial relations. Poster presented at the Donders Poster Sessions, Nijmegen, The Netherlands.
  • Karadöller, D. Z., Sumer, B., & Ozyurek, A. (2017). Effects of delayed sign language exposure on spatial language acquisition. Talk presented at the Spatial Language and Spatial Cognition Workshop. Trondheim, Norway. 2017-12-06 - 2017-12-07.
  • Manhardt, F., Brouwer, S., Sumer, B., & Ozyurek, A. (2017). Iconicity of linguistic expressions influences visual attention to space: A comparison between signers and speakers. Poster presented at the workshop 'Event Representations in Brain, Language & Development' (EvRep), Nijmegen, The Netherlands.
  • Manhardt, F., Brouwer, S., Sumer, B., Karadöller, D. Z., & Ozyurek, A. (2017). The influence of iconic linguistic expressions on spatial event cognition across signers and speakers: An eye-tracking study. Poster presented at the sixth meeting of Formal and Experimental Advances in Sign Language Theory (FEAST 2017), Reykjavík, Iceland.
  • Manhardt, F., Brouwer, S., Sumer, B., Karadöller, D. Z., & Ozyurek, A. (2017). The influence of iconic linguistic expressions on spatial event cognition across signers and speakers: An eye-tracking study. Poster presented at the workshop Types of Iconicity in Language Use, Development, and Processing, Nijmegen, The Netherlands.
  • Ortega, G., & Ozyurek, A. (2017). Types of iconicity and combinatorial strategies distinguish semantic categories in the manual modality across cultures. Poster presented at the 30th Annual CUNY Conference on Human Sentence Processing, Cambridge, MA, USA.
  • Ter Bekke, M., Ünal, E., Karadöller, D. Z., & Ozyurek, A. (2017). Cross-linguistic effects of speech and gesture production on memory of motion events. Poster presented at the workshop 'Event Representations in Brain, Language & Development' (EvRep), Nijmegen, The Netherlands.
  • Azar, Z., Backus, A., & Ozyurek, A. (2016). Influence of culture and language on bilinguals’ speech and gesture: Evidence from Turkish-Dutch bilinguals. Talk presented at CLS Lunch Colloquium. Radboud University, Nijmegen, The Netherlands. 2016-04-19.
  • Azar, Z., Backus, A., & Ozyurek, A. (2016). Multimodal reference tracking in Dutch and Turkish discourse: Role of culture and typological differences. Poster presented at the 7th Conference of the International Society for Gesture Studies (ISGS7), Paris, France.

    Abstract

    Previous studies show that during discourse narration, speakers use fuller forms in speech (e.g., full noun phrases (NPs)) and gesture more when referring back to already introduced referents, and use reduced forms in speech (e.g., overt and null pronouns) and gesture less when maintaining referents (Gullberg, 2006; Yoshioka, 2008; Debreslioska et al., 2013; Perniss & Özyürek, 2015). Thus, the quantity of coding material in speech and co-speech gesture shows parallelism. However, those studies focus mostly on Indo-European languages, and we do not know much about whether this parallel relation between speech and co-speech gesture in discourse narration generalizes to languages with different pronominal systems. Furthermore, these studies have not taken into account whether a language is used in a high- or low-gesture culture as a possible modulating factor. Aiming to fill this gap, we directly compare multimodal discourse narrations in Turkish and Dutch, two languages that have different constraints on the use of overt pronouns (preferred in Dutch) versus null pronouns (preferred in Turkish) and that vary in whether gender is marked in pronouns (Dutch) or not (Turkish). We elicited discourse narrations in Turkey and the Netherlands from 40 speakers (20 Dutch; 20 Turkish) using 2 short silent videos. Each speaker was paired with a naive addressee during data collection. We first divided the discourse into main clauses. We then coded each animate subject referring expression for its linguistic type (i.e., NP, pronoun, null pronoun) and its co-reference context (i.e., re-introduction, maintenance). As for the co-speech gesture data, we first coded all types of gestures in order to determine whether Turkish and Dutch cultures differ in overall gesture rate (per clause). We then focused on the abstract deictic gestures to space that temporally align with the subject referent of each main clause to calculate the proportion of gesturally marked subject referents. Our gesture rate analyses reveal that Turkish speakers overall produce more gestures than Dutch speakers (p < .001), suggesting that Turkish is a relatively high-gesture culture compared to Dutch. Our speech analyses show that both Turkish and Dutch speakers mainly use NPs to re-introduce subject referents and reduced forms for maintained referents (null pronouns in Turkish and overt pronouns in Dutch). Our gesture analyses show that both Turkish and Dutch speakers gestured more with re-introduced than with maintained subject referents (p < .001). However, Turkish speakers gestured more frequently with pronouns than Dutch speakers. Taken together, our results show that speakers of both languages organize information structure in discourse in a similar manner and vary the quantity of coding material in their speech and gesture in parallel to mark the co-reference context, a discourse strategy independent of whether the speakers are from a relatively high- or low-gesture culture and regardless of differences in the pronominal systems of their languages. As a novel contribution, however, we show that pragmatics interacts with contextual and linguistic factors in modulating gestures: pragmatically marked forms in speech are more likely to be marked with gestures as well (more gestures with pronouns, but not with NPs, in Turkish compared to Dutch).
  • Azar, Z., Backus, A., & Ozyurek, A. (2016). Pragmatic relativity: Gender and context affect the use of personal pronouns in discourse differentially across languages. Talk presented at the 38th Annual Meeting of the Cognitive Science Society (CogSci 2016). Philadelphia, PA, USA. 2016-08-11 - 2016-08-13.

    Abstract

    Speakers use differential referring expressions (REs) in pragmatically appropriate ways to produce coherent narratives. Languages, however, differ in (a) whether REs as arguments can be dropped and (b) whether personal pronouns encode gender. We examine two languages that differ from each other in these two aspects and ask whether the co-reference context and the gender encoding options affect the use of REs differentially. We elicited narratives from Dutch and Turkish speakers about two types of three-person events, one involving people of the same gender and the other of mixed gender. Speakers re-introduced referents into the discourse with fuller forms (NPs) and maintained them with reduced forms (overt or null pronouns). Turkish speakers used pronouns mainly to mark emphasis, and only Dutch speakers used pronouns differentially across the two types of videos. We argue that the linguistic possibilities available in each language tune speakers into taking different principles into account to produce pragmatically coherent narratives.
  • Drijvers, L., Ozyurek, A., & Jensen, O. (2016). Gestural enhancement of degraded speech comprehension engages the language network, motor and visual cortex as reflected by a decrease in the alpha and beta band. Talk presented at Sensorimotor Speech Processing Symposium. London, UK. 2016-08-16.
  • Drijvers, L., Ozyurek, A., & Jensen, O. (2016). Gestural enhancement of degraded speech comprehension engages the language network, motor and visual cortex as reflected by a decrease in the alpha and beta band. Poster presented at the 20th International Conference on Biomagnetism (BioMag 2016), Seoul, South Korea.
  • Drijvers, L., Ozyurek, A., & Jensen, O. (2016). Gestural enhancement of degraded speech comprehension engages the language network, motor and visual cortex as reflected by a decrease in the alpha and beta band. Poster presented at the Eighth Annual Meeting of the Society for the Neurobiology of Language (SNL 2016), London, UK.

    Abstract

    Face-to-face communication involves the integration of speech and visual information, such as iconic co-speech gestures. Iconic gestures, which illustrate object attributes, actions, and space, can especially enhance speech comprehension in adverse listening conditions (e.g., Holle et al., 2010). Using magnetoencephalography (MEG), we aimed to identify the networks and the neuronal dynamics associated with the enhancement of (degraded) speech comprehension by gestures. Our central hypothesis was that gestures enhance degraded speech comprehension, and that decreases in alpha and beta power reflect engagement, whereas increases in gamma reflect active processing in task-relevant networks (Jensen & Mazaheri, 2010; Jokisch & Jensen, 2007). Participants (n = 30) were presented with videos of an actress uttering Dutch action verbs. Speech was presented either clear or degraded by noise-vocoding (6-band), and was accompanied by an iconic gesture depicting the action (clear speech + gesture: C-SG; degraded speech + gesture: D-SG) or no gesture (clear speech only: C-S; degraded speech only: D-S). We quantified changes in time-frequency representations of oscillatory power as the video unfolded. The sources of the task-specific modulations were identified using a beamformer approach. Gestural enhancement, calculated by comparing (D-SG vs. D-S) to (C-SG vs. C-S), revealed significant interactions between the occurrence of a gesture and speech degradation, particularly in the alpha, beta, and gamma bands. Gestural enhancement was reflected by a beta decrease in motor areas, indicative of engagement of the motor system during gesture observation, especially when speech was degraded. A beta band decrease was also observed in the language network, including left inferior frontal gyrus, a region involved in semantic unification operations, and left superior temporal regions. This suggests a higher semantic unification load when a gesture is presented together with degraded versus clear speech. We also observed a gestural enhancement effect in the alpha band in visual areas, suggesting that visual areas are more engaged when a gesture is present, most likely reflecting the allocation of visual attention, especially when speech is degraded, which is in line with the functional inhibition hypothesis (see Jensen & Mazaheri, 2010). Finally, we observed gamma band effects in left-temporal areas, suggesting facilitated binding of speech and gesture into a unified representation, especially when speech is degraded. In conclusion, our results support earlier claims on the recruitment of a left-lateralized network including motor areas, STS/MTG, and LIFG in speech-gesture integration and gestural enhancement of speech (see Ozyurek, 2014). Our findings provide novel insight into the neuronal dynamics associated with speech-gesture integration: decreases in alpha and beta power reflect the engagement of the visual and language/motor networks respectively, whereas a gamma band increase reflects integration in left prefrontal cortex. In future work we will characterize the interaction between these networks by means of functional connectivity analysis.
  • Drijvers, L., Ozyurek, A., & Jensen, O. (2016). Gestural enhancement of degraded speech comprehension engages the language network, motor and visual cortex as reflected by a decrease in the alpha and beta band. Poster presented at the Language in Interaction Summerschool on Human Language: From Genes and Brains to Behavior, Berg en Dal, The Netherlands.
  • Drijvers, L., Ozyurek, A., & Jensen, O. (2016). Gestural enhancement of degraded speech comprehension engages the language network, motor cortex and visual cortex. Talk presented at the 2nd Workshop on Psycholinguistic Approaches to Speech Recognition in Adverse Conditions (PASRAC). Nijmegen, The Netherlands. 2016-10-31 - 2016-11-01.
  • Drijvers, L., & Ozyurek, A. (2016). Native language status of the listener modulates the neural integration of speech and gesture in clear and adverse listening conditions. Poster presented at the Eighth Annual Meeting of the Society for the Neurobiology of Language (SNL 2016), London, UK.

    Abstract

    Face-to-face communication consists of integrating speech and visual input, such as co-speech gestures. Iconic gestures (e.g., a drinking gesture) can enhance speech comprehension, especially when speech is difficult to comprehend, such as in noise (e.g., Holle et al., 2010) or in non-native speech comprehension (e.g., Sueyoshi & Hardison, 2005). Previous behavioral and neuroimaging studies have argued that the integration of speech and gestures is stronger when speech intelligibility decreases (e.g., Holle et al., 2010), but that in clear speech, non-native listeners benefit more from gestures than native listeners (Dahl & Ludvigson, 2014; Sueyoshi & Hardison, 2005). So far, the neurocognitive mechanisms of how non-native speakers integrate speech and gestures in adverse listening conditions remain unknown. We investigated whether highly proficient non-native speakers of Dutch make use of iconic co-speech gestures as much as native speakers during clear and degraded speech comprehension. In an EEG study, native (n = 23) and non-native (German, n = 23) speakers of Dutch watched videos of an actress uttering Dutch action verbs. Speech was presented either clear or degraded by noise-vocoding (6-band), and was accompanied by a matching or a mismatching iconic gesture. This allowed us to calculate the effects of both speech degradation and the semantic congruency of the gesture on the N400 component. The N400 was taken as an index of semantic integration effort (Kutas & Federmeier, 2011). In native listeners, N400 amplitude was sensitive both to mismatches between speech and gesture and to degradation; the most pronounced N400 was found in response to degraded speech and a mismatching gesture (DMM), followed by degraded speech and a matching gesture (DM), clear speech and a mismatching gesture (CMM), and clear speech and a matching gesture (CM) (DMM > DM > CMM > CM, all p < .05). In non-native speakers, we found a difference between CMM and CM but not between DMM and DM; however, degraded conditions differed from clear conditions (DMM = DM > CMM > CM, all significant comparisons p < .05). Directly comparing native to non-native speakers, the N400 effect (i.e., the difference between CMM and CM, and between DMM and DM) was greater for non-native speakers in clear speech, but greater for native speakers in degraded speech. These results provide further evidence for the claim that in clear speech, non-native speakers benefit more from gestural information than native speakers, as indexed by a larger N400 effect for the mismatch manipulation. Both native and non-native speakers show integration effort during degraded speech comprehension. However, native speakers require less effort to recognize auditory cues in degraded speech than non-native speakers, resulting in a larger N400 to degraded speech with a mismatching gesture in natives than in non-natives. Conversely, non-native speakers require more effort to resolve auditory cues when speech is degraded and therefore cannot benefit as much as native speakers from those cues when mapping the semantic information from gesture onto speech. In sum, non-native speakers can benefit from gestural information in speech comprehension more than native listeners, but not when speech is degraded. Our findings suggest that the native language of the listener modulates multimodal semantic integration in adverse listening conditions.
  • Drijvers, L., & Ozyurek, A. (2016). Native language status of the listener modulates the neural integration of speech and gesture in clear and adverse listening conditions. Poster presented at the 2nd Workshop on Psycholinguistic Approaches to Speech Recognition in Adverse Conditions (PASRAC), Nijmegen, The Netherlands.
  • Drijvers, L., Ozyurek, A., & Jensen, O. (2016). Oscillatory and temporal dynamics show engagement of the language network, motor system and visual cortex during gestural enhancement of degraded speech. Talk presented at the Donders Discussions 2016. Nijmegen, The Netherlands. 2016-11-23 - 2016-11-24.
  • Drijvers, L., & Ozyurek, A. (2016). What do iconic gestures and visible speech contribute to degraded speech comprehension?. Poster presented at the Nijmegen Lectures 2016, Nijmegen, The Netherlands.
  • Drijvers, L., & Ozyurek, A. (2016). Visible speech enhanced: What do gestures and lip movements contribute to degraded speech comprehension?. Poster presented at the 8th Speech in Noise Workshop (SpiN 2016), Groningen, The Netherlands.
  • Drijvers, L., & Ozyurek, A. (2016). Visible speech enhanced: What do iconic gestures and lip movements contribute to degraded speech comprehension?. Talk presented at the 7th Conference of the International Society for Gesture Studies (ISGS7). Paris, France. 2016-07-18 - 2016-07-22.

    Abstract

    Natural, face-to-face communication involves audiovisual binding that integrates speech and visual information, such as iconic co-speech gestures and lip movements. Especially in adverse listening conditions, such as in noise, this visual information can enhance speech comprehension. However, the contributions of lip movements and iconic gestures to understanding speech in noise have mostly been studied separately. Here, we investigated the contribution of iconic gestures and lip movements to degraded speech comprehension in a joint context. In a free-recall task, participants watched short videos of an actress uttering an action verb. The verb could be presented in clear speech, severely degraded speech (2-band noise-vocoding), or moderately degraded speech (6-band noise-vocoding), and participants could view the actress with her lips blocked, with her lips visible, or with her lips visible while making an iconic co-speech gesture. Additionally, we presented these clips without audio, with just the lip movements present or with both lip movements and gestures present, to investigate how much information listeners could extract from the visual input alone. Our results reveal that when listeners perceive degraded speech in a visual context, they benefit more from gestural information than from lip movements alone. This benefit is larger at moderate noise levels, where auditory cues are still moderately reliable, than at severe noise levels, where auditory cues are no longer reliable. Listeners are thus only able to benefit from the additive effect of ‘double’ multimodal enhancement by iconic gestures and lip movements when enough auditory cues are present to map lip movements onto the phonological information in the speech signal.
  • Karadöller, D. Z., Sumer, B., & Ozyurek, A. (2016). Effect of language modality on development of spatial cognition and memory. Poster presented at the Language in Interaction Summerschool on Human Language: From Genes and Brains to Behavior, Berg en Dal, The Netherlands.
  • Ortega, G., & Ozyurek, A. (2016). Generalisable patterns of gesture distinguish semantic categories in communication without language: Evidence from pantomime. Talk presented at the 7th Conference of the International Society for Gesture Studies (ISGS7). Paris, France. 2016-07-18 - 2016-07-22.

    Abstract

    There is a long-standing assumption that gestural forms are shaped by a set of modes of representation (acting, representing, drawing, moulding), with each technique expressing the speaker’s focus of attention on specific aspects of a referent (Müller, 2013). Only recently, however, has the relationship between gestural form and mode of representation been linked to (1) the semantic categories being represented (i.e., objects, actions) and (2) the affordances of the referents. Here we investigate these relations when speakers are asked to communicate about different types of referents in pantomime. This mode of communication has revealed generalisable ordering of event constituents across speakers of different languages (Goldin-Meadow, So, Özyürek, & Mylander, 2008), but it remains an empirical question whether it also draws on systematic patterns to distinguish different semantic categories. Twenty speakers of Dutch participated in a pantomime generation task. They had to produce a gesture that conveyed the same meaning as a word on a computer screen, without speaking. Participants saw 10 words from three semantic categories: actions with objects (e.g., to drink), manipulable objects (e.g., mug), and non-manipulable objects (e.g., building). Pantomimes were categorised according to their mode of representation and their use of deictics (pointing, showing, or eye gaze). Further, the ordering of the different representations was noted when more than one gesture was produced. Actions with objects elicited mainly individual gestures (mean: 1.1, range: 1-2), while manipulable objects (mean: 1.8, range: 1-4) and non-manipulable objects (mean: 1.6, range: 1-4) primarily elicited more than one pantomime, as sequences of interrelated gestures. Actions with objects were mostly represented with one gesture, through re-enactment of the action (e.g., raising a closed fist to the mouth for ‘to drink’), while manipulable objects were mostly represented through an acting gesture followed by a deictic (e.g., raising a closed fist to the mouth and then pointing at the fist). Non-manipulable objects, however, were represented through a drawing gesture followed by an acting one (e.g., tracing a rectangle and then pretending to walk through a door). In the absence of language, the form of gestures is constrained by the objects’ affordances (i.e., manipulable or not) and by the communicative need to discriminate across semantic categories (i.e., objects or actions). Gestures adopt an acting or drawing mode of representation depending on the affordances of the referent, which echoes patterns observed in the forms of co-speech gestures (Masson-Carro, Goudbeek, & Krahmer, 2015). We also show for the first time that the use and ordering of deictics and the different modes of representation operate in tandem to distinguish between semantically related concepts (e.g., to drink and mug). When forced to communicate without language, participants show consistent patterns in their strategies to distinguish different semantic categories.
  • Ozyurek, A., & Ortega, G. (2016). Language in the visual modality: Co-speech Gesture and Sign. Talk presented at the Language in Interaction Summerschool on Human Language: From Genes and Brains to Behavior. Berg en Dal, The Netherlands. 2016-07-03 - 2016-07-14.

    Abstract

    As humans, our ability to communicate and use language is instantiated not only in the vocal modality but also in the visual modality. The main examples of this are sign languages and the (co-speech) gestures used in spoken languages. Sign languages, the natural languages of Deaf communities, use systematic and conventionalized movements of the hands, face, and body for linguistic expression. Co-speech gestures, though non-linguistic, are produced and perceived in tight semantic and temporal integration with speech. Thus, language, in its primary face-to-face context (both phylogenetically and ontogenetically), is a multimodal phenomenon. In fact, the visual modality seems to be a more common means of communication than speech when we consider both deaf and hearing individuals. Most research on language, however, has focused on spoken/written language and has rarely considered the visual context it is embedded in to understand our linguistic capacity. This talk gives a brief review of what we know so far about the visual expressive resources of language in both spoken and sign languages and their role in communication and cognition, broadening our scope of language. We will argue, based on these recent findings, that our models of language need to take visual modes of communication into account and provide a unified framework for how the semiotic and expressive resources of the visual modality are recruited for both spoken and sign languages, and what the consequences are for processing, also considering their neural underpinnings.
  • Schubotz, L., Drijvers, L., Holler, J., & Ozyurek, A. (2016). The cocktail party effect revisited in older and younger adults: When do iconic co-speech gestures help?. Poster presented at the 8th Speech in Noise Workshop (SpiN 2016), Groningen, The Netherlands.
  • Slonimska, A., Ozyurek, A., & Campisi, E. (2016). The role of addressee’s age in use of ostensive signals to gestures and their effectiveness. Talk presented at the 3rd Attentive Listener in the Visual World (AttLis 2016) workshop. Potsdam, Germany. 2016-03-10 - 2016-03-11.
  • Slonimska, A., Ozyurek, A., & Campisi, E. (2016). Markers of communicative intent through ostensive signals and their effectiveness in multimodal demonstrations to adults and children. Talk presented at the 7th Conference of the International Society for Gesture Studies (ISGS7). Paris, France. 2016-07-18 - 2016-07-22.

    Abstract

    In face-to-face interaction, people adapt their multimodal message to fit their addressees’ informational needs. In doing so, they are likely to mark their communicative intent by accentuating the relevant information provided by both speech and gesture. In the present study we were interested in the strategies by which speakers highlight their gestures (by means of ostensive signals like eye gaze and/or ostensive speech) for children in comparison to adults in a multimodal demonstration task. Moreover, we investigated the effectiveness of the ostensive signals to gestures and asked whether addressees shift their attention to the gestures highlighted by the speakers through different ostensive signals. Previous research has identified some of these ostensive signals (Streeck, 1993; Gullberg & Kita, 2009) but has not investigated how often they occur and whether they are designed for, and attended to by, different types of addressees. 48 Italians, born and raised in Sicily, participated in the study. 16 Italian adult participants (12 female, 7 male, age range 20-30) were assigned the role of speakers, while the other 16 adults and 16 children (age range 9-10) took the role of addressees. The task of the speaker was to describe the rules of a children’s game, which consists of using wooden blocks of different shapes to make a path without gaps. Speakers’ descriptions were coded for words and representational gestures, as well as for three types of ostensive signals highlighting the gestures: (1) eye gaze, (2) ostensive speech, and (3) a combination of eye gaze and ostensive speech directed to the gesture. Addressees’ eye gaze to speakers’ gestures was coded, and we annotated whether it was directed at a highlighted or a non-highlighted gesture. Overall, eye gaze was the most common signal, followed by ostensive speech and multimodal signals. We found that speakers were likely to highlight more gestures with children than with adults when all three types of signals were considered together. However, when the signals were treated separately, results revealed that speakers used more combined ostensive signals for children than for adults, but they were also likely to direct more eye gaze to their gestures with other adults than with children. Furthermore, both groups of addressees gazed more at gestures highlighted by the speakers than at gestures that were not highlighted at all. The present study provides the first quantitative insights into how speakers highlight their gestures and whether the age of the addressee influences the effectiveness of the ostensive signals. Speakers mark the communicative relevance of their gestures with different types of ostensive signals and by taking different types of addressees into account. In turn, addressees, not only adults but also children, take advantage of the signals provided to these gestures.
