Asli Ozyurek

Presentations

  • Campisi, E., Slonimska, A., & Özyürek, A. (2023). Cross-linguistic differences in the use of iconicity as a communicative strategy. Poster presented at the 8th Gesture and Speech in Interaction (GESPIN 2023), Nijmegen, The Netherlands.
  • Chen, X., Hu, J., Huettig, F., & Özyürek, A. (2023). The effect of iconic gestures on linguistic prediction in Mandarin Chinese: a visual world paradigm study. Poster presented at the 29th Architectures and Mechanisms for Language Processing Conference (AMLaP 2023), Donostia–San Sebastián, Spain.
  • Long, M., Özyürek, A., & Rubio-Fernández, P. (2023). Psychological proximity guides multimodal communication. Poster presented at the 8th Gesture and Speech in Interaction (GESPIN 2023), Nijmegen, The Netherlands.
  • Long, M., Özyürek, A., & Rubio-Fernández, P. (2023). The role of pointing and joint attention on demonstrative use in Turkish. Poster presented at the 1st International Multimodal Communication Symposium (MMSYM 2023), Barcelona, Spain.
  • Mamus, E., Speed, L. J., Ortega, G., Majid, A., & Özyürek, A. (2023). Visual experience influences silent gesture productions across semantic categories. Poster presented at the 8th Gesture and Speech in Interaction (GESPIN 2023), Nijmegen, The Netherlands.
  • Mamus, E., Speed, L. J., Ortega, G., Majid, A., & Özyürek, A. (2023). Lack of visual experience influences silent gesture productions across semantic categories. Poster presented at the 45th Annual Meeting of the Cognitive Science Society (CogSci 2023), Sydney, Australia.
  • Mamus, E., Speed, L. J., Ortega, G., Majid, A., & Özyürek, A. (2023). Gestural representations of semantic concepts differ between blind and sighted individuals. Poster presented at the 29th Architectures and Mechanisms for Language Processing Conference (AMLaP 2023), Donostia–San Sebastián, Spain.
  • Kırbaşoğlu, K., Ünal, E., Karadöller, D. Z., Sumer, B., & Özyürek, A. (2022). Konuşma ve jestlerde uzamsal ifadelerin gelişimi [Development of spatial expressions on speech and gesture]. Poster presented at 3. Gelişim Psikolojisi Sempozyumu [3rd Symposium on Developmental Psychology], Istanbul, Turkey.
  • Ünal, E., Kırbaşoğlu, K., Karadöller, D. Z., Sumer, B., & Özyürek, A. (2022). Children's multimodal spatial expressions vary across the complexity of relations. Poster presented at the 8th International Symposium on Brain and Cognitive Science, online.
  • Drijvers, L., Spaak, E., Herring, J., Ozyurek, A., & Jensen, O. (2019). Selective routing and integration of speech and gestural information studied by rapid invisible frequency tagging. Poster presented at Crossing the Boundaries: Language in Interaction Symposium, Nijmegen, The Netherlands.
  • Karadöller, D. Z., Ünal, E., Sumer, B., & Ozyurek, A. (2019). Children but not adults use both speech and gesture to produce informative expressions of Left-Right relations. Poster presented at the Donders Poster Sessions 2019, Nijmegen, The Netherlands.
  • Karadöller, D. Z., Ünal, E., Sumer, B., Göksun, T., Özer, D., & Ozyurek, A. (2019). Children but not adults use both speech and gesture to produce informative expressions of Left-Right relations. Poster presented at the 44th Annual Boston University Conference on Language Development (BUCLD 44), Boston, MA, USA.
  • Mamus, E., Rissman, L., Majid, A., & Ozyurek, A. (2019). Effects of blindfolding on verbal and gestural expression of path in auditory motion events. Poster presented at the 41st Annual Meeting of the Cognitive Science Society (CogSci 2019), Montreal, Canada.
  • Manhardt, F., Brouwer, S., Sumer, B., & Ozyurek, A. (2019). Cross-modal conceptual transfer in bimodal bilinguals. Poster presented at the LingCologne 2019 conference, Cologne, Germany.
  • Slonimska, A., Ozyurek, A., & Capirci, O. (2019). Dismantling the notion of constructed action as a metalinguistic tool: Efficient information encoding through direct representation. Poster presented at the 13th conference of Theoretical Issues in Sign Language Research (TISLR 13), Hamburg, Germany.
  • Sumer, B., Schoon, V., & Ozyurek, A. (2019). Child-directed spatial language input in sign language: Modality specific and general patterns. Poster presented at the 13th conference of Theoretical Issues in Sign Language Research (TISLR 13), Hamburg, Germany.
  • Bögels, S., Milivojevic, B., De Haas, N., Döller, C., Rasenberg, M., Ozyurek, A., Dingemanse, M., Eijk, L., Ernestus, M., Schriefers, H., Blokpoel, M., Van Rooij, I., Levinson, S. C., & Toni, I. (2018). Creating shared conceptual representations. Poster presented at the 10th Dubrovnik Conference on Cognitive Science, Dubrovnik, Croatia.
  • Drijvers, L., Spaak, E., Herring, J., Ozyurek, A., & Jensen, O. (2018). Selective routing and integration of speech and gestural information studied by rapid invisible frequency tagging. Poster presented at the Attention to Sound Meeting, Chicheley, UK.
  • Karadöller, D. Z., Sumer, B., & Ozyurek, A. (2018). Effects of delayed sign language exposure on acquisition of static spatial relations. Poster presented at the Nijmegen Lectures 2018, Nijmegen, The Netherlands.
  • Karadöller, D. Z., Sumer, B., & Ozyurek, A. (2018). Effects of delayed sign language exposure on spatial language acquisition by deaf children and adults. Poster presented at the 3rd International Conference on Sign Language Acquisition (ICSLA 2018), Istanbul, Turkey.
  • Manhardt, F., Sumer, B., Brouwer, S., & Ozyurek, A. (2018). Iconicity matters: Signers and speakers view spatial relations differently prior to linguistic production. Poster presented at the International Workshop on Language Production (IWLP 2018), Nijmegen, The Netherlands.
  • Schubotz, L., Ozyurek, A., & Holler, J. (2018). Age-related differences in multimodal recipient design. Poster presented at the 10th Dubrovnik Conference on Cognitive Science, Dubrovnik, Croatia.
  • Slonimska, A., Ozyurek, A., & Capirci, O. (2018). Elicitation task for simultaneous encoding in signed languages. Poster presented at the Sign Language Acquisition and Assessment Conference (SLAAC), Haifa, Israel.
  • Slonimska, A., Ozyurek, A., & Capirci, O. (2018). Simultaneous information encoding in Italian Sign Language LIS: Methodology and preliminary results. Poster presented at the IMPRS Conference on Interdisciplinary Approaches in the Language Sciences, Nijmegen, The Netherlands.
  • Azar, Z., Backus, A., & Ozyurek, A. (2017). Gender effect on the choice of referring expressions: The influence of language typology and bilingualism. Poster presented at DETEC 2017: Discourse Expectations: Theoretical, Experimental and Computational perspectives, Nijmegen, The Netherlands.
  • Drijvers, L., Ozyurek, A., & Jensen, O. (2017). Alpha and beta oscillations in the language network, motor and visual cortex index semantic congruency between speech and gestures in clear and degraded speech. Poster presented at the 47th Annual Meeting of the Society for Neuroscience (SfN), Washington, DC, USA.
  • Drijvers, L., Ozyurek, A., & Jensen, O. (2017). Alpha and beta oscillations in the language network, motor and visual cortex index the semantic integration of speech and gestures in clear and degraded speech. Poster presented at the Ninth Annual Meeting of the Society for the Neurobiology of Language (SNL 2017), Baltimore, MD, USA.
  • Drijvers, L., Ozyurek, A., & Jensen, O. (2017). Low- and high-frequency oscillations predict the semantic integration of speech and gestures in clear and degraded speech. Poster presented at the Neural Oscillations in Speech and Language Processing symposium, Berlin, Germany.
  • Karadöller, D. Z., Sumer, B., & Ozyurek, A. (2017). Effects of delayed language exposure on spatial language acquisition by signing children and adults. Poster presented at the 39th Annual Conference of the Cognitive Science Society (CogSci 2017), London, UK.
  • Karadöller, D. Z., Sumer, B., & Ozyurek, A. (2017). Effects of delayed sign language exposure on acquisition of spatial event descriptions. Poster presented at the workshop 'Event Representations in Brain, Language & Development' (EvRep), Nijmegen, The Netherlands.
  • Karadöller, D. Z., Sumer, B., & Ozyurek, A. (2017). Effects of delayed sign language exposure on acquisition of static spatial relations. Poster presented at the Donders Poster Sessions, Nijmegen, The Netherlands.
  • Manhardt, F., Brouwer, S., Sumer, B., & Ozyurek, A. (2017). Iconicity of linguistic expressions influences visual attention to space: A comparison between signers and speakers. Poster presented at the workshop 'Event Representations in Brain, Language & Development' (EvRep), Nijmegen, The Netherlands.
  • Manhardt, F., Brouwer, S., Sumer, B., Karadöller, D. Z., & Ozyurek, A. (2017). The influence of iconic linguistic expressions on spatial event cognition across signers and speakers: An eye-tracking study. Poster presented at the sixth meeting of Formal and Experimental Advances in Sign Language Theory (FEAST 2017), Reykjavík, Iceland.
  • Manhardt, F., Brouwer, S., Sumer, B., Karadöller, D. Z., & Ozyurek, A. (2017). The influence of iconic linguistic expressions on spatial event cognition across signers and speakers: An eye-tracking study. Poster presented at the workshop Types of Iconicity in Language Use, Development, and Processing, Nijmegen, The Netherlands.
  • Ortega, G., & Ozyurek, A. (2017). Types of iconicity and combinatorial strategies distinguish semantic categories in the manual modality across cultures. Poster presented at the 30th Annual CUNY Conference on Human Sentence Processing, Cambridge, MA, USA.
  • Ter Bekke, M., Ünal, E., Karadöller, D. Z., & Ozyurek, A. (2017). Cross-linguistic effects of speech and gesture production on memory of motion events. Poster presented at the workshop 'Event Representations in Brain, Language & Development' (EvRep), Nijmegen, The Netherlands.
  • Azar, Z., Backus, A., & Ozyurek, A. (2016). Multimodal reference tracking in Dutch and Turkish discourse: Role of culture and typological differences. Poster presented at the 7th Conference of the International Society for Gesture Studies (ISGS7), Paris, France.

    Abstract

    Previous studies show that during discourse narrations, speakers use fuller forms in speech (e.g. full noun phrases (NPs)) and gesture more while referring back to already introduced referents, and use reduced forms in speech (e.g. overt pronouns and null pronouns) and gesture less while maintaining referents (Gullberg, 2006; Yoshioka, 2008; Debreslioska et al., 2013; Perniss & Özyürek, 2015). Thus, the quantity of coding material in speech and co-speech gesture shows parallelism. However, those studies focus mostly on Indo-European languages, and we do not know much about whether the parallel relation between speech and co-speech gesture during discourse narration generalizes to languages with different pronominal systems. Furthermore, these studies have not taken into account whether a language is used in a high- or low-gesture culture as a possible modulating factor. Aiming to fill this gap, we directly compared multimodal discourse narrations in Turkish and Dutch, two languages that have different constraints on the use of overt pronouns (preferred in Dutch) versus null pronouns (preferred in Turkish) and that vary in whether gender is marked in the pronouns (Dutch) or not (Turkish). We elicited discourse narrations in Turkey and the Netherlands from 40 speakers (20 Dutch; 20 Turkish) using two short silent videos. Each speaker was paired with a naive addressee during data collection. We first divided the discourse into main clauses. We then coded each animate subject referring expression for its linguistic type (i.e., NP, overt pronoun, null pronoun) and its co-reference context (i.e., re-introduction, maintenance). As for the co-speech gesture data, we first coded all types of gestures in order to determine whether the Turkish and Dutch cultures differ in overall gesture rate (per clause). We then focused on the abstract deictic gestures to space that temporally align with the subject referent of each main clause to calculate the proportion of gesturally marked subject referents. Our gesture rate analyses reveal that Turkish speakers overall produce more gestures than Dutch speakers (p < .001), suggesting that Turkish is a relatively high-gesture culture compared to Dutch. Our speech analyses show that both Turkish and Dutch speakers use mainly NPs to re-introduce subject referents and reduced forms for maintained referents (null pronouns in Turkish and overt pronouns in Dutch). Our gesture analyses show that speakers of both languages gestured more with re-introduced than with maintained subject referents (p < .001). However, Turkish speakers gestured more frequently with pronouns than Dutch speakers. Taken together, we show that speakers of both languages organize information structure in discourse in a similar manner and vary the quantity of coding material in their speech and gesture in parallel to mark the co-reference context, a discourse strategy that holds independently of whether the speakers are from a relatively high- or low-gesture culture and regardless of differences in the pronominal systems of their languages. As a novel contribution, however, we show that pragmatics interacts with contextual and linguistic factors in modulating gestures: pragmatically marked forms in speech are more likely to be marked with gestures as well (more gestures with pronouns but not with NPs in Turkish compared to Dutch).
  • Drijvers, L., Ozyurek, A., & Jensen, O. (2016). Gestural enhancement of degraded speech comprehension engages the language network, motor and visual cortex as reflected by a decrease in the alpha and beta band. Poster presented at the 20th International Conference on Biomagnetism (BioMag 2016), Seoul, South Korea.
  • Drijvers, L., Ozyurek, A., & Jensen, O. (2016). Gestural enhancement of degraded speech comprehension engages the language network, motor and visual cortex as reflected by a decrease in the alpha and beta band. Poster presented at the Eighth Annual Meeting of the Society for the Neurobiology of Language (SNL 2016), London, UK.

    Abstract

    Face-to-face communication involves the integration of speech and visual information, such as iconic co-speech gestures. Iconic gestures in particular, which illustrate object attributes, actions and space, can enhance speech comprehension in adverse listening conditions (e.g. Holle et al., 2010). Using magnetoencephalography (MEG), we aimed to identify the networks and the neuronal dynamics associated with the enhancement of (degraded) speech comprehension by gestures. Our central hypothesis was that gestures enhance degraded speech comprehension, and that decreases in alpha and beta power reflect engagement, whereas increases in gamma reflect active processing, in task-relevant networks (Jensen & Mazaheri, 2010; Jokisch & Jensen, 2007). Participants (n = 30) were presented with videos of an actress uttering Dutch action verbs. Speech was presented either clear or degraded by applying noise-vocoding (6-band), and was accompanied by videos of an actor performing an iconic gesture depicting the action (clear speech + gesture, C-SG; degraded speech + gesture, D-SG) or no gesture (clear speech only, C-S; degraded speech only, D-S). We quantified changes in time-frequency representations of oscillatory power as the video unfolded. The sources of the task-specific modulations were identified using a beamformer approach. Gestural enhancement, calculated by comparing (D-SG vs. D-S) to (C-SG vs. C-S), revealed significant interactions between the occurrence of a gesture and speech degradation, particularly in the alpha, beta and gamma bands. Gestural enhancement was reflected by a beta decrease in motor areas, indicative of engagement of the motor system during gesture observation, especially when speech was degraded. A beta band decrease was also observed in the language network, including left inferior frontal gyrus, a region involved in semantic unification operations, and left superior temporal regions. This suggests a higher semantic unification load when a gesture is presented together with degraded versus clear speech. We also observed a gestural enhancement effect in the alpha band in visual areas, suggesting that visual areas are more engaged when a gesture is present, most likely reflecting the allocation of visual attention, especially when speech is degraded, in line with the functional inhibition hypothesis (see Jensen & Mazaheri, 2010). Finally, we observed gamma band effects in left temporal areas, suggesting facilitated binding of speech and gesture into a unified representation, especially when speech is degraded. In conclusion, our results support earlier claims on the recruitment of a left-lateralized network including motor areas, STS/MTG and LIFG in speech-gesture integration and gestural enhancement of speech (see Ozyurek, 2014). Our findings provide novel insight into the neuronal dynamics associated with speech-gesture integration: decreases in alpha and beta power reflect the engagement of the visual and language/motor networks, respectively, whereas a gamma band increase reflects integration in left prefrontal cortex. In future work we will characterize the interaction between these networks by means of functional connectivity analysis.
  • Drijvers, L., Ozyurek, A., & Jensen, O. (2016). Gestural enhancement of degraded speech comprehension engages the language network, motor and visual cortex as reflected by a decrease in the alpha and beta band. Poster presented at the Language in Interaction Summerschool on Human Language: From Genes and Brains to Behavior, Berg en Dal, The Netherlands.
  • Drijvers, L., & Ozyurek, A. (2016). Native language status of the listener modulates the neural integration of speech and gesture in clear and adverse listening conditions. Poster presented at the Eighth Annual Meeting of the Society for the Neurobiology of Language (SNL 2016), London, UK.

    Abstract

    Face-to-face communication consists of integrating speech and visual input, such as co-speech gestures. Iconic gestures (e.g. a drinking gesture) can enhance speech comprehension, especially when speech is difficult to comprehend, such as in noise (e.g. Holle et al., 2010) or in non-native speech comprehension (e.g. Sueyoshi & Hardison, 2005). Previous behavioral and neuroimaging studies have argued that the integration of speech and gestures is stronger when speech intelligibility decreases (e.g. Holle et al., 2010), but that in clear speech, non-native listeners benefit more from gestures than native listeners (Dahl & Ludvigson, 2014; Sueyoshi & Hardison, 2005). So far, the neurocognitive mechanisms of how non-native speakers integrate speech and gestures in adverse listening conditions remain unknown. We investigated whether high-proficient non-native speakers of Dutch make use of iconic co-speech gestures as much as native speakers during clear and degraded speech comprehension. In an EEG study, native (n = 23) and non-native (German, n = 23) speakers of Dutch watched videos of an actress uttering Dutch action verbs. Speech was presented either as clear or degraded by applying noise-vocoding (6-band), and accompanied by a matching or mismatching iconic gesture. This allowed us to calculate both the effects of speech degradation and semantic congruency of the gesture on the N400 component. The N400 was taken as an index of semantic integration effort (Kutas & Federmeier, 2011). In native listeners, N400 amplitude was sensitive to mismatches between speech and gesture and degradation; the most pronounced N400 was found in response to degraded speech and a mismatching gesture (DMM), followed by degraded speech and a matching gesture (DM), clear speech and a mismatching gesture (CMM), and clear speech and a matching gesture (CM) (DMM>DM>CMM>CM, all p < .05). In non-native speakers, we found a difference between CMM and CM but not DMM and DM. 
However, degraded conditions differed from clear conditions (DMM = DM > CMM > CM, all significant comparisons p < .05). Directly comparing native to non-native speakers, the N400 effect (i.e. the difference between CMM and CM / DMM and DM) was greater for non-native speakers in clear speech, but for native speakers in degraded speech. These results provide further evidence for the claim that in clear speech, non-native speakers benefit more from gestural information than native speakers, as indexed by a larger N400 effect for the mismatch manipulation. Both native and non-native speakers show integration effort during degraded speech comprehension. However, native speakers require less effort to recognize auditory cues in degraded speech than non-native speakers, resulting in a larger N400 for degraded speech with a mismatching gesture in natives than in non-natives. Conversely, non-native speakers require more effort to resolve auditory cues when speech is degraded and therefore cannot benefit as much as native speakers from those cues when mapping the semantic information from gesture onto speech. In sum, non-native speakers can benefit from gestural information in speech comprehension more than native listeners, but not when speech is degraded. Our findings suggest that the native language of the listener modulates multimodal semantic integration in adverse listening conditions.
  • Drijvers, L., & Ozyurek, A. (2016). Native language status of the listener modulates the neural integration of speech and gesture in clear and adverse listening conditions. Poster presented at the 2nd Workshop on Psycholinguistic Approaches to Speech Recognition in Adverse Conditions (PASRAC), Nijmegen, The Netherlands.
  • Drijvers, L., & Ozyurek, A. (2016). What do iconic gestures and visible speech contribute to degraded speech comprehension? Poster presented at the Nijmegen Lectures 2016, Nijmegen, The Netherlands.
  • Drijvers, L., & Ozyurek, A. (2016). Visible speech enhanced: What do gestures and lip movements contribute to degraded speech comprehension? Poster presented at the 8th Speech in Noise Workshop (SpiN 2016), Groningen, The Netherlands.
  • Karadöller, D. Z., Sumer, B., & Ozyurek, A. (2016). Effect of language modality on development of spatial cognition and memory. Poster presented at the Language in Interaction Summerschool on Human Language: From Genes and Brains to Behavior, Berg en Dal, The Netherlands.
  • Schubotz, L., Drijvers, L., Holler, J., & Ozyurek, A. (2016). The cocktail party effect revisited in older and younger adults: When do iconic co-speech gestures help? Poster presented at the 8th Speech in Noise Workshop (SpiN 2016), Groningen, The Netherlands.
  • Sumer, B., Zwitserlood, I., & Ozyurek, A. (2016). Hands in motion: Learning to express motion events in a sign and a spoken language. Poster presented at the 12th International Conference on Theoretical Issues in Sign Language Research (TISLR12), Melbourne, Australia.
  • Peeters, D., Snijders, T. M., Hagoort, P., & Ozyurek, A. (2015). The neural integration of pointing gesture and speech in a visual context: An fMRI study. Poster presented at the 7th Annual Society for the Neurobiology of Language Conference (SNL 2015), Chicago, USA.
  • Schubotz, L., Drijvers, L., Holler, J., & Ozyurek, A. (2015). The cocktail party effect revisited in older and younger adults: When do iconic co-speech gestures help? Poster presented at Donders Sessions 2015, Nijmegen, The Netherlands.
  • Dimitrova, D. V., Chu, M., Wang, L., Ozyurek, A., & Hagoort, P. (2014). Beat gestures modulate the processing of focused and non-focused words in context. Poster presented at the Sixth Annual Meeting of the Society for the Neurobiology of Language (SNL 2014), Amsterdam, the Netherlands.

    Abstract

    Information in language is organized according to a principle called information structure: new and important information (focus) is highlighted and distinguished from less important information (non-focus). Most studies so far have been concerned with how focused information is emphasized linguistically and suggest that listeners expect focus to be accented and process it more deeply than non-focus (Wang et al., 2011). Little is known about how listeners deal with non-verbal cues like beat gestures, which also emphasize the words they accompany, similarly to pitch accent. ERP studies suggest that beat gestures facilitate the processing of phonological, syntactic, and semantic aspects of speech (Biau & Soto-Faraco, 2013; Holle et al., 2012; Wang & Chu, 2013). It is unclear whether listeners expect beat gestures to be aligned with the information structure of the message. The present ERP study addresses this question by testing whether beat gestures modulate the processing of accented-focused vs. unaccented-non-focused words in context in a similar way. Participants watched movies with short dialogues and performed a comprehension task. In each dialogue, the answer "He bought the books via Amazon" contained a target word ("books") which was combined with a beat gesture, a control hand movement (e.g., a self-touching movement) or no gesture. Based on the preceding context, the target word was either in focus and accented, when preceded by a question like "Did the student buy the books or the magazines via Amazon?", or in non-focus and unaccented, when preceded by a question like "Did the student buy the books via Amazon or via Marktplaats?". The gestures started 500 ms prior to the target word. All gesture parameters (hand shape, naturalness, emphasis, duration, and gesture-speech alignment) were determined in behavioural tests. ERPs were time-locked to gesture onset to examine gesture effects, and to target word onset for pitch accent effects.
We applied a cluster-based random permutation analysis to test for main effects and gesture-accent interactions in both time-locking procedures. We found that accented words elicited a positive main effect between 300-600 ms post target onset. Words accompanied by a beat gesture or a control movement elicited sustained positivities between 200-1300 ms post gesture onset. These independent effects of pitch accent and beat gesture are in line with previous findings (Dimitrova et al., 2012; Wang & Chu, 2013). We also found an interaction between control gesture and pitch accent (1200-1300 ms post gesture onset), showing that accented words accompanied by a control movement elicited a negativity relative to unaccented words. The present data show that beat gestures do not differentially modulate the processing of accented-focused vs. unaccented-non-focused words. Beat gestures engage a positive and long-lasting neural signature, which appears independent of the information structure of the message. Our study suggests that non-verbal cues like beat gestures play a unique role in emphasizing information in speech.
  • Dimitrova, D. V., Chu, M., Wang, L., Ozyurek, A., & Hagoort, P. (2014). Independent effects of beat gesture and pitch accent on processing words in context. Poster presented at the 20th Architectures and Mechanisms for Language Processing Conference (AMLAP 2014), Edinburgh, UK.
  • Peeters, D., Chu, M., Holler, J., Hagoort, P., & Ozyurek, A. (2014). Behavioral and neurophysiological correlates of communicative intent in the production of pointing gestures. Poster presented at the Annual Meeting of the Society for the Neurobiology of Language [SNL2014], Amsterdam, the Netherlands.
  • Schubotz, L., Holler, J., & Ozyurek, A. (2014). The impact of age and mutually shared knowledge on multi-modal utterance design. Poster presented at the 6th International Society for Gesture Studies Congress, San Diego, California, USA.

    Abstract

    Previous work suggests that the communicative behavior of older adults differs systematically from that of younger adults. For instance, older adults produce significantly fewer representational gestures than younger adults in monologue description tasks (Cohen & Borsoi, 1996; Feyereisen & Havard, 1999). In addition, older adults seem to have more difficulty than younger adults in establishing common ground (i.e. knowledge, assumptions, and beliefs mutually shared between a speaker and an addressee, Clark, 1996) in speech in a referential communication paradigm (Horton & Spieler, 2007). Here we investigated whether older adults take such common ground into account when designing multi-modal utterances for an addressee. The present experiment compared the speech and co-speech gesture production of two age groups (young: 20-30 years, old: 65-75 years) in an interactive setting, manipulating the amount of common ground between participants.

    Thirty-two pairs of naïve participants (16 young, 16 old, same-age pairs only) took part in the experiment. One of the participants (the speaker) narrated short cartoon stories to the other participant (the addressee) (task 1) and gave instructions on how to assemble a 3D model from wooden building blocks (task 2). In both tasks, we varied the amount of information mutually shared between the two participants (common ground manipulation). Additionally, we obtained a range of cognitive measures from the speaker: verbal working memory (operation span task), visual working memory (visual patterns test and Corsi block test), processing speed and executive functioning (trail making test parts A + B), and a semantic fluency measure (animal naming task). Preliminary analysis of about half the final sample suggests that, overall, speakers use fewer words per narration/instruction when there is shared knowledge with the addressee, in line with previous findings (e.g. Clark & Wilkes-Gibbs, 1986). This effect is larger for young than for old adults, potentially indicating that older adults have more difficulty taking common ground into account when formulating utterances. Further, representational co-speech gestures were produced at the same rate by both age groups regardless of common ground condition in the narration task (in line with Campisi & Özyürek, 2013). In the building block task, however, the young adults tended to gesture at a higher rate in the common ground condition, suggesting that they rely more on the visual modality here (cf. Holler & Wilkin, 2009). The same trend was not found for the old adults. Within the next three months, we will extend our analysis a) by taking a wider range of gesture types (interactive gestures, beats) into account and b) by looking at qualitative features of speech (information content) and co-speech gestures (size, shape, timing). Finally, we will correlate the resulting data with the data from the cognitive tests.

    This study will contribute to a better understanding of the communicative strategies of a growing aging population, as well as to the body of research on co-speech gesture use in addressee design. It also addresses the relationship between cognitive abilities and co-speech gesture production, potentially informing existing models of co-speech gesture production.
  • Holler, J., Schubotz, L., Kelly, S., Schuetze, M., Hagoort, P., & Ozyurek, A. (2013). Here's not looking at you, kid! Unaddressed recipients benefit from co-speech gestures when speech processing suffers. Poster presented at the 35th Annual Meeting of the Cognitive Science Society (CogSci 2013), Berlin, Germany.
  • Peeters, D., Chu, M., Holler, J., Ozyurek, A., & Hagoort, P. (2013). The influence of communicative intent on the form of pointing gestures. Poster presented at the Fifth Joint Action Meeting (JAM5), Berlin, Germany.
  • Holler, J., Kelly, S., Hagoort, P., & Ozyurek, A. (2012). Overhearing gesture: The influence of eye gaze direction on the comprehension of iconic gestures. Poster presented at the Social Cognition, Engagement, and the Second-Person-Perspective Conference, Cologne, Germany.
  • Holler, J., Kelly, S., Hagoort, P., & Ozyurek, A. (2012). Overhearing gesture: The influence of eye gaze direction on the comprehension of iconic gestures. Poster presented at the EPS workshop 'What if... the study of language started from the investigation of signed, rather than spoken language?', London, UK.
  • Kokal, I., Holler, J., Ozyurek, A., Kelly, S., Toni, I., & Hagoort, P. (2012). Eye'm talking to you: Speakers' gaze direction modulates the integration of speech and iconic gestures in the right MTG. Poster presented at the 4th Annual Neurobiology of Language Conference (NLC 2012), San Sebastian, Spain.
  • Kokal, I., Holler, J., Ozyurek, A., Kelly, S., Toni, I., & Hagoort, P. (2012). Eye'm talking to you: The role of the Middle Temporal Gyrus in the integration of gaze, gesture and speech. Poster presented at the Social Cognition, Engagement, and the Second-Person-Perspective Conference, Cologne, Germany.
  • Peeters, D., Ozyurek, A., & Hagoort, P. (2012). Behavioral and neural correlates of deictic reference. Poster presented at the 18th Annual Conference on Architectures and Mechanisms for Language Processing [AMLaP 2012], Riva del Garda, Italy.
  • Peeters, D., Ozyurek, A., & Hagoort, P. (2012). The comprehension of exophoric reference: An ERP study. Poster presented at the Fourth Annual Neurobiology of Language Conference (NLC), San Sebastian, Spain.

    Abstract

    An important property of language is that it can be used exophorically, for instance in referring to entities in the extra-linguistic context of a conversation using demonstratives such as “this” and “that”. Despite large-scale cross-linguistic descriptions of demonstrative systems, the mechanisms underlying the comprehension of such referential acts are poorly understood. Therefore, we investigated the neural mechanisms underlying demonstrative comprehension in situated contexts. Twenty-three participants were presented on a computer screen with pictures containing a speaker and two similar objects. One of the objects was close to the speaker, whereas the other was either distal from the speaker but optically close to the participant (“sagittal orientation”), or distal from both (“lateral orientation”). The speaker pointed to one object, and participants heard sentences spoken by the speaker containing a proximal (“this”) or distal (“that”) demonstrative, and a correct or incorrect noun-label (i.e., a semantic violation). EEG was recorded continuously and time-locked to the onset of demonstratives and nouns. Semantic violations on the noun-label yielded a significant, wide-spread N400 effect, regardless of the objects’ orientation. Comparing the comprehension of proximal to distal demonstratives in the sagittal orientation yielded a similar N400 effect, both for the close and the far referent. Interestingly, no demonstrative effect was found when objects were oriented laterally. Our findings suggest a similar time-course for demonstrative and noun-label processing. However, the comprehension of demonstratives depends on the spatial orientation of potential referents, whereas noun-label comprehension does not. These findings reveal new insights about the mechanisms underlying everyday demonstrative comprehension.
  • Peeters, D., & Ozyurek, A. (2012). The role of contextual factors in the use of demonstratives: Differences between Turkish and Dutch. Poster presented at The IMPRS Relations in Relativity Workshop, Max Planck Institute for Psycholinguistics, Nijmegen, the Netherlands.
  • Peeters, D., & Ozyurek, A. (2011). Demonstrating the importance of joint attention in the use of demonstratives: The case of Turkish. Poster presented at The 4th Biennial Conference of Experimental Pragmatics [XPRAG 2011], Barcelona, Spain.
  • Brown, A., Ozyurek, A., Allen, S., Kita, S., Ishizuka, T., & Furman, R. (2004). Does event structure influence children's motion event expressions? Poster presented at the 29th Boston University Conference on Language Development, Boston, MA, USA.

    Abstract

    This study focuses on understanding of event structure, in particular the relationship between Manner and Path. Narratives were elicited from twenty 3-year-olds and twenty adults using 6 animated motion events that were divided into two groups based on Goldberg's (1997) distinction between causal (Manner-inherent; e.g. roll down) and non-causal (Manner-incidental; e.g. spin while going up) relationships between Manner and Path. The data revealed that adults and children are sensitive to differences between inherent and incidental Manner. Adults significantly reduced use of canonical syntactic constructions for Manner-incidental events, employing other constructions. Children, however, while significantly reducing use of canonical syntactic constructions for Manner-incidental events, did not exploit alternative constructions. Instead, they omitted Manner from their speech altogether. A follow-up lexical task showed that children had knowledge of all omitted Manners. Given that this strategic omission of Manner is not lexically motivated, the results are discussed in relation to implications for pragmatics and memory load.
