Asli Ozyurek

Presentations

  • Sumer, B., Zwitserlood, I., & Ozyurek, A. (2016). Hands in motion: Learning to express motion events in a sign and a spoken language. Poster presented at the 12th International Conference on Theoretical Issues in Sign Language Research (TISLR12), Melbourne, Australia.
  • Azar, Z., Backus, A., & Ozyurek, A. (2015). Multimodal reference tracking in monolingual and bilingual discourse. Talk presented at the Nijmegen-Tilburg Multimodality Workshop. Tilburg, The Netherlands. 2015-10-22.
  • Drijvers, L., & Ozyurek, A. (2015). Visible speech enhanced: What do gestures and lips contribute to speech comprehension in noise? Talk presented at the Nijmegen-Tilburg Multi-modality workshop. Tilburg, The Netherlands. 2015-10-22.
  • Ozyurek, A. (2015). The role of gesture in language evolution: Beyond the gesture-first hypotheses. Talk presented at the SMART Cognitive Science: the Amsterdam Conference – Workshop, Evolution of Language: The co-evolution of biology and culture. Amsterdam, the Netherlands. 2015-03-25 - 2015-03-26.

    Abstract

    It has been a popular view that gesture preceded and paved the way for the evolution of (spoken) language (e.g., Corballis, Tomasello, Arbib). However, these views do not take into account the recent findings on the neural and cognitive infrastructure of how modern humans (adults and children) use gestures in various communicative contexts. Based on this current knowledge, I will revisit gesture-first theories of language evolution and discuss alternatives more compatible with the multimodal nature of modern human language.
  • Peeters, D., Snijders, T. M., Hagoort, P., & Ozyurek, A. (2015). The role of left inferior frontal gyrus in the integration of pointing gestures and speech. Talk presented at the 4th GESPIN - Gesture & Speech in Interaction Conference. Nantes, France. 2015-09-02 - 2015-09-04.
  • Peeters, D., Snijders, T. M., Hagoort, P., & Ozyurek, A. (2015). The neural integration of pointing gesture and speech in a visual context: An fMRI study. Poster presented at the 7th Annual Society for the Neurobiology of Language Conference (SNL 2015), Chicago, USA.
  • Schubotz, L., Holler, J., & Ozyurek, A. (2015). Age-related differences in multi-modal audience design: Young, but not old speakers, adapt speech and gestures to their addressee's knowledge. Talk presented at the 4th GESPIN - Gesture & Speech in Interaction Conference. Nantes, France. 2015-09-02 - 2015-09-04.
  • Schubotz, L., Drijvers, L., Holler, J., & Ozyurek, A. (2015). The cocktail party effect revisited in older and younger adults: When do iconic co-speech gestures help? Poster presented at Donders Sessions 2015, Nijmegen, The Netherlands.
  • Schubotz, L., Drijvers, L., Holler, J., & Ozyurek, A. (2015). The cocktail party effect revisited in older and younger adults: When do iconic co-speech gestures help? Talk presented at Donders Discussions 2015. Nijmegen, The Netherlands. 2015-11-05.
  • Slonimska, A., Ozyurek, A., & Campisi, E. (2015). Markers of communicative relevance of gesture. Talk presented at the Nijmegen-Tilburg Multi-modality workshop. Tilburg, The Netherlands. 2015-10-24.
  • Slonimska, A., Ozyurek, A., & Campisi, E. (2015). Ostensive signals: Markers of communicative relevance of gesture during demonstration to adults and children. Talk presented at the 4th GESPIN - Gesture & Speech in Interaction Conference. Nantes, France. 2015-09-02 - 2015-09-04.
  • Azar, Z., Backus, A., & Ozyurek, A. (2014). Discourse management: Reference tracking of subject referents in speech and gesture in Turkish narratives. Talk presented at the 17th International Conference on Turkish Linguistics. Rouen, France. 2014-09-03 - 2014-09-05.
  • Dimitrova, D. V., Chu, M., Wang, L., Ozyurek, A., & Hagoort, P. (2014). Beat gestures modulate the processing of focused and non-focused words in context. Poster presented at the Sixth Annual Meeting of the Society for the Neurobiology of Language (SNL 2014), Amsterdam, the Netherlands.

    Abstract

    Information in language is organized according to a principle called information structure: new and important information (focus) is highlighted and distinguished from less important information (non-focus). Most studies so far have been concerned with how focused information is emphasized linguistically and suggest that listeners expect focus to be accented and process it more deeply than non-focus (Wang et al., 2011). Little is known about how listeners deal with non-verbal cues like beat gestures, which also emphasize the words they accompany, similarly to pitch accent. ERP studies suggest that beat gestures facilitate the processing of phonological, syntactic, and semantic aspects of speech (Biau & Soto-Faraco, 2013; Holle et al., 2012; Wang & Chu, 2013). It is unclear whether listeners expect beat gestures to be aligned with the information structure of the message. The present ERP study addresses this question by testing whether beat gestures modulate the processing of accented-focused vs. unaccented-nonfocused words in context in a similar way. Participants watched movies with short dialogues and performed a comprehension task. In each dialogue, the answer “He bought the books via Amazon” contained a target word (“books”) which was combined with a beat gesture, a control hand movement (e.g., a self-touching movement) or no gesture. Based on the preceding context, the target word was either in focus and accented, when preceded by a question like “Did the student buy the books or the magazines via Amazon?”, or in non-focus and unaccented, when preceded by a question like “Did the student buy the books via Amazon or via Marktplaats?”. The gestures started 500 ms prior to the target word. All gesture parameters (hand shape, naturalness, emphasis, duration, and gesture-speech alignment) were determined in behavioural tests. ERPs were time-locked to gesture onset to examine gesture effects, and to target word onset for pitch accent effects. We applied a cluster-based random permutation analysis to test for main effects and gesture-accent interactions in both time-locking procedures. We found that accented words elicited a positive main effect between 300-600 ms post target onset. Words accompanied by a beat gesture and a control movement elicited sustained positivities between 200-1300 ms post gesture onset. These independent effects of pitch accent and beat gesture are in line with previous findings (Dimitrova et al., 2012; Wang & Chu, 2013). We also found an interaction between control gesture and pitch accent (1200-1300 ms post gesture onset), showing that accented words accompanied by a control movement elicited a negativity relative to unaccented words. The present data show that beat gestures do not differentially modulate the processing of accented-focused vs. unaccented-nonfocused words. Beat gestures engage a positive and long-lasting neural signature, which appears independent from the information structure of the message. Our study suggests that non-verbal cues like beat gestures play a unique role in emphasizing information in speech.
  • Dimitrova, D. V., Chu, M., Wang, L., Ozyurek, A., & Hagoort, P. (2014). Independent effects of beat gesture and pitch accent on processing words in context. Poster presented at the 20th Architectures and Mechanisms for Language Processing Conference (AMLAP 2014), Edinburgh, UK.
  • Kelly, S., Healey, M., Ozyurek, A., & Holler, J. (2014). The integration of gestures and actions with speech: Should we welcome the empty-handed to language comprehension? Talk presented at the 6th International Society for Gesture Studies Congress. San Diego, California, USA. 2014-07-08 - 2014-07-11.

    Abstract

    Background: Gesture and speech are theorized to form a single integrated system of meaning during language production (McNeill, 1992), and evidence is mounting that this integration applies to language comprehension as well (Kelly, Ozyurek & Maris, 2010). However, it is unknown whether gesture is uniquely integrated with speech or is processed like any other manual action. To explore this issue, we compared the extent to which speech is integrated with hand gestures versus actual actions on objects during comprehension. Method: The present study employed a priming paradigm in two experiments. In Experiment 1, subjects watched multimodal videos that presented auditory (words) and visual (gestures and actions on objects) information. Half the subjects related the audio information to a written prime presented before the video, and the other half related the visual information to the written prime. For half of the multimodal video stimuli, the audio and visual information was congruent, and for the other half, incongruent. The task was to press one button if the written prime was the same as the visual (31 subjects) or audio (31 subjects) information in the target video or another button if different. RT and accuracy were recorded. In Experiment 2, we reversed the priming sequence with a different set of 18 subjects. Now the video became the prime and the written verb followed as the target, but the task was the same with one difference: to indicate whether the written target was related or unrelated to only the audio information (speech) in the preceding video prime. ERPs were recorded to the written targets. Results: In Experiment 1, subjects in both the audio and visual target tasks were less accurate when processing stimuli in which gestures and actions were incongruent versus congruent with speech, F(1, 60) = 22.90, p < .001, but this effect was less prominent for speech-action than for speech-gesture stimuli. However, subjects were more accurate when identifying actions versus gestures, F(1, 60) = 8.03, p = .006. In Experiment 2, there were two early ERP effects. When primed with gesture, incongruent primes produced a larger P1, t(17) = 3.75, p = 0.002, and P2, t(17) = 3.02, p = 0.008, to the target words than the congruent condition in the grand-averaged ERPs (reflecting early perceptual and attentional processes). However, there were no significant differences between congruent and incongruent conditions when primed with action. Discussion: The incongruency effect replicates and extends previous work by Kelly et al. (2010) by showing not only a bi-directional influence of gesture and speech, but also of action and speech. In addition, the results show that while actions are easier to process than gestures (Exp. 1), gestures may be more tightly tied to the processing of accompanying speech (Exps. 1 & 2). These results suggest that even though gestures are perceptually less informative than actions, they may be treated as communicatively more informative in relation to the accompanying speech. In this way, the two types of visual information might have different status in language comprehension.
  • Peeters, D., Chu, M., Holler, J., Hagoort, P., & Ozyurek, A. (2014). Behavioral and neurophysiological correlates of communicative intent in the production of pointing gestures. Poster presented at the Annual Meeting of the Society for the Neurobiology of Language [SNL2014], Amsterdam, the Netherlands.
  • Peeters, D., Chu, M., Holler, J., Hagoort, P., & Ozyurek, A. (2014). Behavioral and neurophysiological correlates of communicative intent in the production of pointing gestures. Talk presented at the 6th Conference of the International Society for Gesture Studies (ISGS6). San Diego, CA, USA. 2014-07-08 - 2014-07-11.
  • Peeters, D., Azar, Z., & Ozyurek, A. (2014). The interplay between joint attention, physical proximity, and pointing gesture in demonstrative choice. Talk presented at the 36th Annual Meeting of the Cognitive Science Society (CogSci2014). Québec City, Canada. 2014-07-23 - 2014-07-26.
  • Schubotz, L., Holler, J., & Ozyurek, A. (2014). The impact of age and mutually shared knowledge on multi-modal utterance design. Poster presented at the 6th International Society for Gesture Studies Congress, San Diego, California, USA.

    Abstract

    Previous work suggests that the communicative behavior of older adults differs systematically from that of younger adults. For instance, older adults produce significantly fewer representational gestures than younger adults in monologue description tasks (Cohen & Borsoi, 1996; Feyereisen & Havard, 1999). In addition, older adults seem to have more difficulty than younger adults in establishing common ground (i.e. knowledge, assumptions, and beliefs mutually shared between a speaker and an addressee, Clark, 1996) in speech in a referential communication paradigm (Horton & Spieler, 2007). Here we investigated whether older adults take such common ground into account when designing multi-modal utterances for an addressee. The present experiment compared the speech and co-speech gesture production of two age groups (young: 20-30 years, old: 65-75 years) in an interactive setting, manipulating the amount of common ground between participants. Thirty-two pairs of naïve participants (16 young, 16 old, same-age pairs only) took part in the experiment. One of the participants (the speaker) narrated short cartoon stories to the other participant (the addressee) (task 1) and gave instructions on how to assemble a 3D model from wooden building blocks (task 2). In both tasks, we varied the amount of information mutually shared between the two participants (common ground manipulation). Additionally, we also obtained a range of cognitive measures from the speaker: verbal working memory (operation span task), visual working memory (visual patterns test and Corsi block test), processing speed and executive functioning (trail making test parts A + B) and a semantic fluency measure (animal naming task). Preliminary data analysis of about half the final sample suggests that overall, speakers use fewer words per narration/instruction when there is shared knowledge with the addressee, in line with previous findings (e.g. Clark & Wilkes-Gibbs, 1986). This effect is larger for young than for old adults, potentially indicating that older adults have more difficulties taking common ground into account when formulating utterances. Further, representational co-speech gestures were produced at the same rate by both age groups regardless of common ground condition in the narration task (in line with Campisi & Özyürek, 2013). In the building block task, however, the trend for the young adults is to gesture at a higher rate in the common ground condition, suggesting that they rely more on the visual modality here (cf. Holler & Wilkin, 2009). The same trend could not be found for the old adults. Within the next three months, we will extend our analysis a) by taking a wider range of gesture types (interactive gestures, beats) into account and b) by looking at qualitative features of speech (information content) and co-speech gestures (size, shape, timing). Finally, we will correlate the resulting data with the data from the cognitive tests. This study will contribute to a better understanding of the communicative strategies of a growing aging population as well as to the body of research on co-speech gesture use in addressee design. It also addresses the relationship between cognitive abilities on the one hand and co-speech gesture production on the other hand, potentially informing existing models of co-speech gesture production.
  • Holler, J., Schubotz, L., Kelly, S., Hagoort, P., & Ozyurek, A. (2013). Multi-modal language comprehension as a joint activity: The influence of eye gaze on the processing of speech and co-speech gesture in multi-party communication. Talk presented at the 5th Joint Action Meeting. Berlin. 2013-07-26 - 2013-07-29.

    Abstract

    Traditionally, language comprehension has been studied as a solitary and unimodal activity. Here, we investigate language comprehension as a joint activity, i.e., in a dynamic social context involving multiple participants in different roles with different perspectives, while taking into account the multimodal nature of face-to-face communication. We simulated a triadic communication context involving a speaker alternating her gaze between two different recipients, conveying information not only via speech but gesture as well. Participants thus viewed video-recorded speech-only or speech+gesture utterances referencing objects (e.g., “he likes the laptop” + TYPING-ON-LAPTOP gesture) when being addressed (direct gaze) or unaddressed (averted gaze). The video clips were followed by two object images (laptop, towel). Participants’ task was to choose the object that matched the speaker’s message (i.e., laptop). Unaddressed recipients responded significantly more slowly than addressees for speech-only utterances. However, perceiving the same speech accompanied by gestures sped them up to levels identical to those of addressees. Thus, when speech processing suffers due to being unaddressed, gestures become more prominent and boost comprehension of a speaker’s spoken message. Our findings illuminate how participants process multimodal language and how this process is influenced by eye gaze, an important social cue facilitating coordination in the joint activity of conversation.
  • Holler, J., Schubotz, L., Kelly, S., Schuetze, M., Hagoort, P., & Ozyurek, A. (2013). Here's not looking at you, kid! Unaddressed recipients benefit from co-speech gestures when speech processing suffers. Poster presented at the 35th Annual Meeting of the Cognitive Science Society (CogSci 2013), Berlin, Germany.
  • Holler, J., Kelly, S., Hagoort, P., Schubotz, L., & Ozyurek, A. (2013). Speakers' social eye gaze modulates addressed and unaddressed recipients' comprehension of gesture and speech in multi-party communication. Talk presented at the 5th Biennial Conference of Experimental Pragmatics (XPRAG 2013). Utrecht, The Netherlands. 2013-09-04 - 2013-09-06.
  • Ortega, G., & Ozyurek, A. (2013). Gesture-sign interface in hearing non-signers' first exposure to sign. Talk presented at the Tilburg Gesture Research Meeting [TiGeR 2013]. Tilburg, the Netherlands. 2013-06-19 - 2013-06-21.

    Abstract

    Natural sign languages and gestures are complex communicative systems that allow the incorporation of features of a referent into their structure. They differ, however, in that signs are more conventionalised because they consist of meaningless phonological parameters. There is some evidence that, despite non-signers finding iconic signs more memorable, they can have more difficulty articulating their exact phonological components. In the present study, hearing non-signers took part in a sign repetition task in which they had to imitate as accurately as possible a set of iconic and arbitrary signs. Their renditions showed that iconic signs were articulated significantly less accurately than arbitrary signs. Participants were recalled six months later to take part in a sign generation task. In this task, participants were shown the English translation of the iconic signs they had imitated six months prior. For each word, participants were asked to generate a sign (i.e., an iconic gesture). The handshapes produced in the sign repetition and sign generation tasks were compared to detect instances in which both renditions presented the same configuration. There was a significant correlation between articulation accuracy in the sign repetition task and handshape overlap. These results suggest some form of gestural interference in the production of iconic signs by hearing non-signers. We also suggest that in some instances non-signers may deploy their own conventionalised gesture when producing some iconic signs. These findings are interpreted as evidence that non-signers process iconic signs as gestures and that, in production, only when sign and gesture have overlapping features will they be capable of producing the phonological components of signs accurately.
  • Peeters, D., Chu, M., Holler, J., Ozyurek, A., & Hagoort, P. (2013). Getting to the point: The influence of communicative intent on the form of pointing gestures. Talk presented at the 35th Annual Meeting of the Cognitive Science Society (CogSci 2013). Berlin, Germany. 2013-08-01 - 2013-08-03.
  • Peeters, D., Chu, M., Holler, J., Ozyurek, A., & Hagoort, P. (2013). The influence of communicative intent on the form of pointing gestures. Poster presented at the Fifth Joint Action Meeting (JAM5), Berlin, Germany.
  • Holler, J., Kelly, S., Hagoort, P., & Ozyurek, A. (2012). Overhearing gesture: The influence of eye gaze direction on the comprehension of iconic gestures. Poster presented at the Social Cognition, Engagement, and the Second-Person-Perspective Conference, Cologne, Germany.
  • Holler, J., Kelly, S., Hagoort, P., & Ozyurek, A. (2012). Overhearing gesture: The influence of eye gaze direction on the comprehension of iconic gestures. Poster presented at the EPS workshop 'What if... the study of language started from the investigation of signed, rather than spoken language?', London, UK.
  • Holler, J., Kelly, S., Hagoort, P., & Ozyurek, A. (2012). The influence of gaze direction on the comprehension of speech and gesture in triadic communication. Talk presented at the 18th Annual Conference on Architectures and Mechanisms for Language Processing (AMLaP 2012). Riva del Garda, Italy. 2012-09-06 - 2012-09-08.

    Abstract

    Human face-to-face communication is a multi-modal activity. Recent research has shown that, during comprehension, recipients integrate information from speech with that contained in co-speech gestures (e.g., Kelly et al., 2010). The current studies take this research one step further by investigating the influence of another modality, namely eye gaze, on speech and gesture comprehension, to advance our understanding of language processing in more situated contexts. In spite of the large body of literature on processing of eye gaze, very few studies have investigated its processing in the context of communication (but see, e.g., Staudte & Crocker, 2011 for an exception). In two studies we simulated a triadic communication context in which a speaker alternated their gaze between our participant and another (alleged) participant. Participants thus viewed speech-only or speech + gesture utterances either in the role of addressee (direct gaze) or in the role of unaddressed recipient (averted gaze). In Study 1, participants (N = 32) viewed video-clips of a speaker producing speech-only (e.g. “she trained the horse”) or speech+gesture utterances conveying complementary information (e.g. “she trained the horse”+WHIPPING gesture). Participants were asked to judge whether a word displayed on screen after each video-clip matched what the speaker said or not. In half of the cases, the word matched a previously uttered word, requiring a “yes” answer. In all other cases, the word matched the meaning of the gesture the actor had performed, thus requiring a ‘no’ answer.
  • Holler, J., Kelly, S., Hagoort, P., & Ozyurek, A. (2012). When gestures catch the eye: The influence of gaze direction on co-speech gesture comprehension in triadic communication. Talk presented at the 5th Conference of the International Society for Gesture Studies (ISGS 5). Lund, Sweden. 2012-07-24 - 2012-07-27.
  • Holler, J., Kelly, S., Hagoort, P., & Ozyurek, A. (2012). When gestures catch the eye: The influence of gaze direction on co-speech gesture comprehension in triadic communication. Talk presented at the 34th Annual Meeting of the Cognitive Science Society (CogSci 2012). Sapporo, Japan. 2012-08-01 - 2012-08-04.
  • Kelly, S., Ozyurek, A., Healey, M., & Holler, J. (2012). The communicative influence of gesture and action during speech comprehension: Gestures have the upper hand. Talk presented at the Acoustics 2012 Hong Kong Conference and Exhibition. Hong Kong. 2012-05-13 - 2012-05-18.
  • Kokal, I., Holler, J., Ozyurek, A., Kelly, S., Toni, I., & Hagoort, P. (2012). Eye'm talking to you: Speakers' gaze direction modulates the integration of speech and iconic gestures in the right MTG. Poster presented at the 4th Annual Neurobiology of Language Conference (NLC 2012), San Sebastian, Spain.
  • Kokal, I., Holler, J., Ozyurek, A., Kelly, S., Toni, I., & Hagoort, P. (2012). Eye'm talking to you: The role of the Middle Temporal Gyrus in the integration of gaze, gesture and speech. Poster presented at the Social Cognition, Engagement, and the Second-Person-Perspective Conference, Cologne, Germany.
  • Peeters, D., Ozyurek, A., & Hagoort, P. (2012). Behavioral and neural correlates of deictic reference. Poster presented at the 18th Annual Conference on Architectures and Mechanisms for Language Processing [AMLaP 2012], Riva del Garda, Italy.
  • Peeters, D., Ozyurek, A., & Hagoort, P. (2012). The comprehension of exophoric reference: An ERP study. Poster presented at the Fourth Annual Neurobiology of Language Conference (NLC), San Sebastian, Spain.

    Abstract

    An important property of language is that it can be used exophorically, for instance in referring to entities in the extra-linguistic context of a conversation using demonstratives such as “this” and “that”. Despite large-scale cross-linguistic descriptions of demonstrative systems, the mechanisms underlying the comprehension of such referential acts are poorly understood. Therefore, we investigated the neural mechanisms underlying demonstrative comprehension in situated contexts. Twenty-three participants were presented on a computer screen with pictures containing a speaker and two similar objects. One of the objects was close to the speaker, whereas the other was either distal from the speaker but optically close to the participant (“sagittal orientation”), or distal from both (“lateral orientation”). The speaker pointed to one object, and participants heard sentences spoken by the speaker containing a proximal (“this”) or distal (“that”) demonstrative, and a correct or incorrect noun-label (i.e., a semantic violation). EEG was recorded continuously and time-locked to the onset of demonstratives and nouns. Semantic violations on the noun-label yielded a significant, wide-spread N400 effect, regardless of the objects’ orientation. Comparing the comprehension of proximal to distal demonstratives in the sagittal orientation yielded a similar N400 effect, both for the close and the far referent. Interestingly, no demonstrative effect was found when objects were oriented laterally. Our findings suggest a similar time-course for demonstrative and noun-label processing. However, the comprehension of demonstratives depends on the spatial orientation of potential referents, whereas noun-label comprehension does not. These findings reveal new insights about the mechanisms underlying everyday demonstrative comprehension.
  • Peeters, D., & Ozyurek, A. (2012). The role of contextual factors in the use of demonstratives: Differences between Turkish and Dutch. Talk presented at the 6th Lodz Symposium: New Developments in Linguistic Pragmatics. Lodz, Poland. 2012-05-26 - 2012-05-28.

    Abstract

    An important feature of language is that it enables human beings to refer to entities, actions and events in the external world. In everyday interaction, one can refer to concrete entities in the extra-linguistic physical environment of a conversation by using demonstratives such as this and that. Traditionally, the choice of which demonstrative to use has been explained in terms of the distance of the referent [1]. In contrast, recent observational studies in different languages have suggested that factors such as joint attention also play an important role in demonstrative choice [2][3]. These claims have never been tested in a controlled setting and across different languages. Therefore, we tested demonstrative choice in a controlled elicitation task in two languages that previously have only been studied observationally: Turkish and Dutch. In our study, twenty-nine Turkish and twenty-four Dutch participants were presented with pictures including a speaker, an addressee and an object (the referent). They were asked which demonstrative they would use in the depicted situations. Besides the distance of the referent, we manipulated the addressee’s focus of visual attention, the presence of a pointing gesture, and the sentence type. A repeated measures analysis of variance showed that, in addition to the distance of the referent, the focus of attention of the addressee on the referent and the type of sentence in which a demonstrative was used influenced demonstrative choice in Turkish. In Dutch, only the distance of the referent and the sentence type influenced demonstrative choice. Our cross-linguistic findings show that in different languages, people take into account both similar and different aspects of triadic situations to select a demonstrative. These findings reject descriptions of demonstrative systems that explain demonstrative choice in terms of one single variable, such as distance. The controlled study of referring acts in triadic situations is a valuable extension to observational research, in that it gives us the possibility to look more specifically into the interplay between language, attention, and other contextual factors influencing how people refer to entities in the world. References: [1] Levinson, S. C. (1983). Pragmatics. Cambridge: Cambridge University Press. [2] Diessel, H. (2006). Demonstratives, joint attention and the emergence of grammar. Cognitive Linguistics 17:4. 463–89. [3] Küntay, A. C., & Özyürek, A. (2006). Learning to use demonstratives in conversation: What do language specific strategies in Turkish reveal? Journal of Child Language 33. 303–320.
  • Peeters, D., & Ozyurek, A. (2012). The role of contextual factors in the use of demonstratives: Differences between Turkish and Dutch. Poster presented at The IMPRS Relations in Relativity Workshop, Max Planck Institute for Psycholinguistics, Nijmegen, the Netherlands.
  • Ozyurek, A. (2011). Language in our hands: The role of the body in language, cognition and communication [Inaugural lecture]. Talk presented at Radboud University Nijmegen. Nijmegen, The Netherlands. 2011-05-26.
  • Peeters, D., & Ozyurek, A. (2011). Demonstrating the importance of joint attention in the use of demonstratives: The case of Turkish. Poster presented at The 4th Biennial Conference of Experimental Pragmatics [XPRAG 2011], Barcelona, Spain.
  • Nyst, V., De Vos, C., Perniss, P. M., & Ozyurek, A. (2007). The typology of space in sign languages: Developing a descriptive format for cross-linguistic comparison. Talk presented at Cross-Linguistic Research on Sign Languages 2. Max Planck Institute for Psycholinguistics, Nijmegen. 2007-04-13.
  • Brown, A., Ozyurek, A., Allen, S., Kita, S., Ishizuka, T., & Furman, R. (2004). Does event structure influence children's motion event expressions? Poster presented at the 29th Boston University Conference on Language Development, Boston.

    Abstract

    This study focuses on the understanding of event structure, in particular the relationship between Manner and Path. Narratives were elicited from twenty 3-year-olds and twenty adults using 6 animated motion events that were divided into two groups based on Goldberg's (1997) distinction between causal (Manner-inherent; e.g. roll down) and non-causal (Manner-incidental; e.g. spin while going up) relationships between Manner and Path. The data revealed that adults and children are sensitive to differences between inherent and incidental Manner. Adults significantly reduced use of canonical syntactic constructions for Manner-incidental events, employing other constructions. Children, however, while significantly reducing use of canonical syntactic constructions for Manner-incidental events, did not exploit alternative constructions. Instead, they omitted Manner from their speech altogether. A follow-up lexical task showed that children had knowledge of all omitted Manners. Given that this strategic omission of Manner is not lexically motivated, the results are discussed in relation to implications for pragmatics and memory load.
