Judith Holler

Presentations

  • Ter Bekke, M., Drijvers, L., & Holler, J. (2020). The predictive potential of hand gestures during conversation: An investigation of the timing of gestures in relation to speech. Talk presented at Gesture and Speech in Interaction Conference. Stockholm, Sweden. 2020-09-07 - 2020-09-09.
  • Holler, J. (2019). Multimodal signalling for coordination in conversational interaction [keynote]. Talk presented at 7th conference of the Scandinavian Association for Language and Cognition (SALC7). Aarhus, Denmark. 2019-05-22 - 2019-05-24.
  • Holler, J. (2019). Visual bodily signals for coordination in conversation. [invited talk] Contribution to the symposium 'Beyond words: Nonliteral and nonverbal aspects of dialogue'. Talk presented at the 9th Annual Meeting of the Society for Text & Discourse (ST&D9). New York, NY, USA. 2019-07-09 - 2019-07-11.
  • Holler, J. (2019). Timing in speech, gesture and interaction. [keynote]. Talk presented at the Lorentz Centre workshop: Synchrony and Rhythmic Interaction: From Neurons to Ecology. Leiden, The Netherlands. 2019-07-29 - 2019-08-02.
  • Blokpoel, M., Dingemanse, M., Kachergis, G., Bögels, S., Drijvers, L., Eijk, L., Ernestus, M., De Haas, N., Holler, J., Levinson, S. C., Lui, R., Milivojevic, B., Neville, D., Ozyurek, A., Rasenberg, M., Schriefers, H., Trujillo, J. P., Winner, T., Toni, I., & Van Rooij, I. (2018). Ambiguity helps higher-order pragmatic reasoners communicate. Talk presented at the 14th biannual conference of the German Society for Cognitive Science, GK (KOGWIS 2018). Darmstadt, Germany. 2018-09-03 - 2018-09-06.
  • Holler, J. (2018). Coordinating minds and social interaction with the body. [keynote]. Talk presented at the 8th International Society for Gesture Studies conference (ISGS). Cape Town, South Africa. 2018-07-04 - 2018-07-08.
  • Holler, J. (2018). Multimodal language in interaction. [invited talk]. Talk presented at the Department of Translation and Language Sciences, University Pompeu Fabra. Barcelona, Spain. 2018-06-27.
  • Holler, J. (2018). Multimodal pragmatics: Language and the body in interaction. [keynote]. Talk presented at the 10th Dubrovnik Conference in Cognitive Science on Communication, Pragmatics, and Theory of Mind (DUCOGX). Dubrovnik, Croatia. 2018-05-24 - 2018-05-27.
  • Holler, J. (2018). Multimodal pragmatics: Language and the body in interaction. [keynote]. Talk presented at the 22nd workshop on the Semantics and Pragmatics of Dialogue (SemDial). Aix-en-Provence, France. 2018-11-08 - 2018-11-10.
  • Schubotz, L., Ozyurek, A., & Holler, J. (2018). Age-related differences in multimodal recipient design. Poster presented at the 10th Dubrovnik Conference on Cognitive Science, Dubrovnik, Croatia.
  • Holler, J. (2017). Multimodal language use and comprehension in social interaction: The body is part of the package. [invited talk]. Talk presented at the Content and Reasoning in spoken dialogue and other media (CREDOG) workshop, Université Paris-Diderot. Paris, France. 2017-12-04 - 2017-12-05.
  • Holler, J. (2017). On the pragmatics of face-to-face communication: The role of the body in social cognition and social interaction. [invited talk]. Talk presented at the Centre for Linguistic Theory & Studies in Probability (CLASP), University of Gothenburg. Gothenburg, Sweden. 2017-05-29.
  • Holler, J. (2017). On the pragmatics of face-to-face communication: The role of the body in social cognition and social interaction. [invited talk]. Talk presented at the Institute of Cognitive Neuroscience, University College London. London, UK. 2017-06-12.
  • Holler, J. (2017). The role of the body in rendering conversation a cohesive activity. [invited talk]. Talk presented at the Workshop on ‘Connecting discourse in speech and gesture’, Humanities Lab, Lund University. Lund, Sweden. 2017-03-30 - 2017-03-31.
  • Hömke, P., Holler, J., & Levinson, S. C. (2017). Blinking as addressee feedback in face-to-face conversation. Talk presented at the MPI Proudly Presents series. Nijmegen, The Netherlands. 2017-06-29.
  • Hömke, P., Holler, J., & Levinson, S. C. (2017). Eye blinking as listener feedback in face-to-face communication. Talk presented at the 5th European Symposium on Multimodal Communication (MMSYM). Bielefeld, Germany. 2017-10-16 - 2017-10-17.

  • Holler, J. (2016). On the pragmatics of multi-modal communication: Gesture, speech and gaze in the coordination of mental states and social interaction. [invited talk]. Talk presented at Centre for Research on Social Interactions, University Neuchâtel. Neuchâtel, Switzerland. 2016-05-18.

    Abstract

    Coordination is at the heart of human conversation. In order to interact with one another through talk, we must coordinate at many levels, first and foremost at the level of our mental states, intentions and conversational contributions. In this talk, I will present findings on the pragmatics of multi-modal communication from both production and comprehension studies. In terms of production, I will talk about (1) how co-speech gestures are used in the coordination of meaning allowing interactants to arrive at a shared understanding of the things they talk about, as well as on (2) how gesture and gaze are employed in the coordination of speaking turns in conversation, with special reference to the psycholinguistic and cognitive challenges that turn-taking poses. In terms of comprehension, I will focus on the interplay of ostensive (social gaze) and semantic (gesture) signals in the context of intention perception and language processing. My talk will bring different sets of findings together to argue for richer research paradigms that capture more of the complexities and sociality of face-to-face conversational interaction. Advancing the field of multi-modal communication research in this direction will allow us to more fully understand the psycholinguistic processes that underlie human language use and language comprehension.
  • Holler, J. (2016). The role of the body in coordinating minds and utterances in interaction [invited talk]. Talk presented at the International Workshop on Language Production (IWLP 2016). La Jolla, CA, USA. 2016-07-25 - 2016-07-27.

    Abstract

    Human language has long been considered a unimodal activity, with the body being considered a mere vehicle for expressing acoustic linguistic meaning. But theories of language evolution point towards a close link between vocal and visual communication early on in history, pinpointing gesture as the origin of human language. Some consider this link between gesture and communicative vocalisations as having been temporary, with conventionalized linguistic code eventually replacing early bodily signaling. Others argue for this link being permanent, positing that even fully-fledged human language is a multi-modal phenomenon, with visual signals forming integral components of utterances in face-to-face conversation. My research provides evidence for the latter. Based on this research, I will provide insights into some of the factors and principles governing multimodal language use in adult interaction. My talk consists of three parts: First, I will present empirical findings showing that movements we produce with our body are indeed integral to spoken language and closely linked to communicative intentions underlying speaking. Second, I will show that bodily signals, first and foremost manual gestures, play an active role in the coordination of meaning during face-to-face interaction, including fundamental processes like the grounding of referential utterances. Third, I will present recent findings on the role of bodily communicative acts in the psycholinguistically challenging context of turn-taking during conversation. Together, the data I present form the basis of a framework aiming to capture multi-modal language use and processing situated in face-to-face interaction, the environment in which language first emerged, is acquired and used most.
  • Holler, J., & Kendrick, K. H. (2016). Turn-timing and the body: Gesture speeds up conversation. Talk presented at the 7th Conference of the International Society for Gesture Studies (ISGS7). Paris, France. 2016-07-18 - 2016-07-22.

    Abstract

    Conversation is the core niche of human multi-modal language use and it is characterized by a system of taking turns. This organization poses a particular psycholinguistic challenge for its participants: given that the gap between two speaking turns averages around just 200 ms (Stivers et al., 2009) while the production of a single-word utterance alone takes a minimum of 600 ms (Indefrey & Levelt, 2004), language production and comprehension must largely run in parallel; while listening to an on-going turn, a next speaker has to predict the upcoming content and end of that turn to start preparing their own and launch it on time (Levinson, 2013). Recently, research has begun to investigate the cognitive processes underpinning turn-taking (see Holler et al., 2015 for an overview), but this research has focused on the spoken modality. The present study investigates the role co-speech gestures may play in this process. We analysed a corpus of 7 casual face-to-face conversations between English speakers for all question-response sequences (N=281), the gestures that accompanied the identified set of questions, and the timing of these gestures with respect to the speaking turns they accompanied. Moreover, we measured the length of all inter-turn gaps in our set. Our main research question was whether the length of the gap between turns varied systematically as a consequence of questions being accompanied by gesture. Our results revealed that this is indeed the case: Questions with a gestural component were responded to significantly faster than questions without a gestural component. This finding holds when we consider head and hand gestures separately, when we control for points of possible completion in the verbal utterance prior to turn end, and when we control for complexity associated with question type. Furthermore, our findings revealed that within the group of questions accompanied by gestures, those questions whose gestures retracted prior to turn end were responded to faster than questions whose gestures retracted following turn end. This study provides evidence that gestures accompanying spoken questions in conversation facilitate the coordination of turns. While experimental studies have demonstrated beneficial effects of gestures on language processing, this is the first evidence that gestures may benefit processing even in the rich, cognitively challenging context of conversational interaction. That is, gestures appear to play an important psycholinguistic function during immersed, in situ language processing. Experimental work is currently exploring at which level (semantic, pragmatic, perceptual) the facilitative effects we found are operating. The findings not only suggest psycholinguistic processing benefits but also expand on previous turn-taking models that restrict the function of gesture to turn-yielding/-keeping cues (Duncan, 1972) as well as on turn-taking models focusing primarily on the verbal modality (Sacks et al., 1974).
  • Hömke, P., Holler, J., & Levinson, S. C. (2016). Blinking as addressee feedback in face-to-face conversation. Talk presented at the 7th Conference of the International Society for Gesture Studies (ISGS7). Paris, France. 2016-07-22 - 2016-07-24.
  • Poliakoff, E., Humphries, S., Crawford, T., & Holler, J. (2016). Varying the degree of motion in actions influences gestural action depictions in Parkinson’s disease. Talk presented at the British Neuropsychological Society Autumn Meeting. London, UK. 2016-10-26 - 2016-10-27.

    Abstract

    In communication, speech is often accompanied by co-speech gestures, which embody a link between language and action. Language impairments in Parkinson’s disease (PD) are particularly pronounced for action-related words in comparison to nouns. People with PD produce fewer gestures from a first-person perspective when they describe others’ actions (Humphries et al., 2016), which may reflect a difficulty in simulation. We extended this to investigate the gestural depiction of other types of action information such as “manner” (how an action is performed) and “path” (the trajectory of a moving figure in space). We also explored whether the level of motion required to perform an action influences the way that people with PD use gestures to depict those actions. Thirty-seven people with PD and 35 age-matched controls viewed a cartoon which included low motion actions (e.g. hiding, knocking) and high motion actions (e.g. running, climbing), and described it to an addressee. We analysed the co-speech gestures they spontaneously produced while doing so. Overall gesture rate was similar in both groups, but people with PD produced action-gestures at a significantly lower rate than controls in both motion conditions. Also, people with PD produced significantly fewer manner and first-person action gestures than controls in the high motion condition (but not the low motion condition). Our findings suggest that motor impairments in PD contribute to the way in which actions, especially high motion actions, are depicted gesturally. Thus, people with Parkinson’s may have particular difficulty cognitively representing high motion actions.
  • Schubotz, L., Drijvers, L., Holler, J., & Ozyurek, A. (2016). The cocktail party effect revisited in older and younger adults: When do iconic co-speech gestures help?. Poster presented at the 8th Speech in Noise Workshop (SpiN 2016), Groningen, The Netherlands.
  • Holler, J. (2015). Gesture, gaze and the body in the coordination of turns in conversation. [invited talk]. Contribution to the symposium ‘How cognition supports social interaction: From joint action to dialogue’. Talk presented at the 19th Conference of the European Society for Cognitive Psychology (ESCoP). Paphos, Cyprus. 2015-09-17 - 2015-09-20.

    Abstract

    Human language has long been considered a unimodal activity, with the body being considered a mere vehicle for expressing acoustic linguistic meaning. But theories of language evolution point towards a close link between vocal and visual communication early on in history, pinpointing gesture as the origin of human language. Some consider this link between gesture and communicative vocalisations as having been temporary, with conventionalized linguistic code eventually replacing early bodily signaling. Others argue for this link being permanent, positing that even fully-fledged human language is a multi-modal phenomenon, with visual signals forming integral components of utterances in face-to-face conversation. My research provides evidence for the latter. Based on this research, I will provide insights into some of the factors and principles governing multimodal language use in adult interaction. My talk consists of three parts: First, I will present empirical findings showing that movements we produce with our body are indeed integral to spoken language and closely linked to communicative intentions underlying speaking. Second, I will show that bodily signals, first and foremost manual gestures, play an active role in the coordination of meaning during face-to-face interaction, including fundamental processes like the grounding of referential utterances. Third, I will present recent findings on the role of bodily communicative acts in the psycholinguistically challenging context of turn-taking during conversation. Together, the data I present form the basis of a framework aiming to capture multi-modal language use and processing situated in face-to-face interaction, the environment in which language first emerged, is acquired and used most.
  • Holler, J., & Kendrick, K. H. (2015). Gesture, gaze, and the body in the organisation of turn-taking for conversation. Talk presented at the 19th Meeting of the European Society for Cognitive Psychology (ESCoP 2015). Paphos, Cyprus. 2015-09-17 - 2015-09-20.

    Abstract

    The primordial site of conversation is face-to-face social interaction where participants make use of visual modalities, as well as talk, in the coordination of collaborative action. This most basic observation leads to a fundamental question: what is the place of multimodal resources such as these in the organisation of turn-taking for conversation? To answer this question, we collected a corpus of both dyadic and triadic face-to-face interactions between adults, with the aim to build on existing observations of the use of visual bodily modalities in conversation (e.g., Duncan, 1972; Goodwin, 1981; Kendon, 1967; Lerner 2003; Mondada 2007; Oloff, 2013; Rossano, 2012; Sacks & Schegloff, 2002; Schegloff, 1998). The corpus retains the spontaneity and naturalness of everyday talk as much as possible while combining it with state-of-the-art technology to allow for exact, detailed analyses of verbal and visual conversational behaviours. Each participant (1) was filmed by three high definition video cameras (providing a frontal plus two lateral views) allowing for fine-grained, frame-by-frame analyses of bodily conduct, as well as the precise measurement of how individual bodily behaviours are timed with respect to each other, and with respect to speech; (2) wore a head-mounted microphone providing high quality recordings of the audio signal suitable for determining on- and off-sets of speaking turns, as well as inter-turn gaps, with high precision, (3) wore head-mounted eye-tracking glasses to monitor eye movements and fixations overlaid onto a video recording of the visual scene the participant was viewing at any given moment (including the other [two] participant[s] and the surroundings in which the conversation took place). The HD video recordings of body behaviour, the eye-tracking video recordings, and the audio recordings from all 2/3 participants engaged in each conversation were then integrated within a single software application (ELAN) for synchronised playback and analysis. The analyses focus on the use and interplay of visual bodily resources, including eye gaze, co-speech gestures, and body posture, during conversational coordination, as well as on how these signals interweave with participants’ turns at talk. The results provide insight into the process of turn projection as evidenced by participants’ gaze behaviour with a focus on the role different bodily cues play in this context, and into how concurrent visual and verbal resources are involved in turn construction and turn allocation. This project will add to our understanding of core issues in the field of CA, such as by elucidating the role of multi-modality and number of participants engaged in talk-in-interaction (Schegloff, 2009). References Duncan, S. (1972). Some signals and rules for taking speaking turns in conversations. Journal of Personality and Social Psychology, 23, 283-92. Goodwin, C. (1981). Conversational organization: Interaction between speakers and hearers. New York: Academic Press. Kendon, A. (1967). Some functions of gaze-direction in social interaction. Acta Psychologia, 26, 22-63. Lerner, G. H. (2003). Selecting next speaker: The context-sensitive operation of a context-free organization. Language in Society, 32(02), 177–201. Mondada, L. (2007). Multimodal resources for turn-taking: Pointing and the emergence of possible next speakers. Discourse Studies, 9, 195-226. Oloff, F. (2013). Embodied withdrawal after overlap resolution. Journal of Pragmatics, 46, 139-156. Rossano, F. (2012). 
Gaze behavior in face-to-face interaction. PhD Thesis, Radboud University Nijmegen, Nijmegen. Sacks, H., & Schegloff, E. (2002). Home position. Gesture, 2, 133-146. Schegloff, E. (1998). Body torque. Social Research, 65, 535-596. Schegloff, E. (2009). One perspective on Conversation Analysis: Comparative perspectives. In J. Sidnell (ed.), Conversation Analysis: Comparative perspectives, pp. 357-406. Cambridge: Cambridge University Press.
  • Holler, J., & Kendrick, K. H. (2015). Gesture, gaze, and the body in the organisation of turn-taking for conversation. Poster presented at the 14th International Pragmatics Conference, Antwerp, Belgium.

    Abstract

    The primordial site of conversation is face-to-face social interaction where participants make use of visual modalities, as well as talk, in the coordination of collaborative action. This most basic observation leads to a fundamental question: what is the place of multimodal resources such as these in the organisation of turn-taking for conversation? To answer this question, we collected a corpus of both dyadic and triadic face-to-face interactions between adults, with the aim to build on existing observations of the use of visual bodily modalities in conversation (e.g., Duncan, 1972; Goodwin, 1981; Kendon, 1967; Lerner 2003; Mondada 2007; Oloff, 2013; Rossano, 2012; Sacks & Schegloff, 2002; Schegloff, 1998). The corpus retains the spontaneity and naturalness of everyday talk as much as possible while combining it with state-of-the-art technology to allow for exact, detailed analyses of verbal and visual conversational behaviours. Each participant (1) was filmed by three high definition video cameras (providing a frontal plus two lateral views) allowing for fine-grained, frame-by-frame analyses of bodily conduct, as well as the precise measurement of how individual bodily behaviours are timed with respect to each other, and with respect to speech; (2) wore a head-mounted microphone providing high quality recordings of the audio signal suitable for determining on- and off-sets of speaking turns, as well as inter-turn gaps, with high precision, (3) wore head-mounted eye-tracking glasses to monitor eye movements and fixations overlaid onto a video recording of the visual scene the participant was viewing at any given moment (including the other [two] participant[s] and the surroundings in which the conversation took place). The HD video recordings of body behaviour, the eye-tracking video recordings, and the audio recordings from all 2/3 participants engaged in each conversation were then integrated within a single software application (ELAN) for synchronised playback and analysis. The analyses focus on the use and interplay of visual bodily resources, including eye gaze, co-speech gestures, and body posture, during conversational coordination, as well as on how these signals interweave with participants’ turns at talk. The results provide insight into the process of turn projection as evidenced by participants’ gaze behaviour with a focus on the role different bodily cues play in this context, and into how concurrent visual and verbal resources are involved in turn construction and turn allocation. This project will add to our understanding of core issues in the field of CA, such as by elucidating the role of multi-modality and number of participants engaged in talk-in-interaction (Schegloff, 2009). References Duncan, S. (1972). Some signals and rules for taking speaking turns in conversations. Journal of Personality and Social Psychology, 23, 283-92. Goodwin, C. (1981). Conversational organization: Interaction between speakers and hearers. New York: Academic Press. Kendon, A. (1967). Some functions of gaze-direction in social interaction. Acta Psychologia, 26, 22-63. Lerner, G. H. (2003). Selecting next speaker: The context-sensitive operation of a context-free organization. Language in Society, 32(02), 177–201. Mondada, L. (2007). Multimodal resources for turn-taking: Pointing and the emergence of possible next speakers. Discourse Studies, 9, 195-226. Oloff, F. (2013). Embodied withdrawal after overlap resolution. Journal of Pragmatics, 46, 139-156. Rossano, F. (2012). 
Gaze behavior in face-to-face interaction. PhD Thesis, Radboud University Nijmegen, Nijmegen. Sacks, H., & Schegloff, E. (2002). Home position. Gesture, 2, 133-146. Schegloff, E. (1998). Body torque. Social Research, 65, 535-596. Schegloff, E. (2009). One perspective on Conversation Analysis: Comparative perspectives. In J. Sidnell (ed.), Conversation Analysis: Comparative perspectives, pp. 357-406. Cambridge: Cambridge University Press.
  • Holler, J. (2015). On the pragmatics of multi-modal face-to-face communication: Gesture, speech and gaze in the coordination of mental states and social interaction. Talk presented at the 4th GESPIN - Gesture & Speech in Interaction Conference. Nantes, France. 2015-09-02 - 2015-09-04.

    Abstract

    Coordination is at the heart of human conversation. In order to interact with one another through talk, we must coordinate at many levels, first and foremost at the level of our mental states, intentions and conversational contributions. In this talk, I will present findings on the pragmatics of multi-modal communication from both production and comprehension studies. In terms of production, I will throw light on (1) how co-speech gestures are used in the coordination of meaning to allow interactants to arrive at a shared understanding of the things we talk about, as well as on (2) how gesture and gaze are employed in the coordination of speaking turns in spontaneous conversation, with special reference to the psycholinguistic and cognitive challenges that turn-taking poses. In terms of comprehension, I will focus on communicative intentions and the interplay of ostensive and semantic multi-modal signals in triadic communication contexts. My talk will bring these different findings together to make the argument for richer research paradigms that capture more of the complexities and sociality of face-to-face conversational interaction. Advancing the field of multi-modal communication in this way will allow us to more fully understand the psycholinguistic processes that underlie human language use and language comprehension.
  • Holler, J. (2015). Visible communicative acts in the coordination of interaction. [invited talk]. Talk presented at Institute for Language Sciences, Cologne University. Cologne, Germany. 2015-06-11.
  • Hömke, P., Holler, J., & Levinson, S. C. (2015). Blinking as addressee feedback in face-to-face conversation. Talk presented at the 6th Joint Action Meeting. Budapest, Hungary. 2015-07-01 - 2015-07-04.
  • Hömke, P., Holler, J., & Levinson, S. C. (2015). Blinking as addressee feedback in face-to-face dialogue?. Poster presented at the 19th Workshop on the Semantics and Pragmatics of Dialogue (SemDial 2015 / goDIAL), Gothenburg, Sweden.
  • Hömke, P., Holler, J., & Levinson, S. C. (2015). Blinking as addressee feedback in face-to-face dialogue?. Talk presented at the Donders Discussions Conference. Nijmegen, The Netherlands. 2015-11-05.
  • Hömke, P., Holler, J., & Levinson, S. C. (2015). Blinking as addressee feedback in face-to-face dialogue?. Talk presented at the Nijmegen-Tilburg Multi-modality workshop. Tilburg, The Netherlands. 2015-10-22.
  • Humphries, S., Holler, J., Crawford, T., & Poliakoff, E. (2015). Investigating gesture viewpoint during action description in Parkinson’s Disease. Talk presented at the Research into Imagery and Observation Conference. Stirling, Scotland. 2015-05-14 - 2015-05-15.
  • Humphries, S., Holler, J., Crawford, T., & Poliakoff, E. (2015). Investigating gesture viewpoint during action description in Parkinson’s Disease. Talk presented at the School of Psychological Sciences PGR Conference. Manchester, England.
  • Kendrick, K. H., & Holler, J. (2015). Triadic participation in question-answer sequences. Talk presented at Revisiting Participation – Language and Bodies in Interaction workshop. Basel, Switzerland. 2015-06-24 - 2015-06-27.
  • Kendrick, K. H., & Holler, J. (2015). Triadic participation in question-answer sequences. Talk presented at the 14th International Pragmatics Conference. Antwerp, Belgium. 2015-07-26 - 2015-07-31.
  • Schubotz, L., Holler, J., & Ozyurek, A. (2015). Age-related differences in multi-modal audience design: Young, but not old speakers, adapt speech and gestures to their addressee's knowledge. Talk presented at the 4th GESPIN - Gesture & Speech in Interaction Conference. Nantes, France. 2015-09-02 - 2015-09-05.
  • Schubotz, L., Drijvers, L., Holler, J., & Ozyurek, A. (2015). The cocktail party effect revisited in older and younger adults: When do iconic co-speech gestures help?. Poster presented at Donders Sessions 2015, Nijmegen, The Netherlands.
  • Schubotz, L., Drijvers, L., Holler, J., & Ozyurek, A. (2015). The cocktail party effect revisited in older and younger adults: When do iconic co-speech gestures help?. Talk presented at Donders Discussions 2015. Nijmegen, The Netherlands. 2015-11-05.
  • Holler, J., & Kendrick, K. H. (2014). Gaze and the organization of turn-taking in triadic face-to-face interaction. Talk presented at the 6th Conference of the International Society for Gesture Studies (ISGS 6). San Diego, CA, USA. 2014-07-08 - 2014-07-11.

    Abstract

    The primordial site of conversation is face-to-face social interaction where participants make use of visual modalities, as well as talk, in the coordination of collaborative action (Clark, 1996). This observation leads to a fundamental question: what is the place of multimodal resources such as these in the organisation of turn-taking for conversation? To answer this question, we collected a corpus of both dyadic and triadic face-to-face interactions between adult native English speakers, with the aim to build on existing observations of the use of visual bodily modalities in conversation (e.g., Duncan, 1972; Goodwin, 1981; Kendon, 1967; Lerner 2003; Mondada 2007; Oloff, 2013; Rossano, 2012; Sacks & Schegloff, 2002; Schegloff, 1998). The corpus retains much of the spontaneity and naturalness of everyday talk while combining it with state-of-the-art technology to allow for exact, detailed analyses of verbal and visual conversational behaviours. Each participant (1) was filmed by three high definition video cameras (providing a frontal plus two lateral views) allowing for fine-grained, frame-by-frame analyses of bodily conduct, as well as the precise measurement of how individual bodily behaviours are timed with respect to each other, and with respect to speech; (2) wore a head-mounted microphone providing high quality recordings of the audio signal suitable for determining on- and off-sets of speaking turns, as well as inter-turn gaps, with high precision, (3) wore head-mounted eye-tracking glasses to monitor eye movements and fixations overlaid onto a video recording of the visual scene the participant was viewing at any given moment (including the other [two] participant[s] and the surroundings in which the conversation took place). The HD video recordings of body behaviour, the eye-tracking video recordings, and the audio recordings from all 2/3 participants engaged in each conversation were then integrated within a single software application (ELAN) for synchronised playback and analysis. All data have been transcribed and coded for co-speech gestures and gaze fixations on a frame-by-frame basis. The large amount of data obtained from this corpus is currently being analysed both qualitatively and quantitatively. The project aims to shed light on the cognitive puzzle that turn-taking presents us with (Levinson, 2013); interlocutors are confronted with the challenge of comprehending an on-going turn while, at the same time, planning a response and estimating when the current speaker’s talk will end in order to time their contribution as precisely as possible (the average gap between turns is a mere 200 ms). The results from this project provide insight into the process of turn projection as evidenced by participants’ gaze behaviour with a focus on the role different bodily cues play in this context. Our findings so far show that co-speech gestures may play an important role in this process by guiding the projection of upcoming turn boundaries and next actions. In all, this project elucidates the role of multi-modality in the organisation of turns at talk and in the cognitive processes that underlie this organisation.
  • Holler, J., & Kendrick, K. H. (2014). Gesture, gaze, and the body in the organisation of turn-taking for conversation: Insights from a corpus using new technologies. Talk presented at the 4th International Conference on Conversation Analysis (ICCA14). Los Angeles, CA, USA. 2014-06-25 - 2014-06-29.

    Abstract

    The primordial site of conversation is face-to-face social interaction where participants make use of visual modalities, as well as talk, in the coordination of collaborative action. This most basic observation leads to a fundamental question: what is the place of multimodal resources such as these in the organisation of turn-taking for conversation? To answer this question, we collected a corpus of both dyadic and triadic face-to-face interactions between adults, with the aim to build on existing observations of the use of visual bodily modalities in conversation (e.g., Duncan, 1972; Goodwin, 1981; Kendon, 1967; Lerner 2003; Mondada 2007; Oloff, 2013; Rossano, 2012; Sacks & Schegloff, 2002; Schegloff, 1998). The corpus retains the spontaneity and naturalness of everyday talk as much as possible while combining it with state-of-the-art technology to allow for exact, detailed analyses of verbal and visual conversational behaviours. Each participant (1) was filmed by three high definition video cameras (providing a frontal plus two lateral views) allowing for fine-grained, frame-by-frame analyses of bodily conduct, as well as the precise measurement of how individual bodily behaviours are timed with respect to each other, and with respect to speech; (2) wore a head-mounted microphone providing high quality recordings of the audio signal suitable for determining on- and off-sets of speaking turns, as well as inter-turn gaps, with high precision, (3) wore head-mounted eye-tracking glasses to monitor eye movements and fixations overlaid onto a video recording of the visual scene the participant was viewing at any given moment (including the other [two] participant[s] and the surroundings in which the conversation took place). The HD video recordings of body behaviour, the eye-tracking video recordings, and the audio recordings from all 2/3 participants engaged in each conversation were then integrated within a single software application (ELAN) for synchronised playback and analysis. The analyses focus on the use and interplay of visual bodily resources, including eye gaze, co-speech gestures, and body posture, during conversational coordination, as well as on how these signals interweave with participants’ turns at talk. The results provide insight into the process of turn projection as evidenced by participants’ gaze behaviour with a focus on the role different bodily cues play in this context, and into how concurrent visual and verbal resources are involved in turn construction and turn allocation. This project will add to our understanding of core issues in the field of CA, such as by elucidating the role of multi-modality and number of participants engaged in talk-in-interaction (Schegloff, 2009). References Duncan, S. (1972). Some signals and rules for taking speaking turns in conversations. Journal of Personality and Social Psychology, 23, 283-92. Goodwin, C. (1981). Conversational organization: Interaction between speakers and hearers. New York: Academic Press. Kendon, A. (1967). Some functions of gaze-direction in social interaction. Acta Psychologia, 26, 22-63. Lerner, G. H. (2003). Selecting next speaker: The context-sensitive operation of a context-free organization. Language in Society, 32(02), 177–201. Mondada, L. (2007). Multimodal resources for turn-taking: Pointing and the emergence of possible next speakers. Discourse Studies, 9, 195-226. Oloff, F. (2013). Embodied withdrawal after overlap resolution. Journal of Pragmatics, 46, 139-156. Rossano, F. (2012). 
Gaze behavior in face-to-face interaction. PhD Thesis, Radboud University Nijmegen, Nijmegen. Sacks, H., & Schegloff, E. (2002). Home position. Gesture, 2, 133-146. Schegloff, E. (1998). Body torque. Social Research, 65, 535-596. Schegloff, E. (2009). One perspective on Conversation Analysis: Comparative perspectives. In J. Sidnell (ed.), Conversation Analysis: Comparative perspectives, pp. 357-406. Cambridge: Cambridge University Press.
  • Holler, J. (2014). How communicative intent influences adults’ co-speech gestures. Talk presented at the 4th Nijmegen Gesture Centre Workshop: Communicative intention in gesture and action. Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands. 2014-06-04 - 2014-06-05.
  • Holler, J. (2014). Social psycholinguistics: multi-modal language use and language comprehension in situ. Talk presented at the Multimodality in Interaction & Discourse workshop. University of Leuven, Belgium.
  • Humphries, S., Holler, J., Crawford, T., & Poliakoff, E. (2014). Representing actions in co-speech gestures in Parkinson's Disease. Poster presented at the 6th International Society for Gesture Studies Congress, San Diego, California, USA.

    Abstract

    Parkinson’s disease (PD) is a progressive, neurological disorder caused by the loss of dopaminergic cells in the basal ganglia, which are involved in motor control. This leads to the cardinal motor symptoms of PD: tremor, bradykinesia (slowness of movement), rigidity and postural instability. PD also leads to general cognitive impairment (executive function, memory, visuospatial abilities), and language impairments; PD patients perform worse at language tasks such as providing word definitions and naming objects, generating lists of verbs, and naming actions. Thus, there seems to be a particular impairment for action-language. Despite the fact that action and language are both impaired in PD, little research has explored if and how co-speech gestures, which embody a link between these two domains, are affected. The Gesture as Simulated Action (GSA) hypothesis argues that gestures arise from cognitive representations or simulations of actions. It has been argued that people with PD may be less able to cognitively represent, simulate and imagine actions, which could account for their action-language impairment and may also mean that gestures are affected. Recently, it has been shown that while there is not a straightforward reduction in gesture use in PD, patients’ gestures which described actions are less precise/informative than those of controls. However, participants only described two actions, and to a knowing addressee (so the task was not communicative). The present study extended this by asking participants to describe a wide range of actions in an apparently communicative task, and compared viewpoint as well as precision between the two groups. Gesture viewpoint was examined in order to provide a window into the cognitive representations underlying gesture, by demonstrating whether or not the speaker has placed themselves as the agent within the action (character viewpoint), requiring a cognitive simulation of the action. Overall, studying gestures in PD has clinical relevance, and will provide insight into the cognitive basis of gestures in healthy people. Twenty-five PD patients and 25 age-matched controls viewed 10 pictures and 10 videos depicting a range of actions and described them to help an addressee identify the correct stimulus. No difference in the rate of gesture production between the two groups was found. However, the precision of gestures describing actions was found to be significantly lower in the PD group. Furthermore, the proportion of gestures produced from character viewpoint was found to differ between the groups, with PD patients producing significantly fewer C-VPT gestures. This suggests that the cognitive representations underlying the gestures have changed in PD, and that people with PD are less able to imagine themselves as the agent of the action. This supports the GSA hypothesis by demonstrating that gesture production changes when the ability to perform and to cognitively simulate actions is impaired. Our next study will assess the relationships between cognitive factors affected in PD and gesture, and motor imagery ability and gesture. The study will also examine gestures produced by people with PD when describing a wide range of semantic content in various communicative situations.
  • Kelly, S., Healey, M., Ozyurek, A., & Holler, J. (2014). The integration of gestures and actions with speech: Should we welcome the empty-handed to language comprehension?. Talk presented at the 6th Conference of the International Society for Gesture Studies (ISGS 6). San Diego, CA, USA. 2014-07-08 - 2014-07-11.

    Abstract

    Background: Gesture and speech are theorized to form a single integrated system of meaning during language production (McNeill, 1992), and evidence is mounting that this integration applies to language comprehension as well (Kelly, Ozyurek & Maris, 2010). However, it is unknown whether gesture is uniquely integrated with speech or is processed like any other manual action. To explore this issue, we compared the extent to which speech is integrated with hand gestures versus actual actions on objects during comprehension. Method: The present study employed a priming paradigm in two experiments. In Experiment 1, subjects watched multimodal videos that presented auditory (words) and visual (gestures and actions on objects) information. Half the subjects related the audio information to a written prime presented before the video, and the other half related the visual information to the written prime. For half of the multimodal video stimuli, the audio and visual information was congruent, and for the other half, incongruent. The task was to press one button if the written prime was the same as the visual (31 subjects) or audio (31 subjects) information in the target video or another button if different. RT and accuracy were recorded. Results: In Experiment 2, we reversed the priming sequence with a different set of 18 subjects. Now the video became the prime and the written verb followed as the target, but the task was the same with one difference: to indicate whether the written target was related or unrelated to only the audio information (speech) in the preceding video prime. ERPs were recorded to the written targets. In Experiment 1, subjects in both the audio and visual target tasks were less accurate when processing stimuli in which gestures and actions were incongruent versus congruent with speech, F(1, 60) = 22.90, p<.001, but this effect was less prominent for speech-action than for speech-gesture stimuli. However, subjects were more accurate when identifying actions versus gestures, F(1, 60) = 8.03, p = .006. In Experiment 2, there were two early ERP effects. When primed with gesture, incongruent primes produced a larger P1, t (17) = 3.75, p = 0.002, and P2, t (17) = 3.02, p = 0.008, to the target words than the congruent condition in the grand-averaged ERPs (reflecting early perceptual and attentional processes). However, there were no significant differences between congruent and incongruent conditions when primed with action. Discussion: The incongruency effect replicates and extends previous work by Kelly et al. (2010) by showing not only a bi-directional influence of gesture and speech, but also of action and speech. In addition, the results show that while actions are easier to process than gestures (Exp. 1), gestures may be more tightly tied to the processing of accompanying speech (Exps. 1 & 2). These results suggest that even though gestures are perceptually less informative than actions, they may be treated as communicatively more informative in relation to the accompanying speech. In this way, the two types of visual information might have different status in language comprehension.
  • Kelly, S., Healey, M., Ozyurek, A., & Holler, J. (2014). The integration of gestures and actions with speech: Should we welcome the empty-handed to language comprehension?. Talk presented at the 6th International Society for Gesture Studies Congress. San Diego, California, USA. 2014-07-08 - 2014-07-11.

    Abstract

    Background: Gesture and speech are theorized to form a single integrated system of meaning during language production (McNeill, 1992), and evidence is mounting that this integration applies to language comprehension as well (Kelly, Ozyurek & Maris, 2010). However, it is unknown whether gesture is uniquely integrated with speech or is processed like any other manual action. To explore this issue, we compared the extent to which speech is integrated with hand gestures versus actual actions on objects during comprehension. Method: The present study employed a priming paradigm in two experiments. In Experiment 1, subjects watched multimodal videos that presented auditory (words) and visual (gestures and actions on objects) information. Half the subjects related the audio information to a written prime presented before the video, and the other half related the visual information to the written prime. For half of the multimodal video stimuli, the audio and visual information was congruent, and for the other half, incongruent. The task was to press one button if the written prime was the same as the visual (31 subjects) or audio (31 subjects) information in the target video or another button if different. RT and accuracy were recorded. Results: In Experiment 2, we reversed the priming sequence with a different set of 18 subjects. Now the video became the prime and the written verb followed as the target, but the task was the same with one difference: to indicate whether the written target was related or unrelated to only the audio information (speech) in the preceding video prime. ERPs were recorded to the written targets. In Experiment 1, subjects in both the audio and visual target tasks were less accurate when processing stimuli in which gestures and actions were incongruent versus congruent with speech, F(1, 60) = 22.90, p < .001, but this effect was less prominent for speech-action than for speech-gesture stimuli. However, subjects were more accurate when identifying actions versus gestures, F(1, 60) = 8.03, p = .006. In Experiment 2, there were two early ERP effects. When primed with gesture, incongruent primes produced a larger P1, t (17) = 3.75, p = 0.002, and P2, t (17) = 3.02, p = 0.008, to the target words than the congruent condition in the grand-averaged ERPs (reflecting early perceptual and attentional processes). However, there were no significant differences between congruent and incongruent conditions when primed with action. Discussion: The incongruency effect replicates and extends previous work by Kelly et al. (2010) by showing not only a bi-directional influence of gesture and speech, but also of action and speech. In addition, the results show that while actions are easier to process than gestures (Exp. 1), gestures may be more tightly tied to the processing of accompanying speech (Exps. 1 & 2). These results suggest that even though gestures are perceptually less informative than actions, they may be treated as communicatively more informative in relation to the accompanying speech. In this way, the two types of visual information might have different status in language comprehension.
  • Kendrick, K. H., & Holler, J. (2014). Triadic participation in question-answer sequences. Talk presented at the Anéla Study Group for Discourse Analysis (AWIA) Symposium. Universiteit Utrecht, The Netherlands. 2014-10-02 - 2014-10-02.
  • Peeters, D., Chu, M., Holler, J., Hagoort, P., & Ozyurek, A. (2014). Behavioral and neurophysiological correlates of communicative intent in the production of pointing gestures. Poster presented at the Annual Meeting of the Society for the Neurobiology of Language [SNL2014], Amsterdam, the Netherlands.
  • Peeters, D., Chu, M., Holler, J., Hagoort, P., & Ozyurek, A. (2014). Behavioral and neurophysiological correlates of communicative intent in the production of pointing gestures. Talk presented at the 6th Conference of the International Society for Gesture Studies (ISGS6). San Diego, Cal. 2014-07-08 - 2014-07-11.
  • Rowbotham, S., Holler, J., Wearden, A., & Lloyd, D. (2014). I see how you feel: Speakers’ gestures help people to understand their pain. Talk presented at the 6th International Society for Gesture Studies Congress. San Diego, California, USA. 2014-07-08 - 2014-07-11.

    Abstract

    Pain is a frequent feature of medical consultations and must be communicated effectively if health care providers are to understand the experience and provide treatment. However, pain is difficult to verbalise and spoken pain descriptions are subject to misinterpretation (Schott, 2004). It is well known that when we speak we also produce co-speech hand gestures, and during pain communication these gestures have been found to depict aspects of pain that are not contained in the accompanying speech, such as location, sensation and cause of pain (Rowbotham et al., 2012, 2013a, 2013b). Although recipients are known to be able to comprehend the information contained in gestures produced during descriptions of concrete entities and events (see Hostetter, 2011 for a review), it is not yet known whether this is the case for subjective experiences such as pain. We investigated whether untrained observers are able to glean any additional information from the gestures that accompany spoken pain descriptions, and whether this can be enhanced through a short instruction session on co-speech gestures. Participants (n = 30 per condition) viewed 20 short video clips (mean length = 7.5 seconds) of pain descriptions under one of three presentation conditions: 1) Speech Only, 2) Speech and Gesture, or 3) Speech and Gesture plus Instruction (a short presentation, prior to the video clips, explaining what co-speech gestures are and the types of pain information they can depict). Following each clip, participants provided a free-text description of the pain and a “traceable additions” analysis (Kelly et al., 2002) was used to assess whether participants’ descriptions contained any information that was uniquely contained in gestures in the original clips. Participants who had received instruction in co-speech gestures (Speech and Gesture plus Instruction condition) obtained the most information from gestures, while those who did not have access to gestures (Speech Only condition) obtained the least. There were no differences in the amount of information obtained from speech across the conditions, suggesting that neither having access to gestures nor being instructed to attend to these has any detrimental effect on pain understanding. These results suggest that attending to the speaker’s gestures during pain communication can enhance the recipient’s understanding of this subjective experience. These findings have important implications for communication in medical settings, suggesting that health care professionals may benefit from training in co-speech gestures in order to improve their understanding of patients’ pain experiences.
  • Schubotz, L., Holler, J., & Ozyurek, A. (2014). The impact of age and mutually shared knowledge on multi-modal utterance design. Poster presented at the 6th International Society for Gesture Studies Congress, San Diego, California, USA.

    Abstract

    Previous work suggests that the communicative behavior of older adults differs systematically from that of younger adults. For instance, older adults produce significantly fewer representational gestures than younger adults in monologue description tasks (Cohen & Borsoi, 1996; Feyereisen & Havard, 1999). In addition, older adults seem to have more difficulty than younger adults in establishing common ground (i.e. knowledge, assumptions, and beliefs mutually shared between a speaker and an addressee, Clark, 1996) in speech in a referential communication paradigm (Horton & Spieler, 2007). Here we investigated whether older adults take such common ground into account when designing multi-modal utterances for an addressee. The present experiment compared the speech and co-speech gesture production of two age groups (young: 20-30 years, old: 65-75 years) in an interactive setting, manipulating the amount of common ground between participants. Thirty-two pairs of naïve participants (16 young, 16 old, same-age-pairs only) took part in the experiment. One of the participants (the speaker) narrated short cartoon stories to the other participant (the addressee) (task 1) and gave instructions on how to assemble a 3D model from wooden building blocks (task 2). In both tasks, we varied the amount of information mutually shared between the two participants (common ground manipulation). Additionally, we also obtained a range of cognitive measures from the speaker: verbal working memory (operation span task), visual working memory (visual patterns test and Corsi block test), processing speed and executive functioning (trail making test parts A + B) and a semantic fluency measure (animal naming task). Preliminary data analysis of about half the final sample suggests that overall, speakers use fewer words per narration/instruction when there is shared knowledge with the addressee, in line with previous findings (e.g. Clark & Wilkes-Gibbs, 1986). This effect is larger for young than for old adults, potentially indicating that older adults have more difficulties taking common ground into account when formulating utterances. Further, representational co-speech gestures were produced at the same rate by both age groups regardless of common ground condition in the narration task (in line with Campisi & Özyürek, 2013). In the building block task, however, the trend for the young adults is to gesture at a higher rate in the common ground condition, suggesting that they rely more on the visual modality here (cf. Holler & Wilkin, 2009). The same trend could not be found for the old adults. Within the next three months, we will extend our analysis a) by taking a wider range of gesture types (interactive gestures, beats) into account and b) by looking at qualitative features of speech (information content) and co-speech gestures (size, shape, timing). Finally, we will correlate the resulting data with the data from the cognitive tests. This study will contribute to a better understanding of the communicative strategies of a growing aging population as well as to the body of research on co-speech gesture use in addressee design. It also addresses the relationship between cognitive abilities on the one hand and co-speech gesture production on the other hand, potentially informing existing models of co-speech gesture production.
  • Tutton, M., & Holler, J. (2014). Gesturing when common ground exists: Is gesture rate determined by cognitive load or communicative context?. Talk presented at the 5th UK Cognitive Linguistics Conference. Lancaster, UK. 2014-07-29 - 2014-07-31.

    Abstract

    Common ground (CG), i.e. the knowledge, beliefs and assumptions that interlocutors mutually share in interaction, is fundamental to successful communication (Clark, 1996). An increasing number of studies have shown that speakers use co-speech gestures at the same rate (or even higher) when they share CG as opposed to when they do not (e.g. Campisi & Ozyurek, 2013; Holler & Wilkin, 2009; Holler, Tutton & Wilkin, 2011). There are two alternative explanations for this finding. On the one hand, it has been argued that mentally representing our addressee’s knowledge can require considerable cognitive effort (Pickering & Garrod, 2004). Hence, gesture rate may be high in CG contexts because the cognitive effort involved in mentally representing CG is considerable. In contrast, this high gesture rate may be due to the fact that gestures play an important communicative role, even when conveying information that is already mutually shared (Holler & Wilkin, 2009). The present study tested these two hypotheses by combining the manipulation of CG with a manipulation of communicative context. We used a 2(CG) x 3(communication context) between-participants design (18 participants per condition, N=108). All participants watched a short film and narrated it to their addressee. Addressees had either seen parts of the film together with the speaker (CG) or not (no-CG). In addition, we manipulated communicative context by asking speakers to narrate their story either face-to-face, via an occluding screen, or into a tape-recorder, a manipulation that has been shown to affect gesture rate in no-CG contexts (Bavelas et al., 2008). Our results revealed a significant main effect of communicative context, with gesture rate being highest in the face-to-face condition, followed by the screen condition, and lowest in the tape-recorder condition. Importantly, we did not find a main effect of common ground on gesture rate, and no interaction between our two factors. This finding supports the hypothesis that gestures representing CG information are communicatively intended as opposed to being triggered by an increased cognitive load.
  • Tutton, M., & Holler, J. (2014). Visualising common ground: for communication or cognition?. Talk presented at the 6th Conference of the International Society for Gesture Studies (ISGS 6). San Diego, CA, USA. 2014-07-08 - 2014-07-11.

    Abstract

    Common ground (CG), i.e. the knowledge, beliefs and assumptions that interlocutors mutually share in interaction, is fundamental to successful communication (Clark, 1996). In contrast to studies that have found gestural ellipsis when a speaker shares CG with an interlocutor, an increasing number of studies have shown that speakers use co-speech gestures at the same rate (or even higher) when they share CG as opposed to when they do not (e.g. Campisi & Ozyurek, 2013; Holler & Wilkin, 2009; Holler, Tutton & Wilkin, 2011). There are two alternative explanations for this finding. On the one hand, it has been argued that mentally representing our addressee’s knowledge can require considerable cognitive effort (Pickering & Garrod, 2004). In combination with evidence that gesturing helps to reduce cognitive load in cognitively effortful tasks (e.g. Goldin-Meadow, 1999), one hypothesis is that gesture rate is high in CG contexts because the cognitive effort involved in mentally representing CG is high. This contrasts markedly with the hypothesis that gesture rate remains high when CG exists because gestures play an important communicative role, even when conveying information that is already mutually shared (Holler & Wilkin, 2009). The present study tested these two hypotheses by combining the manipulation of CG with a manipulation of communicative context. We used a 2(CG) x 3(communication context) between-participants design (18 participants per condition, N=108). All participants watched a short film and narrated it to their addressee. Addressees had either seen parts of the film together with the speaker (CG) or not (no-CG). In addition, we manipulated communicative context by asking speakers to narrate their story either face-to-face, via an occluding screen, or into a tape-recorder, a manipulation that has been shown to affect gesture rate in no-CG contexts (Bavelas et al., 2008). If gestures produced in CG contexts are triggered by the cognitive effort of having to mentally represent CG, then social manipulations of this kind should not influence gesture rate. However, if gestures conveying information already in CG are communicatively intended, then we would expect gesture rate to differ across the three conditions. Our results revealed a significant main effect of communicative context, with gesture rate being highest in the face-to-face condition, followed by the screen condition, and lowest in the tape-recorder condition. Importantly, we found no main effect of common ground on gesture rate and no interaction between the two factors. These results provide several insights. Firstly, they add to the growing body of evidence for maintained or high gesture rates in some common ground contexts. Secondly, they replicate effects of visual access and dialogue on gesture rate found in earlier studies manipulating social interaction. Thirdly, and most importantly, this social interaction effect affected gesture rates in the common ground and no-common ground conditions equally. This finding is compatible with the account that gestures representing CG information are communicatively intended, but not with a cognitive effort-based explanation.
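    As an illustration of the reported design, the sketch below shows how a 2 (common ground) x 3 (communicative context) between-participants ANOVA on gesture rate could be run. It is not the authors' analysis script: the file name, the column names, and the use of statsmodels are assumptions made purely for illustration.

```python
# Illustrative sketch only; file and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Hypothetical file: one row per speaker with columns
# common_ground ("CG"/"no-CG"), context ("face-to-face"/"screen"/"tape"),
# and gesture_rate (e.g. gestures per 100 words).
df = pd.read_csv("gesture_rates.csv")

# Full factorial model: two main effects plus their interaction.
model = smf.ols("gesture_rate ~ C(common_ground) * C(context)", data=df).fit()
print(anova_lm(model, typ=2))
```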
  • Wilby, F., Riddell, C., Lloyd, D., Wearden, A., & Holler, J. (2014). Naming with words and gestures in children with Down Syndrome. Poster presented at the 6th International Society for Gesture Studies Congress, San Diego, California, USA.

    Abstract

    Several researchers have shown a close relationship between gesture and language in typically developing children and in children with developmental disorders involving delayed or impaired linguistic abilities. Most of these studies reported that, when children are limited in cognitive, linguistic, metalinguistic, and articulatory skills, they may compensate for some of these limitations with gestures (Capone & McGregor, 2004). Some researchers have also highlighted that children with Down Syndrome (DS) show a preference for nonverbal communication, using more gestures than typically developing (TD) children (Stefanini, Caselli & Volterra, 2011). The present study investigates lexical comprehension and production abilities as well as the frequency and form of gestural production in children with DS. In particular, we are interested in the frequency of gesture production (deictic and representational) and the types of representational gesture produced. Four gesture types were coded: own body, size and shape, body-part-as-object, and imagined-object. Fourteen children with DS (34 months of developmental age, 54 months of chronological age) and a comparison group of 14 typically developing (TD) children (29 months of chronological age) matched for gender and developmental age were assessed through the parent questionnaire MB-CDI and a direct test of lexical comprehension and production (PiNG). Children with DS show a general weakness in lexical comprehension and production. As for the composition of the lexical repertoire, for both groups of children, nouns are understood and produced in higher percentages compared to predicates. Children with DS produce more representational gestures than TD children in the comprehension task, above all with predicates; in contrast, both groups of children exhibit the same number of gestures on the MB-CDI and in the lexical production task. Children with DS produced more unimodal gestural answers than the control group. Children from both groups produced all four gesture types (own body 53%, size and shape 9%, body-part-as-object 25%, and imagined-object 14%). Chi-square analysis revealed no significant difference in the type of gesture produced between the two groups of children for either lexical category. For both groups, the distribution of gesture types reflects an item effect (e.g. 100% of the gestures produced for the pictures lion, kissing and washing were own body, and 100% of those produced for small and long were size and shape). For some items (e.g. comb, talking on the phone), children in both groups produced both types (body-part-as-object and imagined-object) with similar frequency. These data on the types of representational gestures produced by the two groups show a similar conceptual representation in TD children and in children with DS, despite a greater impairment of spoken linguistic abilities in the latter. Future investigations are needed to confirm these preliminary results.
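    The chi-square comparison of gesture-type distributions mentioned above can be sketched as follows. The counts in the example are invented solely to show the shape of the analysis (a 2 groups x 4 gesture types contingency table); they are not the study's data.

```python
# Illustrative sketch only; the counts below are hypothetical.
from scipy.stats import chi2_contingency

# Rows: DS children, TD children.
# Columns: own body, size and shape, body-part-as-object, imagined-object.
counts = [
    [40, 7, 19, 11],  # DS group (hypothetical counts)
    [35, 6, 17, 9],   # TD group (hypothetical counts)
]

chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.3f}")
```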
  • Holler, J. (2013). Gesture use in social context: The influence of common ground on co-speech gesture production in dyadic interaction. Talk presented at the Humanities Lab, Lund University. Lund, Sweden.
  • Holler, J. (2013). Gesture use in social context: The influence of common ground on co-speech gesture production in dyadic interaction. Talk presented at Laboratoire Parole et Langage. Université Aix-Marseille. Aix-en-Provence, France.
  • Holler, J., Schubotz, L., Kelly, S., Schuetze, M., Hagoort, P., & Ozyurek, A. (2013). Here's not looking at you, kid! Unaddressed recipients benefit from co-speech gestures when speech processing suffers. Poster presented at the 35th Annual Meeting of the Cognitive Science Society (CogSci 2013), Berlin, Germany.
  • Holler, J., Schubotz, L., Kelly, S., Hagoort, P., & Ozyurek, A. (2013). Multi-modal language comprehension as a joint activity: The influence of eye gaze on the processing of speech and co-speech gesture in multi-party communication. Talk presented at the 5th Joint Action Meeting. Berlin. 2013-07-26 - 2013-07-29.

    Abstract

    Traditionally, language comprehension has been studied as a solitary and unimodal activity. Here, we investigate language comprehension as a joint activity, i.e., in a dynamic social context involving multiple participants in different roles with different perspectives, while taking into account the multimodal nature of face-to-face communication. We simulated a triadic communication context involving a speaker alternating her gaze between two different recipients, conveying information not only via speech but via gesture as well. Participants thus viewed video-recorded speech-only or speech+gesture utterances referencing objects (e.g., “he likes the laptop”+TYPING-ON-LAPTOP gesture) when being addressed (direct gaze) or unaddressed (averted gaze). The video clips were followed by two object images (laptop, towel). Participants’ task was to choose the object that matched the speaker’s message (i.e., laptop). Unaddressed recipients responded significantly more slowly than addressees for speech-only utterances. However, perceiving the same speech accompanied by gestures sped them up to levels identical to those of addressees. Thus, when speech processing suffers due to being unaddressed, gestures become more prominent and boost comprehension of a speaker’s spoken message. Our findings illuminate how participants process multimodal language and how this process is influenced by eye gaze, an important social cue facilitating coordination in the joint activity of conversation.
  • Holler, J., Kelly, S., Hagoort, P., Schubotz, L., & Ozyurek, A. (2013). Speakers' social eye gaze modulates addressed and unaddressed recipients' comprehension of gesture and speech in multi-party communication. Talk presented at the 5th Biennial Conference of Experimental Pragmatics (XPRAG 2013). Utrecht, The Netherlands. 2013-09-04 - 2013-09-06.
  • Peeters, D., Chu, M., Holler, J., Ozyurek, A., & Hagoort, P. (2013). Getting to the point: The influence of communicative intent on the form of pointing gestures. Talk presented at the 35th Annual Meeting of the Cognitive Science Society (CogSci 2013). Berlin, Germany. 2013-08-01 - 2013-08-03.
  • Peeters, D., Chu, M., Holler, J., Ozyurek, A., & Hagoort, P. (2013). The influence of communicative intent on the form of pointing gestures. Poster presented at the Fifth Joint Action Meeting (JAM5), Berlin, Germany.
  • Tutton, M., & Holler, J. (2013). How degree of verbal interaction affects the communication of static locative information. Talk presented at the 5th International Conference of the Association Française de Linguistique Cognitive: Empirical Approaches to Multi-modality and to Language Variation (AFLiCo 5). Lille, France. 2013-05-15 - 2013-05-17.
  • Cotroneo, C., Holler, J., & Connell, L. (2012). Gesture and the embodiment of auditory perceptual information. Poster presented at the 5th Embodied and Situated Language Processing Conference (ESLP 2012), Newcastle upon Tyne, UK.
  • Herrera, E., Poliakoff, E., Holler, J., McDonald, K., & Cuetos, F. (2012). Naming dynamic actions in Parkinson's disease. Poster presented at the 16th International Congress of Parkinson's Disease and Movement Disorders, Dublin, Ireland.
  • Holler, J. (2012). Contextualising gesture: Experimental studies of social processes in gesture production and comprehension. Talk presented at the Department of Psychology, University of Sheffield. Sheffield, UK. 2012-04.
  • Holler, J. (2012). Gesture use in social context. Talk presented at the Tilburg Centre for Cognition and Communication, Tilburg University. Tilburg, The Netherlands. 2012-02.
  • Holler, J. (2012). Gesture use in social context: The influence of common ground on gesture use in dyadic interaction. Talk presented at the Cologne-Aachen Gesture Colloquium Series, University of Cologne. Cologne, Germany. 2012-01.
  • Holler, J., Kelly, S., Hagoort, P., & Ozyurek, A. (2012). Overhearing gesture: The influence of eye gaze direction on the comprehension of iconic gestures. Poster presented at the EPS workshop 'What if.. the study of language started from the investigation of signed, rather than spoken language?, London, UK.
  • Holler, J., Kelly, S., Hagoort, P., & Ozyurek, A. (2012). Overhearing gesture: The influence of eye gaze direction on the comprehension of iconic gestures. Poster presented at the Social Cognition, Engagement, and the Second-Person-Perspective Conference, Cologne, Germany.
  • Holler, J., Kelly, S., Hagoort, P., & Ozyurek, A. (2012). The influence of gaze direction on the comprehension of speech and gesture in triadic communication. Talk presented at the 18th Annual Conference on Architectures and Mechanisms for Language Processing (AMLaP 2012). Riva del Garda, Italy. 2012-09-06 - 2012-09-08.

    Abstract

    Human face-to-face communication is a multi-modal activity. Recent research has shown that, during comprehension, recipients integrate information from speech with that contained in co-speech gestures (e.g., Kelly et al., 2010). The current studies take this research one step further by investigating the influence of another modality, namely eye gaze, on speech and gesture comprehension, to advance our understanding of language processing in more situated contexts. In spite of the large body of literature on processing of eye gaze, very few studies have investigated its processing in the context of communication (but see, e.g., Staudte & Crocker, 2011 for an exception). In two studies we simulated a triadic communication context in which a speaker alternated their gaze between our participant and another (alleged) participant. Participants thus viewed speech-only or speech + gesture utterances either in the role of addressee (direct gaze) or in the role of unaddressed recipient (averted gaze). In Study 1, participants (N = 32) viewed video-clips of a speaker producing speech-only (e.g. “she trained the horse”) or speech+gesture utterances conveying complementary information (e.g. “she trained the horse”+WHIPPING gesture). Participants were asked to judge whether a word displayed on screen after each video-clip matched what the speaker said or not. In half of the cases, the word matched a previously uttered word, requiring a “yes” answer. In all other cases, the word matched the meaning of the gesture the actor had performed, thus requiring a ‘no’ answer.
  • Holler, J., Kelly, S., Hagoort, P., & Ozyurek, A. (2012). When gestures catch the eye: The influence of gaze direction on co-speech gesture comprehension in triadic communication. Talk presented at the 34th Annual Meeting of the Cognitive Science Society (CogSci 2012). Sapporo, Japan. 2012-08-01 - 2012-08-04.
  • Holler, J., Kelly, S., Hagoort, P., & Ozyurek, A. (2012). When gestures catch the eye: The influence of gaze direction on co-speech gesture comprehension in triadic communication. Talk presented at the 5th Conference of the International Society for Gesture Studies (ISGS 5). Lund, Sweden. 2012-07-24 - 2012-07-27.
  • Humphries, S., Poliakoff, E., & Holler, J. (2012). Action representation in co-speech gestures in Parkinson's Disease. Poster presented at the Parkinson's UK Research Conference, York, UK.
  • Humphries, S., Poliakoff, E., & Holler, J. (2012). How does Parkinson’s Disease affect the way people use gestures to communicate about actions?. Poster presented at Parkinson’s UK Research Conference, York.

    Abstract

    Objective: To examine how co-speech gestures depicting actions are affected in Parkinson’s disease (PD), and to explore how gestures might be related to measures of verbal fluency and action naming. Background: PD affects not only motor abilities, but also language and communication. Language is more impaired for words relating to motor content; e.g., patients take longer to name actions with a high compared to a low motor content. Co-speech gestures embody a form of action which is tightly linked to language and which represents meaningful information that forms a unified whole together with that contained in speech. However, co-speech gestures have rarely been investigated in PD. Recent data showed that gestural precision was reduced in PD patients when describing actions, suggesting that the mental representations of actions underlying their co-speech gestures have become less specific. We investigated this phenomenon for a wider range of actions than the original study, and also explored the possible relationship between verbal fluency/naming deficits and gestures. Method: Sixteen PD patients and 13 IQ-matched healthy controls were video recorded describing pictures and video clips of actions, such as running and knitting. Participants also completed measures of verbal fluency (generating as many words as possible in one minute for certain phonological and semantic categories) and action naming. Results: Analysis is in progress. We are comparing the rate of co-speech gesture production as well as the precision of action-related co-speech gestures between PD patients and controls. We will also examine the relationship between gestures and scores on tasks of verbal fluency and action naming. Conclusions: Investigating co-speech gestures associated with actions has implications for understanding both communication and action representation in Parkinson’s.
  • Kelly, S., Ozyurek, A., Healey, M., & Holler, J. (2012). The communicative influence of gesture and action during speech comprehension: Gestures have the upper hand. Talk presented at the Acoustics 2012 Hong Kong Conference and Exhibition. Hong Kong. 2012-05-13 - 2012-05-18.
  • Kokal, I., Holler, J., Ozyurek, A., Kelly, S., Toni, I., & Hagoort, P. (2012). Eye'm talking to you: Speakers' gaze direction modulates the integration of speech and iconic gestures in the right MTG. Poster presented at the 4th Annual Neurobiology of Language Conference (NLC 2012), San Sebastian, Spain.
  • Kokal, I., Holler, J., Ozyurek, A., Kelly, S., Toni, I., & Hagoort, P. (2012). Eye'm talking to you: The role of the Middle Temporal Gyrus in the integration of gaze, gesture and speech. Poster presented at the Social Cognition, Engagement, and the Second-Person-Perspective Conference, Cologne, Germany.
  • Rowbotham, S., Wearden, A., Holler, J., & Lloyd, D. (2012). Investigating the association between pain catastrophising and co-speech gesture production during pain communication. Talk presented at the 8th Annual Scientific Meeting of the UK Society for Behavioural Medicine. Manchester, UK. 2012-12-10 - 2012-12-11.
  • Rowbotham, S., Wearden, A., Holler, J., & Lloyd, D. (2012). The relationship between pain catastrophizing and gesture production during pain communication. Poster presented at the British Psychological Society Division of Health Psychology Section Annual Conference, Liverpool, UK.
  • Rowbotham, S., Holler, J., Wearden, A., & Lloyd, D. (2012). The semantic interplay of speech and co-speech gestures in the description of pain sensations. Talk presented at the 5th Conference of the International Society for Gesture Studies (ISGS 5). Lund, Sweden. 2012-07-24 - 2012-07-27.
  • Theakston, A., & Holler, J. (2012). The effect of co-speech gesture on children's comprehension and production of complex syntactic constructions. Talk presented at the 4th UK Cognitive Linguistics Conference. London, UK. 2012-07-10 - 2012-07-12.
  • Tutton, M., & Holler, J. (2012). The influence of verbal interaction on speaker's gestural communication of mutually shared knowledge. Talk presented at the 5th Conference of the International Society for Gesture Studies (ISGS 5). Lund, Sweden. 2012-07-24 - 2012-07-27.
