Dona, L., Özyürek, A., Holler, J., Woensdregt, M., & Raviv, L. (2024). The role of facial expressions signalling confidence or doubt in language emergence. Poster presented at the IMPRS Conference 2024, Nijmegen, the Netherlands.
-
Dona, L., Özyürek, A., Holler, J., Woensdregt, M., & Raviv, L. (2024). Communicating confidence and doubt through the face: Implications for language emergence. Poster presented at the Highlights in the Language Sciences Conference 2024, Nijmegen, The Netherlands.
-
Emmendorfer, A. K., & Holler, J. (2024). The influence of speaker gaze on addressee response planning: evidence from EEG data. Poster presented at the IMPRS Conference 2024, Nijmegen, the Netherlands.
-
Emmendorfer, A. K., & Holler, J. (2024). Addressee facial signals indicate upcoming response: Evidence from an online VR experiment. Poster presented at the Highlights in the Language Sciences Conference 2024, Nijmegen, The Netherlands.
-
Ghaleb, E., Rasenberg, M., Pouw, W., Toni, I., Holler, J., Özyürek, A., & Fernandez, R. (2024). Analysing cross-speaker convergence through the lens of automatically detected shared linguistic constructions. Poster presented at the 46th Annual Meeting of the Cognitive Science Society (CogSci 2024), Rotterdam, The Netherlands.
-
Ghaleb, E., Burenko, I., Rasenberg, M., Pouw, W., Uhrig, P., Wilson, A., Toni, I., Holler, J., Özyürek, A., & Fernández, R. (2024). Temporal alignment and integration of audio-visual cues for co-speech gesture detection. Poster presented at the Highlights in the Language Sciences Conference 2024, Nijmegen, The Netherlands.
-
Mazzini, S., Holler, J., Hagoort, P., & Drijvers, L. (2024). Inter-brain synchrony during (un)successful face-to-face communication. Poster presented at the Highlights in the Language Sciences Conference 2024, Nijmegen, The Netherlands.
-
Ter Bekke, M., Drijvers, L., & Holler, J. (2024). Co-speech hand gestures are used to predict upcoming meaning. Poster presented at the Highlights in the Language Sciences Conference 2024, Nijmegen, The Netherlands.
-
Ter Bekke, M., Drijvers, L., & Holler, J. (2024). Gestures speed up responses by improving predictions of upcoming meaning. Poster presented at the Highlights in the Language Sciences Conference 2024, Nijmegen, The Netherlands.
-
Dockendorff, M., Holler, J., & Knoblich, G. (2023). Saying things with actions — or how instrumental actions can take on a communicative function. Talk presented at the 9th bi-annual Joint Action Meeting (JAM). Budapest, Hungary. 2023-07-10 - 2023-07-12.
-
Emmendorfer, A. K., Banovac, L., Gorter, A., & Holler, J. (2023). Visual signals as response mobilization cues in face-to-face conversation. Talk presented at the 8th Gesture and Speech in Interaction (GESPIN 2023). Nijmegen, The Netherlands. 2023-09-13 - 2023-09-15.
-
Emmendorfer, A. K., & Holler, J. (2023). Addressee gaze direction and response timing signal upcoming response preference: Evidence from behavioral and EEG experiments. Poster presented at the 15th Annual Meeting of the Society for the Neurobiology of Language (SNL 2023), Marseille, France.
-
Emmendorfer, A. K., & Holler, J. (2023). The influence of speaker gaze on addressees’ response planning: Evidence from behavioral and EEG data. Poster presented at the 15th Annual Meeting of the Society for the Neurobiology of Language (SNL 2023), Marseille, France.
-
Holler, J. (2023). Multimodal addressee responses as tools for coordination and adaptation in conversational interaction. Talk presented at the 9th bi-annual Joint Action Meeting (JAM). Budapest, Hungary. 2023-07-10 - 2023-07-12.
-
Holler, J. (2023). Human language processing as a multimodal, situated activity. Talk presented at the 21st International Multisensory Research Forum (IMRF 2023). Brussels, Belgium. 2023-06-23 - 2023-06-30.
-
Mazzini, S., Holler, J., Hagoort, P., & Drijvers, L. (2023). Investigating inter-brain synchrony during (un-)successful face-to-face communication. Poster presented at the 9th bi-annual Joint Action Meeting (JAM), Budapest, Hungary.
-
Mazzini, S., Holler, J., Hagoort, P., & Drijvers, L. (2023). Inter-brain synchrony during (un)successful face-to-face communication. Poster presented at the 15th Annual Meeting of the Society for the Neurobiology of Language (SNL 2023), Marseille, France.
-
Mazzini, S., Holler, J., Hagoort, P., & Drijvers, L. (2023). Studying the association between co-speech gestures, mutual understanding and inter-brain synchrony in face-to-face conversations. Poster presented at the 15th Annual Meeting of the Society for the Neurobiology of Language (SNL 2023), Marseille, France.
-
Mazzini, S., Holler, J., Hagoort, P., & Drijvers, L. (2023). Inter-brain synchrony during (un)successful face-to-face communication. Poster presented at the 19th NVP Winter Conference on Brain and Cognition, Egmond aan Zee, The Netherlands.
Abstract
Human communication requires interlocutors to mutually understand each other. Previous research has suggested inter-brain synchrony as an important feature of social interaction, since it has been observed during joint attention, speech interactions and cooperative tasks. Nonetheless, it is still unknown whether inter-brain synchrony is actually related to successful face-to-face communication. Here, we use dual-EEG to study whether inter-brain synchrony is modulated during episodes of successful and unsuccessful communication in clear and noisy communication settings. Dyads performed a tangram-based referential communication task with and without background noise, while both their EEG and audiovisual behavior were recorded. Other-initiated repairs were annotated in the audiovisual data and were used as indices of unsuccessful and successful communication. More specifically, we compared inter-brain synchrony during episodes of miscommunication (repair initiations) and episodes of mutual understanding (repair solutions and acceptance phases) in the clear and the noise condition. We expect that when communication is successful, inter-brain synchrony will be stronger than when communication is unsuccessful, and we expect that these patterns will be most pronounced in the noise condition. Results are currently being analyzed and will be presented and discussed with respect to the inter-brain neural signatures underlying the process of mutual understanding in face-to-face conversation. -
Ter Bekke, M., Holler, J., & Drijvers, L. (2023). Do listeners use speakers’ iconic hand gestures to predict upcoming words? Talk presented at the 9th bi-annual Joint Action Meeting (JAM). Budapest, Hungary. 2023-07-10 - 2023-07-12.
-
Ter Bekke, M., Drijvers, L., & Holler, J. (2023). Do listeners use speakers’ iconic gestures to predict upcoming words? Poster presented at the 8th Gesture and Speech in Interaction (GESPIN 2023), Nijmegen, The Netherlands.
-
Ter Bekke, M., Drijvers, L., & Holler, J. (2023). Gestures speed up responses to questions. Poster presented at the 8th Gesture and Speech in Interaction (GESPIN 2023), Nijmegen, The Netherlands.
-
Ter Bekke, M., Drijvers, L., & Holler, J. (2023). Do listeners use speakers’ iconic hand gestures to predict upcoming words? Poster presented at the 15th Annual Meeting of the Society for the Neurobiology of Language (SNL 2023), Marseille, France.
-
Trujillo, J. P., & Holler, J. (2023). Investigating the multimodal compositionality and comprehension of intended meanings using virtual agents. Talk presented at the 9th bi-annual Joint Action Meeting (JAM). Budapest, Hungary. 2023-07-10 - 2023-07-12.
-
Trujillo, J. P., Dyer, R. M. K., & Holler, J. (2023). Differences in partner empathy are associated with interpersonal kinetic and prosodic entrainment during conversation. Poster presented at the 9th bi-annual Joint Action Meeting (JAM), Budapest, Hungary.
-
Drijvers, L., & Holler, J. (2022). Spatial orientation influences cognitive processing in conversation. Talk presented at the 18th NVP Winter Conference on Brain and Cognition. Egmond aan Zee, The Netherlands. 2022-04-28 - 2022-04-30.
-
Drijvers, L., & Holler, J. (2022). Face-to-face spatial orientation fine-tunes the brain for neurocognitive processing in conversation. Poster presented at the 14th Annual Meeting of the Society for the Neurobiology of Language (SNL 2022), Philadelphia, PA, USA.
-
Emmendorfer, A. K., Gorter, A., & Holler, J. (2022). Interactive gestures as response mobilizing cues? Evidence from corpus, behavioral, and EEG data. Poster presented at the 14th Annual Meeting of the Society for the Neurobiology of Language (SNL 2022), Philadelphia, PA, USA.
-
Emmendorfer, A. K., Banovac, L., & Holler, J. (2022). Investigating the role of speaker gaze in response mobilization: Evidence from corpus, behavioral, and EEG data. Poster presented at the 14th Annual Meeting of the Society for the Neurobiology of Language (SNL 2022), Philadelphia, PA, USA.
-
Mazzini, S., Holler, J., Hagoort, P., & Drijvers, L. (2022). Intra- and inter-brain synchrony during (un)successful face-to-face communication. Poster presented at the 18th NVP Winter Conference on Brain and Cognition, Egmond aan Zee, The Netherlands.
-
Mazzini, S., Holler, J., Hagoort, P., & Drijvers, L. (2022). Intra- and inter-brain synchrony during (un)successful face-to-face communication. Poster presented at Neurobiology of Language: Key Issues and Ways Forward II, online.
-
Mazzini, S., Holler, J., Hagoort, P., & Drijvers, L. (2022). Intra- and inter-brain synchrony during (un)successful face-to-face communication. Poster presented at the 14th Annual Meeting of the Society for the Neurobiology of Language (SNL 2022), Philadelphia, PA, USA.
-
Nota, N., Trujillo, J. P., & Holler, J. (2022). Facial signals in multimodal communication: The effect of eyebrow movements on social action attribution. Poster presented at the 18th NVP Winter Conference on Brain and Cognition, Egmond aan Zee, The Netherlands.
-
Nota, N., Trujillo, J. P., & Holler, J. (2022). Conversational eyebrow frowns facilitate question identification: An online VR study. Talk presented at the 9th International Society for Gesture Studies conference (ISGS 2022). Chicago, IL, USA. 2022-07-12 - 2022-07-15.
-
Nota, N., Trujillo, J. P., & Holler, J. (2022). Conversational eyebrow frowns facilitate question identification: An online VR study. Poster presented at the IMPRS Conference 2022, Nijmegen, the Netherlands.
-
Nota, N., Trujillo, J. P., & Holler, J. (2022). Specific facial signals associate with subcategories of social actions conveyed through questions. Poster presented at the Royal Society London discussion meeting 'Face2face: Advancing the science of social interaction', online.
-
Nota, N., Trujillo, J. P., & Holler, J. (2022). Specific facial signals associate with subcategories of social actions conveyed through questions. Poster presented at the Donders Poster Sessions 2022, Nijmegen, The Netherlands.
-
Ter Bekke, M., Drijvers, L., & Holler, J. (2022). Hand gestures speed up responses to questions. Poster presented at the 18th NVP Winter Conference on Brain and Cognition, Egmond aan Zee, The Netherlands.
-
Trujillo, J. P., Levinson, S. C., & Holler, J. (2022). Multimodal adaptation is a two-way street: A multiscale investigation of the human communication system's response to visual disruption. Talk presented at the 9th International Society for Gesture Studies conference (ISGS 2022). Chicago, IL, USA. 2022-07-12 - 2022-07-15.
-
Trujillo, J. P., Levinson, S. C., & Holler, J. (2022). Multimodal adaptation is a two-way street: A multiscale investigation of the human communication system's response to visual disruption. Poster presented at the Royal Society London discussion meeting 'Face2face: Advancing the science of social interaction', online.
-
Trujillo, J. P., & Holler, J. (2022). The bodily kinematics of signaling conversational social action. Poster presented at the 9th International Society for Gesture Studies conference (ISGS 2022), Chicago, IL, USA.
-
Nota, N., Trujillo, J. P., & Holler, J. (2021). Facial signals and social actions in multimodal face-to-face interaction. Poster presented at the 4th Experimental Pragmatics in Italy Conference (XPRAG.it 2020(21)), online.
-
Trujillo, J. P., & Holler, J. (2021). Questions and responses in motion: Torso movements provide early signals of what interlocutors do in conversation. Talk presented at the 4th Experimental Pragmatics in Italy Conference (XPRAG.it 2020(21)). online. 2021-07-08 - 2021-07-09.
-
Trujillo, J. P., Levinson, S. C., & Holler, J. (2021). Visual information in computer-mediated interaction matters: Investigating the association between the availability of gesture and turn transition timing in conversation. Talk presented at the 23rd International Conference on Human-Computer Interaction (HCII 2021). online. 2021-07-24 - 2021-07-29.
-
Trujillo, J. P., Levinson, S. C., & Holler, J. (2021). Visual information in computer-mediated interaction matters: Investigating the association between the availability of gesture and turn transition timing in conversation. Talk presented at ESLP 2021 (Embodied & Situated Language Processing). online. 2021-09-20 - 2021-09-29.
-
Ter Bekke, M., Drijvers, L., & Holler, J. (2020). The predictive potential of hand gestures during conversation: An investigation of the timing of gestures in relation to speech. Talk presented at the 7th Gesture and Speech in Interaction (GESPIN 2020). online. 2020-09-07 - 2020-09-09.
-
Holler, J. (2019). Multimodal signalling for coordination in conversational interaction [keynote]. Talk presented at the 7th conference of the Scandinavian Association for Language and Cognition (SALC7). Aarhus, Denmark. 2019-05-22 - 2019-05-24.
-
Holler, J. (2019). Timing in speech, gesture and interaction. [keynote]. Talk presented at the Lorentz Centre workshop: Synchrony and Rhythmic Interaction: From Neurons to Ecology. Leiden, The Netherlands. 2019-07-29 - 2019-08-02.
-
Holler, J. (2019). Visual bodily signals for coordination in conversation. [invited talk] Contribution to the symposium 'Beyond words: Nonliteral and nonverbal aspects of dialogue'. Talk presented at the 9th Annual Meeting of the Society for Text & Discourse (ST&D9). New York, NY, USA. 2019-07-09 - 2019-07-11.
-
Blokpoel, M., Dingemanse, M., Kachergis, G., Bögels, S., Drijvers, L., Eijk, L., Ernestus, M., De Haas, N., Holler, J., Levinson, S. C., Lui, R., Milivojevic, B., Neville, D., Ozyurek, A., Rasenberg, M., Schriefers, H., Trujillo, J. P., Winner, T., Toni, I., & Van Rooij, I. (2018). Ambiguity helps higher-order pragmatic reasoners communicate. Talk presented at the 14th biannual conference of the German Society for Cognitive Science, GK (KOGWIS 2018). Darmstadt, Germany. 2018-09-03 - 2018-09-06.
-
Holler, J. (2018). Coordinating minds and social interaction with the body. [keynote]. Talk presented at the 8th International Society for Gesture Studies conference (ISGS). Cape Town, South Africa. 2018-07-04 - 2018-07-08.
-
Holler, J. (2018). Multimodal language in interaction. [invited talk]. Talk presented at the Department of Translation and Language Sciences, University Pompeu Fabra. Barcelona, Spain. 2018-06-27.
-
Holler, J. (2018). Multimodal pragmatics: Language and the body in interaction. [keynote]. Talk presented at the 22nd workshop on the Semantics and Pragmatics of Dialogue (SemDial). Aix-en-Provence, France. 2018-11-08 - 2018-11-10.
-
Holler, J. (2018). Multimodal pragmatics: Language and the body in interaction. [keynote]. Talk presented at the 10th Dubrovnik Conference in Cognitive Science on Communication, Pragmatics, and Theory of Mind (DUCOGX). Dubrovnik, Croatia. 2018-05-24 - 2018-05-27.
-
Schubotz, L., Ozyurek, A., & Holler, J. (2018). Age-related differences in multimodal recipient design. Poster presented at the 10th Dubrovnik Conference on Cognitive Science, Dubrovnik, Croatia.
-
Holler, J. (2017). Multimodal language use and comprehension in social interaction: The body is part of the package. [invited talk]. Talk presented at the Content and Reasoning in spoken dialogue and other media (CREDOG) workshop, Université Paris-Diderot. Paris, France. 2017-12-04 - 2017-12-05.
-
Holler, J. (2017). On the pragmatics of face-to-face communication: The role of the body in social cognition and social interaction. [invited talk]. Talk presented at the Centre for Linguistic Theory & Studies in Probability (CLASP), University of Gothenburg. Gothenburg, Sweden. 2017-05-29.
-
Holler, J. (2017). On the pragmatics of face-to-face communication: The role of the body in social cognition and social interaction. [invited talk]. Talk presented at the Institute of Cognitive Neuroscience, University College London. London, UK. 2017-06-12.
-
Holler, J. (2017). The role of the body in rendering conversation a cohesive activity. [invited talk]. Talk presented at the Workshop on ‘Connecting discourse in speech and gesture’, Humanities Lab, Lund University. Lund, Sweden. 2017-03-30 - 2017-03-31.
-
Hömke, P., Holler, J., & Levinson, S. C. (2017). Blinking as addressee feedback in face-to-face conversation. Talk presented at the MPI Proudly Presents series. Nijmegen, The Netherlands. 2017-06-29.
-
Hömke, P., Holler, J., & Levinson, S. C. (2017). Eye blinking as listener feedback in face-to-face communication. Talk presented at the 5th European Symposium on Multimodal Communication (MMSYM). Bielefeld, Germany. 2017-10-16 - 2017-10-17.
-
Holler, J. (2016). On the pragmatics of multi-modal communication: Gesture, speech and gaze in the coordination of mental states and social interaction. [invited talk]. Talk presented at Centre for Research on Social Interactions, University Neuchâtel. Neuchâtel, Switzerland. 2016-05-18.
Abstract
Coordination is at the heart of human conversation. In order to interact with one another through talk, we must coordinate at many levels, first and foremost at the level of our mental states, intentions and conversational contributions. In this talk, I will present findings on the pragmatics of multi-modal communication from both production and comprehension studies. In terms of production, I will talk about (1) how co-speech gestures are used in the coordination of meaning allowing interactants to arrive at a shared understanding of the things they talk about, as well as on (2) how gesture and gaze are employed in the coordination of speaking turns in conversation, with special reference to the psycholinguistic and cognitive challenges that turn-taking poses. In terms of comprehension, I will focus on the interplay of ostensive (social gaze) and semantic (gesture) signals in the context of intention perception and language processing. My talk will bring different sets of findings together to argue for richer research paradigms that capture more of the complexities and sociality of face-to-face conversational interaction. Advancing the field of multi-modal communication research in this direction will allow us to more fully understand the psycholinguistic processes that underlie human language use and language comprehension. -
Holler, J. (2016). The role of the body in coordinating minds and utterances in interaction [invited talk]. Talk presented at the International Workshop on Language Production (IWLP 2016). La Jolla, CA, USA. 2016-07-25 - 2016-07-27.
Abstract
Human language has long been considered a unimodal activity, with the body being considered a mere vehicle for expressing acoustic linguistic meaning. But theories of language evolution point towards a close link between vocal and visual communication early on in history, pinpointing gesture as the origin of human language. Some consider this link between gesture and communicative vocalisations as having been temporary, with conventionalized linguistic code eventually replacing early bodily signaling. Others argue for this link being permanent, positing that even fully-fledged human language is a multi-modal phenomenon, with visual signals forming integral components of utterances in face-to-face conversation. My research provides evidence for the latter. Based on this research, I will provide insights into some of the factors and principles governing multi-modal language use in adult interaction. My talk consists of three parts: First, I will present empirical findings showing that movements we produce with our body are indeed integral to spoken language and closely linked to communicative intentions underlying speaking. Second, I will show that bodily signals, first and foremost manual gestures, play an active role in the coordination of meaning during face-to-face interaction, including fundamental processes like the grounding of referential utterances. Third, I will present recent findings on the role of bodily communicative acts in the psycholinguistically challenging context of turn-taking during conversation. Together, the data I present form the basis of a framework aiming to capture multi-modal language use and processing situated in face-to-face interaction, the environment in which language first emerged, is acquired and used most. -
Holler, J., & Kendrick, K. H. (2016). Turn-timing and the body: Gesture speeds up conversation. Talk presented at the 7th Conference of the International Society for Gesture Studies (ISGS7). Paris, France. 2016-07-18 - 2016-07-22.
Abstract
Conversation is the core niche of human multi-modal language use and it is characterized by a system of taking turns. This organization poses a particular psycholinguistic challenge for its participants: considering the gap between two speaking turns averages around just 200 ms (Stivers et al., 2009) but the production of single word utterances takes a minimum of 600 ms alone (Indefrey & Levelt, 2004), language production and comprehension must largely run in parallel; while listening to an on-going turn, a next speaker has to predict the upcoming content and end of that turn to start preparing their own and launch it on time (Levinson, 2013). Recently, research has begun to investigate the cognitive processes underpinning turn-taking (see Holler et al., 2015 for an overview), but this research has focused on the spoken modality. The present study investigates the role co-speech gestures may play in this process. We analysed a corpus of 7 casual face-to-face conversations between English speakers for all question-response sequences (N=281), the gestures that accompanied the identified set of questions, and the timing of these gestures with respect to the speaking turns they accompanied. Moreover, we measured the length of all inter-turn gaps in our set. Our main research question was whether the length of the gap between turns varied systematically as a consequence of questions being accompanied by gesture. Our results revealed that this is indeed the case: Questions with a gestural component were responded to significantly faster than questions without a gestural component. This finding holds when we consider head and hand gestures separately, when we control for points of possible completion in the verbal utterance prior to turn end, and when we control for complexity associated with question type. Furthermore, our findings revealed that within the group of questions accompanied by gestures, those questions whose gestures retracted prior to turn end were responded to faster than questions whose gestures retracted following turn end. This study provides evidence that gestures accompanying spoken questions in conversation facilitate the coordination of turns. While experimental studies have demonstrated beneficial effects of gestures on language processing, this is the first evidence that gestures may benefit processing even in the rich, cognitively challenging context of conversational interaction. That is, gestures appear to play an important psycholinguistic function during immersed, in situ language processing. Experimental work is currently exploring at which level (semantic, pragmatic, perceptual) the facilitative effects we found are operating. The findings not only suggest psycholinguistic processing benefits but also expand on previous turn-taking models that restrict the function of gesture to turn-yielding/-keeping cues (Duncan, 1972) as well as on turn-taking models focusing primarily on the verbal modality (Sacks et al., 1974). -
Hömke, P., Holler, J., & Levinson, S. C. (2016). Blinking as addressee feedback in face-to-face conversation. Talk presented at the 7th Conference of the International Society for Gesture Studies (ISGS7). Paris, France. 2016-07-22 - 2016-07-24.
-
Poliakoff, E., Humphries, S., Crawford, T., & Holler, J. (2016). Varying the degree of motion in actions influences gestural action depictions in Parkinson’s disease. Talk presented at the British Neuropsychological Society Autumn Meeting. London, UK. 2016-10-26 - 2016-10-27.
Abstract
In communication, speech is often accompanied by co-speech gestures, which embody a link between language and action. Language impairments in Parkinson’s disease (PD) are particularly pronounced for action-related words in comparison to nouns. People with PD produce fewer gestures from a first-person perspective when they describe others’ actions (Humphries et al., 2016), which may reflect a difficulty in simulation. We extended this to investigate the gestural depiction of other types of action information such as “manner” (how an action is performed) and “path” (the trajectory of a moving figure in space). We also explored whether the level of motion required to perform an action influences the way that people with PD use gestures to depict those actions. 37 people with PD and 35 age-matched controls viewed a cartoon which included low motion actions (e.g. hiding, knocking) and high motion actions (e.g. running, climbing), and described it to an addressee. We analysed the co-speech gestures they spontaneously produced while doing so. Overall gesture rate was similar in both groups, but people with PD produced action-gestures at a significantly lower rate than controls in both motion conditions. Also, people with PD produced significantly fewer manner and first-person action gestures than controls in the high motion condition (but not the low motion condition). Our findings suggest that motor impairments in PD contribute to the way in which actions, especially high motion actions, are depicted gesturally. Thus, people with Parkinson’s may have particular difficulty cognitively representing high motion actions -
Schubotz, L., Drijvers, L., Holler, J., & Ozyurek, A. (2016). The cocktail party effect revisited in older and younger adults: When do iconic co-speech gestures help? Poster presented at the 8th Speech in Noise Workshop (SpiN 2016), Groningen, The Netherlands.
-
Holler, J. (2015). Gesture, gaze and the body in the coordination of turns in conversation. [invited talk]. Contribution to the symposium ‘How cognition supports social interaction: From joint action to dialogue’. Talk presented at the 19th Conference of the European Society for Cognitive Psychology (ESCoP). Paphos, Cyprus. 2015-09-17 - 2015-09-20.
Abstract
Human language has long been considered a unimodal activity, with the body being considered a mere vehicle for expressing acoustic linguistic meaning. But theories of language evolution point towards a close link between vocal and visual communication early on in history, pinpointing gesture as the origin of human language. Some consider this link between gesture and communicative vocalisations as having been temporary, with conventionalized linguistic code eventually replacing early bodily signaling. Others argue for this link being permanent, positing that even fully-fledged human language is a multi-modal phenomenon, with visual signals forming integral components of utterances in face-to-face conversation. My research provides evidence for the latter. Based on this research, I will provide insights into some of the factors and principles governing multi-modal language use in adult interaction. My talk consists of three parts: First, I will present empirical findings showing that movements we produce with our body are indeed integral to spoken language and closely linked to communicative intentions underlying speaking. Second, I will show that bodily signals, first and foremost manual gestures, play an active role in the coordination of meaning during face-to-face interaction, including fundamental processes like the grounding of referential utterances. Third, I will present recent findings on the role of bodily communicative acts in the psycholinguistically challenging context of turn-taking during conversation. Together, the data I present form the basis of a framework aiming to capture multi-modal language use and processing situated in face-to-face interaction, the environment in which language first emerged, is acquired and used most. -
Holler, J., & Kendrick, K. H. (2015). Gesture, gaze, and the body in the organisation of turn-taking for conversation. Talk presented at the 19th Meeting of the European Society for Cognitive Psychology (ESCoP 2015). Paphos, Cyprus. 2015-09-17 - 2015-09-20.
Abstract
The primordial site of conversation is face-to-face social interaction where participants make use of visual modalities, as well as talk, in the coordination of collaborative action. This most basic observation leads to a fundamental question: what is the place of multimodal resources such as these in the organisation of turn-taking for conversation? To answer this question, we collected a corpus of both dyadic and triadic face-to-face interactions between adults, with the aim to build on existing observations of the use of visual bodily modalities in conversation (e.g., Duncan, 1972; Goodwin, 1981; Kendon, 1967; Lerner 2003; Mondada 2007; Oloff, 2013; Rossano, 2012; Sacks & Schegloff, 2002; Schegloff, 1998).
The corpus retains the spontaneity and naturalness of everyday talk as much as possible while combining it with state-of-the-art technology to allow for exact, detailed analyses of verbal and visual conversational behaviours. Each participant (1) was filmed by three high definition video cameras (providing a frontal plus two lateral views) allowing for fine-grained, frame-by-frame analyses of bodily conduct, as well as the precise measurement of how individual bodily behaviours are timed with respect to each other, and with respect to speech; (2) wore a head-mounted microphone providing high quality recordings of the audio signal suitable for determining on- and off-sets of speaking turns, as well as inter-turn gaps, with high precision, (3) wore head-mounted eye-tracking glasses to monitor eye movements and fixations overlaid onto a video recording of the visual scene the participant was viewing at any given moment (including the other [two] participant[s] and the surroundings in which the conversation took place). The HD video recordings of body behaviour, the eye-tracking video recordings, and the audio recordings from all 2/3 participants engaged in each conversation were then integrated within a single software application (ELAN) for synchronised playback and analysis.
The analyses focus on the use and interplay of visual bodily resources, including eye gaze, co-speech gestures, and body posture, during conversational coordination, as well as on how these signals interweave with participants’ turns at talk. The results provide insight into the process of turn projection as evidenced by participants’ gaze behaviour with a focus on the role different bodily cues play in this context, and into how concurrent visual and verbal resources are involved in turn construction and turn allocation. This project will add to our understanding of core issues in the field of CA, such as by elucidating the role of multi-modality and number of participants engaged in talk-in-interaction (Schegloff, 2009).
References
Duncan, S. (1972). Some signals and rules for taking speaking turns in conversations. Journal of Personality and Social Psychology, 23, 283-92.
Goodwin, C. (1981). Conversational organization: Interaction between speakers and hearers. New York: Academic Press.
Kendon, A. (1967). Some functions of gaze-direction in social interaction. Acta Psychologica, 26, 22-63.
Lerner, G. H. (2003). Selecting next speaker: The context-sensitive operation of a context-free organization. Language in Society, 32(02), 177–201.
Mondada, L. (2007). Multimodal resources for turn-taking: Pointing and the emergence of possible next speakers. Discourse Studies, 9, 195-226.
Oloff, F. (2013). Embodied withdrawal after overlap resolution. Journal of Pragmatics, 46, 139-156.
Rossano, F. (2012). Gaze behavior in face-to-face interaction. PhD Thesis, Radboud University Nijmegen, Nijmegen.
Sacks, H., & Schegloff, E. (2002). Home position. Gesture, 2, 133-146.
Schegloff, E. (1998). Body torque. Social Research, 65, 535-596.
Schegloff, E. (2009). One perspective on Conversation Analysis: Comparative perspectives. In J. Sidnell (ed.), Conversation Analysis: Comparative perspectives, pp. 357-406. Cambridge: Cambridge University Press.
-
Holler, J., & Kendrick, K. H. (2015). Gesture, gaze, and the body in the organisation of turn-taking for conversation. Poster presented at the 14th International Pragmatics Conference, Antwerp, Belgium.
Abstract
The primordial site of conversation is face-to-face social interaction where participants make use of visual modalities, as well as talk, in the coordination of collaborative action. This most basic observation leads to a fundamental question: what is the place of multimodal resources such as these in the organisation of turn-taking for conversation? To answer this question, we collected a corpus of both dyadic and triadic face-to-face interactions between adults, with the aim to build on existing observations of the use of visual bodily modalities in conversation (e.g., Duncan, 1972; Goodwin, 1981; Kendon, 1967; Lerner 2003; Mondada 2007; Oloff, 2013; Rossano, 2012; Sacks & Schegloff, 2002; Schegloff, 1998). The corpus retains the spontaneity and naturalness of everyday talk as much as possible while combining it with state-of-the-art technology to allow for exact, detailed analyses of verbal and visual conversational behaviours. Each participant (1) was filmed by three high definition video cameras (providing a frontal plus two lateral views) allowing for fine-grained, frame-by-frame analyses of bodily conduct, as well as the precise measurement of how individual bodily behaviours are timed with respect to each other, and with respect to speech; (2) wore a head-mounted microphone providing high quality recordings of the audio signal suitable for determining on- and off-sets of speaking turns, as well as inter-turn gaps, with high precision, (3) wore head-mounted eye-tracking glasses to monitor eye movements and fixations overlaid onto a video recording of the visual scene the participant was viewing at any given moment (including the other [two] participant[s] and the surroundings in which the conversation took place). The HD video recordings of body behaviour, the eye-tracking video recordings, and the audio recordings from all 2/3 participants engaged in each conversation were then integrated within a single software application (ELAN) for synchronised playback and analysis. The analyses focus on the use and interplay of visual bodily resources, including eye gaze, co-speech gestures, and body posture, during conversational coordination, as well as on how these signals interweave with participants’ turns at talk. The results provide insight into the process of turn projection as evidenced by participants’ gaze behaviour with a focus on the role different bodily cues play in this context, and into how concurrent visual and verbal resources are involved in turn construction and turn allocation. This project will add to our understanding of core issues in the field of CA, such as by elucidating the role of multi-modality and number of participants engaged in talk-in-interaction (Schegloff, 2009).
References
Duncan, S. (1972). Some signals and rules for taking speaking turns in conversations. Journal of Personality and Social Psychology, 23, 283-92.
Goodwin, C. (1981). Conversational organization: Interaction between speakers and hearers. New York: Academic Press.
Kendon, A. (1967). Some functions of gaze-direction in social interaction. Acta Psychologica, 26, 22-63.
Lerner, G. H. (2003). Selecting next speaker: The context-sensitive operation of a context-free organization. Language in Society, 32(02), 177–201.
Mondada, L. (2007). Multimodal resources for turn-taking: Pointing and the emergence of possible next speakers. Discourse Studies, 9, 195-226.
Oloff, F. (2013). Embodied withdrawal after overlap resolution. Journal of Pragmatics, 46, 139-156.
Rossano, F. (2012). Gaze behavior in face-to-face interaction. PhD Thesis, Radboud University Nijmegen, Nijmegen.
Sacks, H., & Schegloff, E. (2002). Home position. Gesture, 2, 133-146.
Schegloff, E. (1998). Body torque. Social Research, 65, 535-596.
Schegloff, E. (2009). One perspective on Conversation Analysis: Comparative perspectives. In J. Sidnell (ed.), Conversation Analysis: Comparative perspectives, pp. 357-406. Cambridge: Cambridge University Press. -
Holler, J. (2015). On the pragmatics of multi-modal face-to-face communication: Gesture, speech and gaze in the coordination of mental states and social interaction. Talk presented at the 4th GESPIN - Gesture & Speech in Interaction Conference. Nantes, France. 2015-09-02 - 2015-09-04.
Abstract
Coordination is at the heart of human conversation. In order to interact with one another through talk, we must coordinate at many levels, first and foremost at the level of our mental states, intentions and conversational contributions. In this talk, I will present findings on the pragmatics of multi-modal communication from both production and comprehension studies. In terms of production, I will throw light on (1) how co-speech gestures are used in the coordination of meaning to allow interactants to arrive at a shared understanding of the things we talk about, as well as on (2) how gesture and gaze are employed in the coordination of speaking turns in spontaneous conversation, with special reference to the psycholinguistic and cognitive challenges that turn-taking poses. In terms of comprehension, I will focus on communicative intentions and the interplay of ostensive and semantic multi-modal signals in triadic communication contexts. My talk will bring these different findings together to make the argument for richer research paradigms that capture more of the complexities and sociality of face-to-face conversational interaction. Advancing the field of multi-modal communication in this way will allow us to more fully understand the psycholinguistic processes that underlie human language use and language comprehension. -
Holler, J. (2015). Visible communicative acts in the coordination of interaction. [invited talk]. Talk presented at Institute for Language Sciences, Cologne University. Cologne, Germany. 2015-06-11.
-
Hömke, P., Holler, J., & Levinson, S. C. (2015). Blinking as addressee feedback in face-to-face conversation. Talk presented at the 6th Joint Action Meeting. Budapest, Hungary. 2015-07-01 - 2015-07-04.
-
Hömke, P., Holler, J., & Levinson, S. C. (2015). Blinking as addressee feedback in face-to-face dialogue? Poster presented at the 19th Workshop on the Semantics and Pragmatics of Dialogue (SemDial 2015 / goDIAL), Gothenburg, Sweden.
-
Hömke, P., Holler, J., & Levinson, S. C. (2015). Blinking as addressee feedback in face-to-face dialogue? Talk presented at the Nijmegen-Tilburg Multi-modality workshop. Tilburg, The Netherlands. 2015-10-22.
-
Hömke, P., Holler, J., & Levinson, S. C. (2015). Blinking as addressee feedback in face-to-face dialogue? Talk presented at the Donders Discussions Conference. Nijmegen, The Netherlands. 2015-11-05.
-
Humphries, S., Holler, J., Crawford, T., & Poliakoff, E. (2015). Investigating gesture viewpoint during action description in Parkinson’s Disease. Talk presented at the Research into Imagery and Observation Conference. Stirling, Scotland. 2015-05-14 - 2015-05-15.
-
Humphries, S., Holler, J., Crawford, T., & Poliakoff, E. (2015). Investigating gesture viewpoint during action description in Parkinson’s Disease. Talk presented at the School of Psychological Sciences PGR Conference. Manchester, England.
-
Kendrick, K. H., & Holler, J. (2015). Triadic participation in question-answer sequences. Talk presented at Revisiting Participation – Language and Bodies in Interaction workshop. Basel, Switzerland. 2015-06-24 - 2015-06-27.
-
Kendrick, K. H., & Holler, J. (2015). Triadic participation in question-answer sequences. Talk presented at the 14th International Pragmatics Conference. Antwerp, Belgium. 2015-07-26 - 2015-07-31.
-
Schubotz, L., Holler, J., & Ozyurek, A. (2015). Age-related differences in multi-modal audience design: Young, but not old speakers, adapt speech and gestures to their addressee's knowledge. Talk presented at the 4th GESPIN - Gesture & Speech in Interaction Conference. Nantes, France. 2015-09-02 - 2015-09-05.
-
Schubotz, L., Drijvers, L., Holler, J., & Ozyurek, A. (2015). The cocktail party effect revisited in older and younger adults: When do iconic co-speech gestures help? Poster presented at Donders Sessions 2015, Nijmegen, The Netherlands.
-
Schubotz, L., Drijvers, L., Holler, J., & Ozyurek, A. (2015). The cocktail party effect revisited in older and younger adults: When do iconic co-speech gestures help? Talk presented at Donders Discussions 2015. Nijmegen, The Netherlands. 2015-11-05.
-
Holler, J., & Kendrick, K. H. (2014). Gaze and the organization of turn-taking in triadic face-to-face interaction. Talk presented at the 6th Conference of the International Society for Gesture Studies (ISGS 6). San Diego, CA, USA. 2014-07-08 - 2014-07-11.
Abstract
The primordial site of conversation is face-to-face social interaction where participants make use of visual modalities, as well as talk, in the coordination of collaborative action (Clark, 1996). This observation leads to a fundamental question: what is the place of multimodal resources such as these in the organisation of turn-taking for conversation? To answer this question, we collected a corpus of both dyadic and triadic face-to-face interactions between adult native English speakers, with the aim to build on existing observations of the use of visual bodily modalities in conversation (e.g., Duncan, 1972; Goodwin, 1981; Kendon, 1967; Lerner 2003; Mondada 2007; Oloff, 2013; Rossano, 2012; Sacks & Schegloff, 2002; Schegloff, 1998). The corpus retains much of the spontaneity and naturalness of everyday talk while combining it with state-of-the-art technology to allow for exact, detailed analyses of verbal and visual conversational behaviours. Each participant (1) was filmed by three high definition video cameras (providing a frontal plus two lateral views) allowing for fine-grained, frame-by-frame analyses of bodily conduct, as well as the precise measurement of how individual bodily behaviours are timed with respect to each other, and with respect to speech; (2) wore a head-mounted microphone providing high quality recordings of the audio signal suitable for determining on- and off-sets of speaking turns, as well as inter-turn gaps, with high precision, (3) wore head-mounted eye-tracking glasses to monitor eye movements and fixations overlaid onto a video recording of the visual scene the participant was viewing at any given moment (including the other [two] participant[s] and the surroundings in which the conversation took place). The HD video recordings of body behaviour, the eye-tracking video recordings, and the audio recordings from all 2/3 participants engaged in each conversation were then integrated within a single software application (ELAN) for synchronised playback and analysis. All data have been transcribed, coded for co-speech gestures and gaze fixations on a frame-by-frame basis. The large amount of data obtained from this corpus is currently being analysed both qualitatively and quantitatively. The project aims to shed light on the cognitive puzzle that turn-taking presents us with (Levinson, 2013); interlocutors are confronted with the challenge of comprehending an on-going turn while, at the same time, planning a response and estimating when the current speaker’s talk will end in order to time their contribution as precisely as possible (the average gap between turns is a mere 200 ms). The results from this project provide insight into the process of turn projection as evidenced by participants’ gaze behaviour with a focus on the role different bodily cues play in this context. Our findings so far show that co-speech gestures may play an important role in this process by guiding the projection of upcoming turn boundaries and next actions. In all, this project elucidates the role of multi-modality in the organisation of turns at talk and in the cognitive processes that underlie this organisation. -
Holler, J., & Kendrick, K. H. (2014). Gesture, gaze, and the body in the organisation of turn-taking for conversation: Insights from a corpus using new technologies. Talk presented at the 4th International Conference on Conversation Analysis (ICCA14). Los Angeles, CA, USA. 2014-06-25 - 2014-06-29.
Abstract
The primordial site of conversation is face-to-face social interaction where participants make use of visual modalities, as well as talk, in the coordination of collaborative action. This most basic observation leads to a fundamental question: what is the place of multimodal resources such as these in the organisation of turn-taking for conversation? To answer this question, we collected a corpus of both dyadic and triadic face-to-face interactions between adults, with the aim to build on existing observations of the use of visual bodily modalities in conversation (e.g., Duncan, 1972; Goodwin, 1981; Kendon, 1967; Lerner 2003; Mondada 2007; Oloff, 2013; Rossano, 2012; Sacks & Schegloff, 2002; Schegloff, 1998). The corpus retains the spontaneity and naturalness of everyday talk as much as possible while combining it with state-of-the-art technology to allow for exact, detailed analyses of verbal and visual conversational behaviours. Each participant (1) was filmed by three high definition video cameras (providing a frontal plus two lateral views) allowing for fine-grained, frame-by-frame analyses of bodily conduct, as well as the precise measurement of how individual bodily behaviours are timed with respect to each other, and with respect to speech; (2) wore a head-mounted microphone providing high quality recordings of the audio signal suitable for determining on- and off-sets of speaking turns, as well as inter-turn gaps, with high precision, (3) wore head-mounted eye-tracking glasses to monitor eye movements and fixations overlaid onto a video recording of the visual scene the participant was viewing at any given moment (including the other [two] participant[s] and the surroundings in which the conversation took place). The HD video recordings of body behaviour, the eye-tracking video recordings, and the audio recordings from all 2/3 participants engaged in each conversation were then integrated within a single software application (ELAN) for synchronised playback and analysis. The analyses focus on the use and interplay of visual bodily resources, including eye gaze, co-speech gestures, and body posture, during conversational coordination, as well as on how these signals interweave with participants’ turns at talk. The results provide insight into the process of turn projection as evidenced by participants’ gaze behaviour with a focus on the role different bodily cues play in this context, and into how concurrent visual and verbal resources are involved in turn construction and turn allocation. This project will add to our understanding of core issues in the field of CA, such as by elucidating the role of multi-modality and number of participants engaged in talk-in-interaction (Schegloff, 2009).
References
Duncan, S. (1972). Some signals and rules for taking speaking turns in conversations. Journal of Personality and Social Psychology, 23, 283-92.
Goodwin, C. (1981). Conversational organization: Interaction between speakers and hearers. New York: Academic Press.
Kendon, A. (1967). Some functions of gaze-direction in social interaction. Acta Psychologica, 26, 22-63.
Lerner, G. H. (2003). Selecting next speaker: The context-sensitive operation of a context-free organization. Language in Society, 32(02), 177–201.
Mondada, L. (2007). Multimodal resources for turn-taking: Pointing and the emergence of possible next speakers. Discourse Studies, 9, 195-226.
Oloff, F. (2013). Embodied withdrawal after overlap resolution. Journal of Pragmatics, 46, 139-156.
Rossano, F. (2012). Gaze behavior in face-to-face interaction. PhD Thesis, Radboud University Nijmegen, Nijmegen.
Sacks, H., & Schegloff, E. (2002). Home position. Gesture, 2, 133-146.
Schegloff, E. (1998). Body torque. Social Research, 65, 535-596.
Schegloff, E. (2009). One perspective on Conversation Analysis: Comparative perspectives. In J. Sidnell (ed.), Conversation Analysis: Comparative perspectives, pp. 357-406. Cambridge: Cambridge University Press. -
Holler, J. (2014). How communicative intent influences adults’ co-speech gestures. Talk presented at the 4th Nijmegen Gesture Centre Workshop: Communicative intention in gesture and action. Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands. 2014-06-04 - 2014-06-05.
-
Holler, J. (2014). Social psycholinguistics: multi-modal language use and language comprehension in situ. Talk presented at the Multimodality in Interaction & Discourse workshop. University of Leuven, Belgium.
-
Humphries, S., Holler, J., Crawford, T., & Poliakoff, E. (2014). Representing actions in co-speech gestures in Parkinson's Disease. Poster presented at the 6th International Society for Gesture Studies Congress, San Diego, California, USA.
Abstract
Parkinson’s disease (PD) is a progressive, neurological disorder caused by the loss of dopaminergic cells in the basal ganglia, which is involved in motor control. This leads to the cardinal motor symptoms of PD: tremor, bradykinesia (slowness of movement), rigidity and postural instability. PD also leads to general cognitive impairment (executive function, memory, visuospatial abilities), and language impairments; PD patients perform worse at language tasks such as providing word definitions and naming objects, generating lists of verbs, and naming actions. Thus, there seems to be a particular impairment for action-language. Despite the fact that action and language are both impaired in PD, little research has explored if and how co-speech gestures, which embody a link between these two domains, are affected. The Gesture as Simulated Action hypothesis argues that gestures arise from cognitive representations or simulations of actions. It has been argued that people with PD may be less able to cognitively represent, simulate and imagine actions, which could account for their action-language impairment and may also mean that gestures are affected. Recently, it has been shown that while there is not a straightforward reduction in gesture use in PD, patients’ gestures which described actions are less precise/informative than those of controls. However, participants only described two actions, and to a knowing addressee (so the task was not communicative). The present study extended this by asking participants to describe a wide range of actions in an apparently communicative task, and compared viewpoint as well as precision between the two groups. Gesture viewpoint was examined in order to provide a window into the cognitive representations underlying gesture, by demonstrating whether or not the speaker has placed themselves as the agent within the action (character viewpoint), requiring a cognitive simulation of the action. Overall, studying gestures in PD has clinical relevance, and will provide insight into the cognitive basis of gestures in healthy people. 25 PD patients and 25 age-matched controls viewed 10 pictures and 10 videos depicting a range of actions and described them to help an addressee identify the correct stimulus. No difference in the rate of gesture production between the two groups was found. However, the precision of gestures describing actions was found to be significantly lower in the PD group. Furthermore, the proportion of gestures produced from character viewpoint was found to differ between the groups, with PD patients producing significantly less C-VPT gestures. This suggests that the cognitive representations underlying the gestures have changed in PD, and that people with PD are less able to imagine themselves as the agent of the action. This supports the GSA hypothesis by demonstrating that gesture production changes when the ability to perform and to cognitively simulate actions is impaired. Our next study will assess the relationships between cognitive factors affected in PD and gesture, and motor imagery ability and gesture. The study will also examine gestures produced by people with PD when describing a wide range of semantic content in various communicative situations. -
Kelly, S., Healey, M., Ozyurek, A., & Holler, J. (2014). The integration of gestures and actions with speech: Should we welcome the empty-handed to language comprehension?. Talk presented at the 6th Conference of the International Society for Gesture Studies (ISGS 6). San Diego, CA, USA. 2014-07-08 - 2014-07-11.
Abstract
Background: Gesture and speech are theorized to form a single integrated system of meaning during language production (McNeill, 1992), and evidence is mounting that this integration applies to language comprehension as well (Kelly, Ozyurek & Maris, 2010). However, it is unknown whether gesture is uniquely integrated with speech or is processed like any other manual action. To explore this issue, we compared the extent to which speech is integrated with hand gestures versus actual actions on objects during comprehension.

Method: The present study employed a priming paradigm in two experiments. In Experiment 1, subjects watched multimodal videos that presented auditory (words) and visual (gestures and actions on objects) information. Half the subjects related the audio information to a written prime presented before the video, and the other half related the visual information to the written prime. For half of the multimodal video stimuli, the audio and visual information was congruent, and for the other half, incongruent. The task was to press one button if the written prime was the same as the visual (31 subjects) or audio (31 subjects) information in the target video or another button if different. RT and accuracy were recorded. In Experiment 2, we reversed the priming sequence with a different set of 18 subjects. Now the video became the prime and the written verb followed as the target, but the task was the same with one difference: to indicate whether the written target was related or unrelated to only the audio information (speech) in the preceding video prime. ERPs were recorded to the written targets.

Results: In Experiment 1, subjects in both the audio and visual target tasks were less accurate when processing stimuli in which gestures and actions were incongruent versus congruent with speech, F(1, 60) = 22.90, p < .001, but this effect was less prominent for speech-action than for speech-gesture stimuli. However, subjects were more accurate when identifying actions versus gestures, F(1, 60) = 8.03, p = .006. In Experiment 2, there were two early ERP effects. When primed with gesture, incongruent primes produced a larger P1, t(17) = 3.75, p = 0.002, and P2, t(17) = 3.02, p = 0.008, to the target words than the congruent condition in the grand-averaged ERPs (reflecting early perceptual and attentional processes). However, there were no significant differences between congruent and incongruent conditions when primed with action.

Discussion: The incongruency effect replicates and extends previous work by Kelly et al. (2010) by showing not only a bi-directional influence of gesture and speech, but also of action and speech. In addition, the results show that while actions are easier to process than gestures (Exp. 1), gestures may be more tightly tied to the processing of accompanying speech (Exps. 1 & 2). These results suggest that even though gestures are perceptually less informative than actions, they may be treated as communicatively more informative in relation to the accompanying speech. In this way, the two types of visual information might have different status in language comprehension. -
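To make the congruency comparison reported above concrete, the sketch below contrasts per-subject mean accuracy in the congruent versus incongruent conditions with a paired t-test. This is a simplification of the full ANOVA reported in the abstract, and the data and variable names are invented.

```python
# Illustrative sketch only: a simplified check of the speech-gesture/action
# congruency effect on response accuracy, using invented per-subject means.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_subjects = 62

# Hypothetical mean accuracy per subject in each congruency condition
acc_congruent = rng.normal(0.95, 0.03, n_subjects).clip(0, 1)
acc_incongruent = rng.normal(0.90, 0.05, n_subjects).clip(0, 1)

t, p = stats.ttest_rel(acc_congruent, acc_incongruent)
print(f"Congruency effect: t({n_subjects - 1}) = {t:.2f}, p = {p:.4f}")
```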
Kendrick, K. H., & Holler, J. (2014). Triadic participation in question-answer sequences. Talk presented at the Anéla Study Group for Discourse Analysis (AWIA) Symposium. Universiteit Utrecht, The Netherlands. 2014-10-02 - 2014-10-02.
-
Peeters, D., Chu, M., Holler, J., Hagoort, P., & Ozyurek, A. (2014). Behavioral and neurophysiological correlates of communicative intent in the production of pointing gestures. Poster presented at the Annual Meeting of the Society for the Neurobiology of Language [SNL2014], Amsterdam, the Netherlands.
-
Peeters, D., Chu, M., Holler, J., Hagoort, P., & Ozyurek, A. (2014). Behavioral and neurophysiological correlates of communicative intent in the production of pointing gestures. Talk presented at the 6th Conference of the International Society for Gesture Studies (ISGS6). San Diego, Cal. 2014-07-08 - 2014-07-11.
-
Rowbotham, S., Holler, J., Wearden, A., & Lloyd, D. (2014). I see how you feel: Speakers’ gestures help people to understand their pain. Talk presented at the 6th International Society for Gesture Studies Congress. San Diego, California, USA. 2014-07-08 - 2014-07-11.
Abstract
Pain is a frequent feature of medical consultations and must be communicated effectively if health care providers are to understand the experience and provide treatment. However, pain is difficult to verbalise and spoken pain descriptions are subject to misinterpretation (Schott, 2004). It is well known that when we speak we also produce co-speech hand gestures, and during pain communication these gestures have been found to depict aspects of pain that are not contained in the accompanying speech, such as location, sensation and cause of pain (Rowbotham et al., 2012, 2013a, 2013b). Although recipients are known to be able to comprehend the information contained in gestures produced during descriptions of concrete entities and events (see Hostetter, 2011 for a review), it is not yet known whether this is the case for subjective experiences such as pain.

We investigated whether untrained observers are able to glean any additional information from the gestures that accompany spoken pain descriptions, and whether this can be enhanced through a short instruction session on co-speech gestures. Participants (n = 30 per condition) viewed 20 short video clips (mean length = 7.5 seconds) of pain descriptions under one of three presentation conditions: 1) Speech Only, 2) Speech and Gesture, or 3) Speech and Gesture plus Instruction (a short presentation, prior to the video clips, explaining what co-speech gestures are and the types of pain information they can depict). Following each clip, participants provided a free-text description of the pain, and a “traceable additions” analysis (Kelly et al., 2002) was used to assess whether participants’ descriptions contained any information that was uniquely contained in gestures in the original clips.

Participants who had received instruction in co-speech gestures (Speech and Gesture plus Instruction condition) obtained the most information from gestures, while those who did not have access to gestures (Speech Only condition) obtained the least. There were no differences in the amount of information obtained from speech across the conditions, suggesting that neither having access to gestures nor being instructed to attend to these has any detrimental effect on pain understanding. These results suggest that attending to the speaker’s gestures during pain communication can enhance the recipient’s understanding of this subjective experience. These findings have important implications for communication in medical settings, suggesting that health care professionals may benefit from training in co-speech gestures in order to improve their understanding of patients’ pain experiences. -
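To illustrate the logic of the "traceable additions" measure described above, the sketch below compares invented per-participant counts of gesture-only information units across the three presentation conditions with a one-way ANOVA. It is purely illustrative and not the analysis code used in the study.

```python
# Illustrative sketch only: comparing the number of information units that
# could only have come from gesture across three presentation conditions.
# All scores are invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Hypothetical gesture-only information units per participant (n = 30 each)
speech_only = rng.poisson(1.0, 30)
speech_gesture = rng.poisson(2.5, 30)
speech_gesture_instruction = rng.poisson(4.0, 30)

f, p = stats.f_oneway(speech_only, speech_gesture, speech_gesture_instruction)
print(f"Condition effect on gesture-derived information: F = {f:.2f}, p = {p:.4f}")
```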
Schubotz, L., Holler, J., & Ozyurek, A. (2014). The impact of age and mutually shared knowledge on multi-modal utterance design. Poster presented at the 6th International Society for Gesture Studies Congress, San Diego, California, USA.
Abstract
Previous work suggests that the communicative behavior of older adults differs systematically from that of younger adults. For instance, older adults produce significantly fewer representational gestures than younger adults in monologue description tasks (Cohen & Borsoi, 1996; Feyereisen & Havard, 1999). In addition, older adults seem to have more difficulty than younger adults in establishing common ground (i.e. knowledge, assumptions, and beliefs mutually shared between a speaker and an addressee, Clark, 1996) in speech in a referential communication paradigm (Horton & Spieler, 2007). Here we investigated whether older adults take such common ground into account when designing multi-modal utterances for an addressee. The present experiment compared the speech and co-speech gesture production of two age groups (young: 20-30 years, old: 65-75 years) in an interactive setting, manipulating the amount of common ground between participants.

Thirty-two pairs of naïve participants (16 young, 16 old, same-age pairs only) took part in the experiment. One of the participants (the speaker) narrated short cartoon stories to the other participant (the addressee) (task 1) and gave instructions on how to assemble a 3D model from wooden building blocks (task 2). In both tasks, we varied the amount of information mutually shared between the two participants (common ground manipulation). Additionally, we also obtained a range of cognitive measures from the speaker: verbal working memory (operation span task), visual working memory (visual patterns test and Corsi block test), processing speed and executive functioning (trail making test parts A + B) and a semantic fluency measure (animal naming task). Preliminary data analysis of about half the final sample suggests that overall, speakers use fewer words per narration/instruction when there is shared knowledge with the addressee, in line with previous findings (e.g. Clark & Wilkes-Gibbs, 1986). This effect is larger for young than for old adults, potentially indicating that older adults have more difficulties taking common ground into account when formulating utterances. Further, representational co-speech gestures were produced at the same rate by both age groups regardless of common ground condition in the narration task (in line with Campisi & Özyürek, 2013). In the building block task, however, the trend for the young adults is to gesture at a higher rate in the common ground condition, suggesting that they rely more on the visual modality here (cf. Holler & Wilkin, 2009). The same trend could not be found for the old adults. Within the next three months, we will extend our analysis a) by taking a wider range of gesture types (interactive gestures, beats) into account and b) by looking at qualitative features of speech (information content) and co-speech gestures (size, shape, timing). Finally, we will correlate the resulting data with the data from the cognitive tests.

This study will contribute to a better understanding of the communicative strategies of a growing aging population as well as to the body of research on co-speech gesture use in addressee design. It also addresses the relationship between cognitive abilities on the one hand and co-speech gesture production on the other hand, potentially informing existing models of co-speech gesture production. -
Tutton, M., & Holler, J. (2014). Gesturing when common ground exists: Is gesture rate determined by cognitive load or communicative context?. Talk presented at the 5th UK Cognitive Linguistics Conference. Lancaster, UK. 2014-07-29 - 2014-07-31.
Abstract
Common ground (CG), i.e. the knowledge, beliefs and assumptions that interlocutors mutually share in interaction, is fundamental to successful communication (Clark, 1996). An increasing number of studies have shown that speakers use co-speech gestures at the same rate (or even higher) when they share CG as opposed to when they do not (e.g. Campisi & Ozyurek, 2013; Holler & Wilkin, 2009; Holler, Tutton & Wilkin, 2011). There are two alternative explanations for this finding. On the one hand, it has been argued that mentally representing our addressee’s knowledge can require considerable cognitive effort (Pickering & Garrod, 2004). Hence, gesture rate may be high in CG contexts because the cognitive effort involved in mentally representing CG is considerable. In contrast, this high gesture rate may be due to the fact that gestures play an important communicative role, even when conveying information that is already mutually shared (Holler & Wilkin, 2009).

The present study tested these two hypotheses by combining the manipulation of CG with a manipulation of communicative context. We used a 2 (CG) x 3 (communication context) between-participants design (18 participants per condition, N = 108). All participants watched a short film and narrated it to their addressee. Addressees had either seen parts of the film together with the speaker (CG) or not (no-CG). In addition, we manipulated communicative context by asking speakers to narrate their story either face-to-face, via an occluding screen, or into a tape-recorder, a manipulation that has been shown to affect gesture rate in no-CG contexts (Bavelas et al., 2008).

Our results revealed a significant main effect of communicative context, with gesture rate being highest in the face-to-face condition, followed by the screen condition, and lowest in the tape-recorder condition. Importantly, we did not find a main effect of common ground on gesture rate, and no interaction between our two factors. This finding supports the hypothesis that gestures representing CG information are communicatively intended as opposed to being triggered by an increased cognitive load. -
Tutton, M., & Holler, J. (2014). Visualising common ground: for communication or cognition?. Talk presented at the 6th Conference of the International Society for Gesture Studies (ISGS 6). San Diego, CA, USA. 2014-07-08 - 2014-07-11.
Abstract
Common ground (CG), i.e. the knowledge, beliefs and assumptions that interlocutors mutually share in interaction, is fundamental to successful communication (Clark, 1996). In contrast to studies that have found gestural ellipsis when a speaker shares CG with an interlocutor, an increasing number of studies have shown that speakers use co-speech gestures at the same rate (or even higher) when they share CG as opposed to when they do not (e.g. Campisi & Ozyurek, 2013; Holler & Wilkin, 2009; Holler, Tutton & Wilkin, 2011). There are two alternative explanations for this finding. On the one hand, it has been argued that mentally representing our addressee’s knowledge can require considerable cognitive effort (Pickering & Garrod, 2004). In combination with evidence that gesturing helps to reduce cognitive load in cognitively effortful tasks (e.g. Goldin-Meadow, 1999), one hypothesis is that gesture rate is high in CG contexts because the cognitive effort involved in mentally representing CG is high. This contrasts markedly with the hypothesis that gesture rate remains high when CG exists because gestures play an important communicative role, even when conveying information that is already mutually shared (Holler & Wilkin, 2009).

The present study tested these two hypotheses by combining the manipulation of CG with a manipulation of communicative context. We used a 2 (CG) x 3 (communication context) between-participants design (18 participants per condition, N = 108). All participants watched a short film and narrated it to their addressee. Addressees had either seen parts of the film together with the speaker (CG) or not (no-CG). In addition, we manipulated communicative context by asking speakers to narrate their story either face-to-face, via an occluding screen, or into a tape-recorder, a manipulation that has been shown to affect gesture rate in no-CG contexts (Bavelas et al., 2008). If gestures produced in CG contexts are triggered by the cognitive effort of having to mentally represent CG, then social manipulations of this kind should not influence gesture rate. If gestures conveying information already in CG are communicatively intended, however, then we would expect gesture rate to differ across the three conditions.

Our results revealed a significant main effect of communicative context, with gesture rate being highest in the face-to-face condition, followed by the screen condition, and lowest in the tape-recorder condition. Importantly, we did not find a main effect of common ground on gesture rate, and no interaction between our two factors. These results provide several insights. Firstly, they add to the growing body of evidence for maintained/high gesture rates in some common ground contexts. Secondly, they replicate effects of visual access and dialogue on gesture rate found in earlier studies manipulating social interaction. Thirdly, and most importantly, this social interaction effect influenced gesture rates in the common ground and no-common ground conditions equally. This finding is compatible with the account that gestures representing CG information are communicatively intended, but not with a cognitive effort-based explanation. -
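For readers who want to see the reported design laid out concretely, the sketch below simulates a 2 (common ground) x 3 (communicative context) between-participants dataset and runs a two-way ANOVA on gesture rate. It is illustrative only: the simulated effect pattern merely mimics the direction of the reported results and is not the study's data or analysis code.

```python
# Illustrative sketch only: a 2 (common ground) x 3 (communicative context)
# between-participants ANOVA on gesture rate, on invented data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(4)
n_per_cell = 18

rows = []
for cg in ["CG", "no_CG"]:
    for ctx in ["face_to_face", "screen", "tape_recorder"]:
        # Invented means: context lowers gesture rate, CG has no effect
        base = {"face_to_face": 8.0, "screen": 6.0, "tape_recorder": 4.0}[ctx]
        rows.append(pd.DataFrame({
            "common_ground": cg,
            "context": ctx,
            "gesture_rate": rng.normal(base, 2.0, n_per_cell),
        }))
df = pd.concat(rows, ignore_index=True)

model = ols("gesture_rate ~ C(common_ground) * C(context)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```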
Wilby, F., Riddell, C., Lloyd, D., Wearden, A., & Holler, J. (2014). Naming with words and gestures in children with Down Syndrome. Poster presented at the 6th International Society for Gesture Studies Congress, San Diego, California, USA.
Abstract
Several researchers have shown a close relationship between gesture and language in typically developing children and in children with developmental disorders involving delayed or impaired linguistic abilities. Most of these studies reported that, when children are limited in cognitive, linguistic, metalinguistic, and articulatory skills, they may compensate for some of these limitations with gestures (Capone & McGregor, 2004). Some researchers also highlighted that children with Down Syndrome (DS) show a preference for nonverbal communication, using more gestures with respect to typically developing (TD) children (Stefanini, Caselli & Volterra, 2011). The present study investigates the lexical comprehension and production abilities as well as the frequency and the form of gestural production in children with DS. In particular, we are interested in the frequency of gesture production (deictic and representational) and the types of representational gesture produced. Four gesture types were coded: own body, size and shape, body-part-as-object and imagined-object.

Fourteen children with DS (34 months of developmental age, 54 months of chronological age) and a comparison group of 14 typically developing (TD) children (29 months of chronological age) matched for gender and developmental age were assessed through the parent questionnaire MB-CDI and a direct test of lexical comprehension and production (PiNG). Children with DS show a general weakness in lexical comprehension and production. As for the composition of the lexical repertoire, for both groups of children, nouns are understood and produced in higher percentages compared to predicates. Children with DS produce more representational gestures than TD children in the comprehension task, and above all with predicates; on the contrary, both groups of children exhibit the same number of gestures on the MB-CDI and in the lexical production task. Children with DS produced more unimodal gestural answers than the control group. Children from both groups produced all four gesture types (own body 53%, size and shape 9%, body-part-as-object 25%, and imagined-object 14%). Chi-square analysis revealed no significant difference in the type of gesture produced between the two groups of children for both lexical categories. For both groups the distribution of gesture types reflects an item effect (e.g. 100% of the gestures produced for the pictures lion, kissing and washing were own body, and 100% of those produced for small and long were size and shape). For some items (e.g. comb, talking on the phone), children in both groups produced both types (body-part-as-object and imagined-object) with similar frequency. These data on the types of representational gestures produced by the two groups show a similar conceptual representation in TD children and in children with DS, despite a greater impairment of the spoken linguistic abilities in the latter. Future investigations are needed to confirm these preliminary results. -
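The chi-square comparison of gesture-type distributions mentioned above can be illustrated as follows; the counts are invented (loosely based on the reported percentages) and this is not the study's analysis code.

```python
# Illustrative sketch only: a chi-square test comparing the distribution of
# the four representational gesture types across the two groups of children.
# The counts below are invented, not the study's data.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: groups; columns: own body, size/shape, body-part-as-object, imagined-object
counts = np.array([
    [40, 7, 19, 11],   # children with Down Syndrome (hypothetical counts)
    [38, 6, 18, 10],   # typically developing children (hypothetical counts)
])

chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.3f}")
```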
Holler, J. (2013). Gesture use in social context: The influence of common ground on co-speech gesture production in dyadic interaction. Talk presented at the Humanities Lab, Lund University. Lund, Sweden.
-
Holler, J. (2013). Gesture use in social context: The influence of common ground on co-speech gesture production in dyadic interaction. Talk presented at Laboratoire Parole et Langage. Université Aix-Marseille. Aix-en-Provence, France.