Gaze and the organization of turn-taking in triadic face-to-face interaction

Holler, J., & Kendrick, K. H. (2014). Gaze and the organization of turn-taking in triadic face-to-face interaction. Talk presented at the 6th Conference of the International Society for Gesture Studies (ISGS 6). San Diego, CA, USA. 2014-07-08 - 2014-07-11.
The primordial site of conversation is face-to-face social interaction, where participants make use of visual modalities, as well as talk, in the coordination of collaborative action (Clark, 1996). This observation leads to a fundamental question: what is the place of multimodal resources such as these in the organisation of turn-taking for conversation? To answer this question, we collected a corpus of both dyadic and triadic face-to-face interactions between adult native English speakers, with the aim of building on existing observations of the use of visual bodily modalities in conversation (e.g., Duncan, 1972; Goodwin, 1981; Kendon, 1967; Lerner, 2003; Mondada, 2007; Oloff, 2013; Rossano, 2012; Sacks & Schegloff, 2002; Schegloff, 1998).

The corpus retains much of the spontaneity and naturalness of everyday talk while combining it with state-of-the-art technology to allow for exact, detailed analyses of verbal and visual conversational behaviours. Each participant (1) was filmed by three high-definition video cameras (providing a frontal plus two lateral views), allowing for fine-grained, frame-by-frame analyses of bodily conduct, as well as the precise measurement of how individual bodily behaviours are timed with respect to each other and with respect to speech; (2) wore a head-mounted microphone, providing high-quality recordings of the audio signal suitable for determining the on- and offsets of speaking turns, as well as inter-turn gaps, with high precision; and (3) wore head-mounted eye-tracking glasses to monitor eye movements and fixations, overlaid onto a video recording of the visual scene the participant was viewing at any given moment (including the other participant or participants and the surroundings in which the conversation took place). The HD video recordings of bodily behaviour, the eye-tracking video recordings, and the audio recordings from all two or three participants engaged in each conversation were then integrated within a single software application (ELAN) for synchronised playback and analysis. All data have been transcribed and coded for co-speech gestures and gaze fixations on a frame-by-frame basis. The large amount of data obtained from this corpus is currently being analysed both qualitatively and quantitatively.

The project aims to shed light on the cognitive puzzle that turn-taking presents us with (Levinson, 2013): interlocutors are confronted with the challenge of comprehending an ongoing turn while, at the same time, planning a response and estimating when the current speaker's talk will end, in order to time their contribution as precisely as possible (the average gap between turns is a mere 200 ms). The results from this project provide insight into the process of turn projection as evidenced by participants' gaze behaviour, with a focus on the role different bodily cues play in this context. Our findings so far show that co-speech gestures may play an important role in this process by guiding the projection of upcoming turn boundaries and next actions. In all, this project elucidates the role of multimodality in the organisation of turns at talk and in the cognitive processes that underlie this organisation.
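As an illustration of the kind of timing measurement described above, the following minimal sketch computes inter-turn gaps from turn annotations exported as tab-delimited text (ELAN supports such exports). The file name, column names, and tier layout are hypothetical assumptions for the example, not details of the corpus or its coding scheme.

```python
# Hypothetical sketch: measuring inter-turn gaps from a tab-delimited export
# of turn annotations (columns assumed: speaker, start_ms, end_ms).
import csv

def load_turns(path):
    """Read one row per speaking turn and sort by onset time (ms)."""
    turns = []
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f, delimiter="\t"):
            turns.append({
                "speaker": row["speaker"],
                "start_ms": int(row["start_ms"]),
                "end_ms": int(row["end_ms"]),
            })
    return sorted(turns, key=lambda t: t["start_ms"])

def inter_turn_gaps(turns):
    """Gap (ms) between the offset of each turn and the onset of the next
    turn by a different speaker; negative values indicate overlap."""
    gaps = []
    for prev, nxt in zip(turns, turns[1:]):
        if nxt["speaker"] != prev["speaker"]:
            gaps.append(nxt["start_ms"] - prev["end_ms"])
    return gaps

if __name__ == "__main__":
    turns = load_turns("turns_export.tsv")  # hypothetical export file
    gaps = inter_turn_gaps(turns)
    if gaps:
        mean_gap = sum(gaps) / len(gaps)
        print(f"mean gap: {mean_gap:.0f} ms over {len(gaps)} speaker transitions")
```

On this measure, a mean close to 200 ms would correspond to the average inter-turn gap cited above; the same onset and offset times can also be aligned with gesture and gaze annotations to time bodily behaviours relative to speech.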
Publication type
Talk
Publication date
2014