Judith Holler


  • Hintz, F., Strauß, A., Khoe, Y., & Holler, J. (2023). Language prediction in multimodal contexts: The contribution of iconic gestures to anticipatory sentence comprehension. OSF Preprints. doi:10.17605/OSF.IO/679TM.


    There is a growing body of research demonstrating that during comprehension, language users
    predict upcoming information. Prediction has been argued to facilitate dialog in that listeners try to
    predict what the speaker will say next to be able to plan their own utterance early. Such behavior may
    enable smooth transitions between turns in conversation. In face-to-face dialog, speakers produce a
    multitude of visual signals, such as manual gestures, in addition to speech. Previous studies have shown
    that comprehenders integrate semantic information from speech and corresponding iconic gestures when
    these are presented simultaneously. However, in natural conversation, iconic gestures often temporally
    precede their corresponding speech units with substantial lags. Given the temporal lags in gesture-
    speech timing and the predictive nature of language comprehension, a recent theoretical framework
    proposed that listeners exploit iconic gestures in the service of predicting upcoming information. The
    proposed study aims to test this claim. We will record the electroencephalogram (EEG) from 80 Dutch adults
    while they are watching videos of an actress producing discourses. The stimuli consist of an
    introductory and a target sentence; the latter contains a target noun. Depending on the preceding
    discourse, the target noun is either predictable or not. Each target noun is paired with an iconic gesture
    whose presentation in the video is timed such that the gesture stroke precedes the onset of the spoken
    target either by 520 ms (earlier condition) or by 130 ms (later condition). Analyses of event-related
    potentials preceding and following target onset will reveal whether and to what extent targets were
    pre-activated by iconic gestures. If the findings support the notion that iconic co-speech gestures
    contribute to predictive language comprehension, they lend support to the recent theoretical framework
    of face-to-face conversation and offer one possible explanation for the smooth transitions between turns
    in natural dialog.
  • Hömke, P., Levinson, S. C., & Holler, J. (2022). Eyebrow movements as signals of communicative problems in human face-to-face interaction. PsyArXiv. doi:10.31234/osf.io/3jnmt.


    Repair is a core building block of human communication, allowing us to address problems of understanding in conversation. Past research has uncovered the basic mechanisms by which interactants signal and solve such problems. However, the focus has been on verbal interaction, neglecting the fact that human communication is inherently multimodal. Here, we focus on visual signals particularly prevalent in signaling problems of understanding: eyebrow furrows and raises. We present a corpus study showing that verbal repair initiations accompanied by eyebrow furrows are more likely to be responded to with clarifications as repair solutions, that repair initiations preceded by eyebrow actions as preliminaries get repaired faster (by around 230 ms), and that eyebrow furrows alone can be sufficient to occasion clarification. We also present an experiment based on virtual reality technology, revealing that addressees’ eyebrow furrows have a striking effect on speakers’ speech, leading speakers to produce answers to questions that are several seconds longer than when they do not perceive addressee eyebrow furrows. Together, the findings demonstrate that eyebrow movements play a communicative role in initiating repair in spoken language rather than being merely epiphenomenal. Thus, they should be considered core coordination devices in human conversational interaction.
  • Ter Bekke, M., Drijvers, L., & Holler, J. (2020). The predictive potential of hand gestures during conversation: An investigation of the timing of gestures in relation to speech. PsyArXiv Preprints. doi:10.31234/osf.io/b5zq7.


    In face-to-face conversation, recipients might use the bodily movements of the speaker (e.g. gestures) to facilitate language processing. It has been suggested that one way through which this facilitation may happen is prediction. However, for this to be possible, gestures would need to precede speech, and it is unclear whether this is true during natural conversation. In a corpus of Dutch conversations, we annotated hand gestures that represent semantic information and occurred during questions, as well as the word(s) corresponding most closely to the gesturally depicted meaning. Thus, we tested whether representational gestures temporally precede their lexical affiliates. Further, to see whether preceding gestures may indeed facilitate language processing, we asked whether the gesture-speech asynchrony predicts the response time to the question the gesture is part of. Gestures and their strokes (the most meaningful movement component) indeed preceded the corresponding lexical information, thus demonstrating their predictive potential. However, while questions with gestures received faster responses than questions without, there was no evidence that questions with larger gesture-speech asynchronies received faster responses. These results suggest that gestures indeed have the potential to facilitate predictive language processing, but further analyses on larger datasets are needed to test for links between asynchrony and processing advantages.