Judith Holler delivers keynote at Interspeech 2025

01 September 2025
Judith Holler, Senior Investigator in the Multimodal Language Department, delivered a keynote address at Interspeech 2025, held in Rotterdam from 17–21 August.

This year’s conference, themed ‘Fair and Inclusive Speech Science and Technology’, celebrated the richness of speech diversity across individuals and languages, with a focus on advancing research and technology in equitable and inclusive ways.

In her keynote, Holler explored how hand gestures, facial expressions, and head movements are organized to convey meaning in conversation, and how their presence and timing shape comprehension and response. Drawing on complementary methodologies, she presented:

  • Multimodal corpus studies (qualitative and quantitative), showing how visual signals often precede speech.
  • Experimental comprehension studies (including behavioral and EEG methods), inspired by the corpus findings, that use multimodally animated virtual characters to test the causal effects of bodily signals on comprehension.

Her findings demonstrate that visual bodily signals are not merely supplementary, but rather an integral part of how semantic and pragmatic meaning is communicated in interaction. These signals enhance language processing, particularly because of their timing and predictive potential within conversational flow.

Holler’s keynote highlighted the central role of multimodal communication in understanding human language and its implications for developing fair and inclusive speech technologies.