Judith Holler

Biography

My research program investigates human language in the environment in which it evolved, is acquired, and is most commonly used: face-to-face interaction. Within this context, I focus on the semantics and pragmatics of human communication from a multimodal perspective, considering spoken language within the rich visual infrastructure that embeds it, such as manual gestures, head movements, facial signals, and gaze. The core question driving my research is how we encode, decode, and align on meaning in such a multimodal environment, and how social cognition and interactional processes (e.g. pragmatic inferencing, communicative intentions, common ground, turn-taking, grounding, recipient design) shape multimodal language and the cognitive processes that underpin it. Answering this question will allow us to understand how meaning (from individual semantic constituents to speech acts) is produced and understood within the design space in which human communication typically operates, and the constraints and affordances that result for psycholinguistic processing.

My research focus on situated psycholinguistics rests on an interdisciplinary approach that combines the micro-analysis of multimodal language, analyses of conversational interaction informed by conversation analysis (CA), psycholinguistics, and neuroscience. Methodologically, it uses cutting-edge techniques (e.g. motion capture, virtual reality, dual-EEG, mobile eye-tracking) and combines qualitative and quantitative language corpus analyses with controlled experimentation, including dialogic experimental paradigms for both production and comprehension.

I was recently awarded a European Research Council Consolidator Grant funding the current CoAct (Communication in Action) project, which investigates the multimodal architecture of speech acts (in terms of the form and timing of the visual and verbal signals that constitute them) and their cognitive processing, the role of visual communicative signals in predictive language processing and turn-taking, and the cognitive and neural mechanisms that govern the binding and segregation of multimodal signals and gestalts.

Other current research lines focus on alignment in dialogue (including the recipient-designed signalling and collaborative construction of meaning, as well as the marking of information status) and on the role of the addressee in achieving mutual understanding and in shaping the speaker’s multimodal utterances. More recently, I have also begun new research lines that compare multimodal communication across languages and species, apply AI techniques to develop tools for multimodal communication analysis, and apply insights from human-human interaction to interactions between humans and artificial agents.

My research group, Communication in Social Interaction (https://cosilab.org), is based at the Max Planck Institute as well as the Donders Institute for Brain, Cognition & Behaviour (https://www.ru.nl/donders/research/theme-1-language-communication/resea…).

Together with Asli Ozyurek, I also coordinate the Nijmegen Gesture Centre (https://nijmegengesturecentre.wordpress.com/events/).
