I completed a bachelor's degree in Dutch Language & Literature (cum laude, 2012), a research master's degree in Cognitive Neuroscience (cum laude, 2014), and a PhD investigating the oscillatory dynamics underlying speech-gesture integration in clear and adverse listening conditions (cum laude, 2019, supervised by Prof. Asli Ozyurek and Prof. Ole Jensen (University of Birmingham)). I then worked as a postdoc with Dr. Judith Holler at the Donders Institute.
From January 2021 onwards, I will lead the Communicative Brain group as part of a Minerva Fast Track Fellowship awarded by the Max Planck Society (MPG) and the Max Planck Institute for Psycholinguistics. My group is part of the Neurobiology of Language department.
I'm interested in how the brain combines what you see and what you hear during natural, face-to-face communication.
Face-to-face communication typically involves binding auditory and visual input, such as visible speech and co-speech gestures. These visual signals can help a listener understand speech in adverse listening conditions, such as in noise or when listening to a non-native language.
I am interested in the cognitive and neural mechanisms that underlie such multimodal comprehension and production processes. For example, does multimodal language facilitate a listener’s predictions of upcoming speech, and thereby support language production? Is our brain ‘hard-wired’ for processing multimodal language in a natural, face-to-face context?
I use behavioral methods and eye-tracking to study the cognitive underpinnings of these phenomena, and magnetoencephalography (MEG), electroencephalography (EEG), and rapid invisible frequency tagging (RIFT) to investigate the neural oscillatory dynamics that support these processes.
Specifically, I am currently using a dual-EEG approach to study how oscillatory dynamics support in situ multimodal interaction, and whether natural, face-to-face communication induces a ‘special mode’ for processing communicative messages. In addition, I use RIFT to study how listeners allocate their attention to different auditory and visual signals in natural, face-to-face conversations.
I’m also passionate about science communication. Please see KNAW’s Faces of Science for blogs/videos on my research, and follow ScienceBattle's theater schedule to see me defend why I think studying multimodal communication is so important.