During face-to-face communication, language users need to integrate a myriad of signals to make sense of each other. It remains an unresolved question how language users integrate these signals into a coherent message and communicate with each other in complex, natural face-to-face settings.
The goal of the Communicative Brain group, led by Linda Drijvers, is to understand how the brain integrates auditory and visual signals into a coherent message during multimodal, face-to-face conversations. The core theory we want to test is whether and how oscillatory neural activity plays a role in integrating these different sources of information within and between conversational partners.
One of the questions we aim to answer within our group is how conversational partners manage to align with each other while they are communicating. One proposal is that this alignment arises because intrinsic brain rhythms operate in sync. Synchronization might facilitate the binding of auditory and visual signals within one brain, and synchronizing brain waves between brains might facilitate alignment, mutual intelligibility, and joint attention. However, it is currently unknown whether synchronization is sufficient, or even required, for communication.
Another question we are interested in is how listeners dynamically shift their attention between different signals and weigh the reliability of those signals during natural conversations. Simply put: how do listeners decide what is relevant, and when?
We will investigate these questions using cutting-edge techniques, including dual-EEG, MEG, rapid invisible frequency tagging, multi-brain stimulation, and detailed behavioural analyses of auditory and visual signals in interactive contexts.
The Communicative Brain group is funded by a Minerva Fast Track Fellowship, awarded by the MPG, as well as the Max Planck Institute for Psycholinguistics. The group is part of the Neurobiology of Language department.