Audiovisual integration through low-frequency phase synchronization between regions

This project focuses on the neurobiological basis of how listeners integrate auditory and visual signals, and how they distribute their attention over these signals during communication. In a first study, we are using rapid invisible frequency tagging (RIFT) in combination with MEG to study the role of low-frequency neural oscillations in this process. RIFT is a promising tool for generating steady-state evoked responses at high frequencies. An important advantage of RIFT is that spontaneous neuronal oscillations at lower frequencies are not entrained by the tagging frequencies. Likely because of this, previous studies have been able to examine the non-linear interactions between auditory and visual inputs in the brain and observed clear intermodulation frequencies (Drijvers, Jensen, & Spaak, 2020). In the present study, we exploit this advantage and use RIFT to investigate whether the integration and interaction of audiovisual information are established by low-frequency phase synchronization between regions. Moreover, we will test whether and how attention modulates the integration of auditory and visual signals in conversations, to answer the question of how listeners decide what is relevant when.
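
To illustrate the intermodulation logic, the sketch below simulates two frequency-tagged inputs and a non-linear (here, multiplicative) interaction between them. The tagging frequencies (61 Hz auditory, 68 Hz visual) and the form of the interaction are illustrative assumptions, not the experimental design; the point is that a non-linear combination produces spectral energy at the sum and difference of the tagging frequencies (here 129 Hz and 7 Hz), which is the signature of integration.

    import numpy as np

    fs = 1000.0                      # sampling rate (Hz), illustrative
    t = np.arange(0, 10, 1 / fs)     # 10 s of simulated signal
    f_aud, f_vis = 61.0, 68.0        # example tagging frequencies (Hz)

    aud = np.sin(2 * np.pi * f_aud * t)   # auditory steady-state response
    vis = np.sin(2 * np.pi * f_vis * t)   # visual steady-state response

    # A non-linear (multiplicative) interaction produces energy at the
    # intermodulation frequencies f_vis - f_aud and f_vis + f_aud.
    interaction = aud * vis
    signal = aud + vis + 0.5 * interaction

    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)

    # Report the spectral peaks: expect 7, 61, 68, and 129 Hz.
    peaks = freqs[spectrum > 0.1 * spectrum.max()]
    print(np.round(peaks, 1))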

The role of intra- and inter-brain neural synchrony in face-to-face communication

Synchronized neural oscillations are thought to be relevant for selecting and transferring information within and between cortical areas. In particular, neural synchrony seems to facilitate the integration of stimuli coming from different sensory modalities. In recent years, neural synchrony has been investigated not only within a brain (intra-brain synchrony), but also between brains (inter-brain synchrony) while individuals engage in social interactions.

During face-to-face communication, auditory and visual information (i.e., speech and gestures) need to be integrated into a meaningful message, and this process is likely driven by neural synchrony between the cortical areas involved. Nonetheless, the role of neural synchrony during natural conversation is not yet clear: it is still unknown whether intra- and inter-brain synchrony are necessary and/or beneficial for effective face-to-face communication. Within this project, we will test the hypothesis that neural synchrony plays a mechanistic role in integrating different sources of information within and between conversational partners. More specifically, we aim to unravel whether intra- and inter-brain neural synchrony are beneficial for, or even required for, successful communication. In a later stage, non-invasive multi-brain stimulation (tACS) will be used to externally manipulate endogenous neural synchrony that may be relevant for successful communication, in order to measure its causal effect on face-to-face communication in both native and non-native language contexts.
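
As a sketch of how such synchrony can be quantified, the example below computes the phase-locking value (PLV) between two signals; the same measure applies whether the signals come from two cortical areas of one participant (intra-brain) or from one sensor per conversational partner (inter-brain). The frequency band, filter settings, and toy signals are illustrative assumptions, not our analysis pipeline.

    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    def plv(x, y, fs, band=(4.0, 8.0)):
        """Phase-locking value between two equal-length signals in a band.

        x, y can be two MEG sensors from one brain, or one sensor per
        conversational partner; the theta band here is an assumption.
        """
        # Zero-phase band-pass filter to isolate the frequency band.
        b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
        xf, yf = filtfilt(b, a, x), filtfilt(b, a, y)
        # Instantaneous phase from the analytic (Hilbert) signal.
        dphi = np.angle(hilbert(xf)) - np.angle(hilbert(yf))
        # PLV = consistency of the phase difference over time (0..1).
        return np.abs(np.mean(np.exp(1j * dphi)))

    # Toy check: two theta-band signals with a fixed phase lag -> PLV near 1.
    fs = 1000.0
    t = np.arange(0, 5, 1 / fs)
    x = np.sin(2 * np.pi * 6 * t) + 0.5 * np.random.randn(t.size)
    y = np.sin(2 * np.pi * 6 * t + 1.0) + 0.5 * np.random.randn(t.size)
    print(round(plv(x, y, fs), 2))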

Speech intelligibility in noisy environments

People are almost always in the presence of background noise, which can affect the intelligibility of speech in face-to-face conversations. We study which factors in background noise make speech more or less intelligible. Moreover, we study what role iconic hand gestures play in speech intelligibility in noisy environments. We are currently working on a project in which we test the effects of native and foreign-language background babble and iconic co-speech gestures on speech intelligibility, to better understand what makes conversation easier or harder.
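
For illustration, stimuli for such experiments are typically constructed by mixing target speech with babble at a fixed signal-to-noise ratio (SNR). The sketch below rescales the babble so the mixture reaches a target SNR in dB; the synthetic signals and the -8 dB level are hypothetical examples, and a real experiment would load recorded audio instead.

    import numpy as np

    def mix_at_snr(speech, babble, snr_db):
        """Mix target speech with babble noise at a given SNR (dB).

        speech, babble: 1-D arrays at the same sampling rate; the babble
        is trimmed to the speech duration and rescaled so that
        10 * log10(P_speech / P_babble) equals snr_db.
        """
        babble = babble[: len(speech)]
        p_speech = np.mean(speech ** 2)
        p_babble = np.mean(babble ** 2)
        # Gain that brings the babble power to the target level.
        gain = np.sqrt(p_speech / (p_babble * 10 ** (snr_db / 10)))
        return speech + gain * babble

    # Toy check with synthetic signals at a hypothetical -8 dB SNR.
    rng = np.random.default_rng(0)
    speech = np.sin(2 * np.pi * 220 * np.arange(0, 2, 1 / 16000))
    babble = rng.standard_normal(speech.size)
    mixed = mix_at_snr(speech, babble, snr_db=-8)
    print(mixed.shape)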
