Linda Drijvers will defend her thesis on May 13

11 April 2019
On Monday May 13, 2019 at 14:30, Linda Drijvers will defend her thesis entitled “On the oscillatory dynamics underlying speech-gesture integration in clear and adverse listening conditions” in the Aula of Radboud University. As with all defenses, this is a public event, and everyone is welcome to attend.

In face-to-face conversations, we hear people speak and see people making concurrent lip movements and hand gestures. This visual information can make it easier to understand language in adverse listening conditions, such as in a noisy bar, or when you are a non-native speaker of Dutch. How do these lip movements and hand gestures benefit language comprehension when you are a native versus a non-native speaker? And what happens in your brain while you hear and see someone speak in such adverse listening conditions?

By studying the behaviour, eye movements and oscillatory brain activity of participants, Linda Drijvers demonstrated that both native and non-native listeners benefit from visible speech and gestures when comprehending degraded speech. In both groups, this benefit was associated with suppressed oscillatory power in the alpha and beta bands, reflecting engagement of the extended language network as well as visual and motor regions. These oscillatory mechanisms appeared to be shared by native and non-native listeners, and engaging these brain regions is thought to support general unification, simulation and lexical access processes that aid comprehension when speech is degraded.

However, distinct spatiotemporal patterns in the engagement of these regions, together with differences in eye gaze and behavioural results, suggested that a listener's processing strategies may differ depending on the context in which multimodal language is understood. Non-native listeners might find it harder than native listeners to recognize degraded auditory cues. This greater difficulty makes it harder for non-native listeners to integrate speech with the visual information sources (i.e., lip movements and gestures) that can aid comprehension, and it was associated with less engagement of the brain areas involved in unification, simulation and lexical access. These results thus demonstrated that there might be an additive effect on integration when adverse listening conditions arise from both an internal factor, such as being a non-native listener, and an external factor, such as speech degradation.