Neurobiology of Language -
New Post-Doc Staff
From top left to bottom right: Joost, Matthias, Karin, Basil, and Evelien
Language arrives as a rapid input stream: we read at speeds of around 250 words per minute and listen to more than 200 syllables per minute. A key question in the language sciences is therefore: how do readers and listeners keep up with this speed? Much of my current research investigates how this relates to the brain’s ability to think ahead and generate predictions about upcoming input. In the Neurobiology of Language Department, I plan to use EEG, MEG and neuropsychology to address questions about the mechanisms and consequences of predictive language comprehension. In the long run, the results could have implications for learning and education. In addition, I have interests in language production and memory, and have examined the role of visual representations in language, the comprehension of idiomatic expressions, and the allocation of attention during speaking.
I am an NWO Veni fellow at the Donders Institute for Brain, Cognition and Behaviour. Before coming to the Donders, I did a PhD and a postdoc at the Max Planck Institute for Psycholinguistics and a postdoc at the University of Illinois.
A very common, if not the most common, form of human interaction is spoken language. When we listen to each other in conversation, recognizing speech sounds and combining them into words feels trivially simple. Contrary to this intuition, speech perception is actually incredibly complex in a computational sense, because speech sounds vary due to differences between speakers, background noise, and the influence of surrounding speech sounds (coarticulation). My main line of research investigates how listeners retrieve the intended speech sounds from this variable input, and especially how they rely on context to solve this problem. In a second line of research I investigate how people allocate cognitive resources during turn-taking. I address these questions mostly with behavioral experiments, eye-tracking and electrocorticography (ECoG).
I obtained my PhD from the Max Planck Institute and subsequently spent three years there as a postdoctoral researcher. I am currently working in the Neurobiology of Language department on a three-year Marie Curie grant. During the first two years of the grant I collected speech-related ECoG data at UC Berkeley and UC San Francisco. My main focus now is to analyse these data sets and to develop new projects on related topics.
My core research interests lie in the interaction between language and control processes, and in the activation dynamics and plasticity of their neural underpinnings. Verbal interaction is a highly dynamic process between two or more individuals that engages not only language processing but also attention, executive control, and mentalizing. Recent technical advances have made it possible to study the brain activity of interacting individuals. A main goal of my postdoctoral research project at the NBL department is to better understand the neural underpinnings of the language and control processes involved when information is exchanged in verbal interaction.
Contemporary models of language processing assume that the human brain segments the continuous speech signal into meaningful units such as syllables and words. This speech parsing seems to rely on temporal alignment between neural oscillations and the rhythmic structure of the speech signal. During speech processing, neural oscillations become entrained in the gamma and theta frequency bands, which correspond to phoneme and syllable durations, respectively. Previous research suggests that neural entrainment in these two frequency bands may be essential for translating the smallest units of speech (phonemes) into meaningful phonological units (syllables). The aim of the present project is to test these neurophysiological correlates and their function during language processing by means of frequency-specific neuromodulation (transcranial alternating current stimulation, tACS), which influences the neural entrainment of language-associated brain oscillations in the gamma and theta frequency bands. The assumption is that pre-linguistic processing relies on phonemic brain oscillations in the gamma frequency band, whereas language processing within the language network may rely more strongly on the syllabic frequency domain (i.e., the theta frequency band). This hypothesis will be examined in three studies in which we selectively interfere with neural entrainment in the syllabic and phonemic frequency bands. The first study assesses whether the mapping of speech sounds onto their articulatory representations within the language network relies on theta phase coupling. The second study examines whether interhemispheric asymmetry in the pre-linguistic processing of consonant-vowel syllables can be modulated by gamma-frequency stimulation. Finally, the third study targets the neurophysiological effects of transcranial alternating current stimulation during linguistic processing in silent verbal repetition and verbal production tasks.
The visual world paradigm has been pivotal in providing evidence for the role of prediction in language comprehension (e.g., Altmann & Kamide, 1999). The ecological validity of this paradigm is questionable, however, as participants are usually presented with stimuli that do not reflect the visual and/or auditory richness of everyday communication. Moreover, studies providing evidence for prediction have used highly predictive stimuli, implicitly encouraging participants to use prediction to complete the task. Visual world studies may therefore show what listeners can do, not what they actually do in everyday communication in visual environments. In my post-doctoral position I will use Virtual Reality to determine whether language prediction, as observed under controlled experimental conditions, also occurs in ecologically valid, rich, immersive scenarios.