The Representation and Computation of Structure (RepCom) group
Our brains turn vibrations in the air (speech) into complex meaning (the linguistic structures we perceive during language comprehension). We can just as easily transform the complex meanings in our heads back into vibrations in the air (language production). On top of that, we routinely say and understand things we have never heard before.
We can do this because human language is compositional, a property that sets it apart from other perception-action systems in the mind and brain but that makes language difficult to account for within contemporary models of cognition and from a biological-systems perspective. The structure of language lets us understand and produce complex meanings, yet we know very little about how this actually happens.
The Representation and Computation of Structure (RepCom) group works toward unifying a basic insight from linguistic theory - that language is structured - with the currency of neural computation. We attempt to reconcile the powerful core properties of linguistic structure with principles from cognitive psychology, memory, network computation, and neurophysiology in order to develop a theory of how linguistic structure and meaning arise in the mind and brain and underlie both speaking and listening.
The big questions
In the RepCom group, we focus on developing a mechanistic theory, grounded in neurophysiological principles of computation, of how linguistic structures are represented in language production and comprehension. Few contemporary theories and models of language processing attempt to explain phenomena in both production and comprehension, and fewer still aim for mechanistic accounts with neurophysiological and neurobiological plausibility.
We ask questions such as:
- How do we generate higher-level structures (e.g., phrases and sentences) from component parts (e.g., morphemes and words)?
- Which of the mental representations and processing mechanisms that carry out this structure building are common to production and comprehension? Which are distinct?
- Can the mechanisms involved in language processing be accounted for by, or decomposed into, generalised sub-routines? How might these be realised in a neurophysiological system?
- How do finite neural systems like brains achieve the limitless expressive power of human language?
- How can we better link neural oscillations during speech and language processing to the representations that seem to underlie production and comprehension?
We are currently working on the following projects:
- How are abstract linguistic units (lexical, grammatical, and semantic knowledge) encoded in brain rhythms during spoken language comprehension?
Greta Kaufeld (PhD student), Hans Rutger Bosker, Andrea E. Martin
- How do sensory (bottom-up, exogenous) and knowledge-related (top-down, endogenous) signals integrate and trade off during language processing?
Hans Rutger Bosker, Andrea E. Martin
- How do the "building blocks" of abstract linguistic units (e.g., lexical and prosodic stress) bootstrap higher-level linguistic structures in brain rhythms?
Phillip Alday, Andrea E. Martin
- How are units of meaning assembled for production and comprehension? What role does statistical learning play?
Fan Bai (PhD student), Andrea E. Martin, Antje Meyer
- What properties are necessary for theories and models to compute the kinds of structures language requires? How can these systems be realised in the mind and brain?
Andrea E. Martin
- Can a single computational architecture account for the similarities and differences between speaking and listening? What mechanisms and representations are key in each modality and which differ?
Andrea E. Martin, Antje Meyer
How do we conduct our research?
In the RepCom group we develop cutting-edge methods and use them to tackle our research questions. We primarily use behavioural measures (reaction times, judgments, and eye movements), computational modelling, and electrophysiology (magnetoencephalography (MEG) and electroencephalography (EEG)) to understand how neural oscillations might underlie both speaking and listening and, specifically, how oscillations might encode the structures and meanings discussed above.
External collaborators and former members
Jonathan R. Brennan (University of Michigan)
Leonidas A. A. Doumas (University of Edinburgh)
Patrick Sturt (University of Edinburgh)
Wibke Naumann (BA intern)
Anna Ravenschlag (MA intern)
Sarah von Grebmer zu Wolfsthurn (MA intern)
- TEMPoral Organisation of Speech (TEMPOS)
How is it possible that we can have a proper conversation with someone even when they are talking very fast, produce uhm’s all the time, or have to shout over several other talkers in a noisy café? How is it possible that we seem to plan and produce words effortlessly, in a matter of milliseconds?
Having a simple conversation often seems rather easy, but at closer inspection it takes place under substantial time pressure. Speaking too slowly, too late, or too early can result in disrupted communication. At the same time, listeners have to, for instance, keep track of the speech rate of a given talker, even in noisy acoustic surroundings (e.g., in busy traffic). In this research group, we are interested in how talkers manage to produce the right words at the right time and how listeners are capable of understanding speech produced at different rates and in noisy environments.
Speech perception, in turn, involves decoding a fleeting communicative signal with substantial temporal variation. In the TEMPOS group, we investigate how speakers control the temporal encoding of a spoken communicative message (speech planning), and how listeners manage to successfully decode this transitory speech signal in real time (speech perception). For example, we develop and test computational models of speech planning that aim to account for the short-term regulation of speech rate. We also use neuroimaging, psychoacoustics, and perception experiments to work towards a neurobiologically plausible framework of speech rate normalisation in speech perception.
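The core idea behind rate normalisation can be sketched with a toy calculation. The function and numbers below are hypothetical illustrations, not the group's actual model: the point is simply that an identical vowel duration yields different normalised values, and so potentially different perceived categories, depending on the rate of the surrounding context.

```python
def normalised_duration(vowel_ms, context_syllable_ms):
    """Duration of a target vowel relative to the mean syllable
    duration of the preceding context (a simple contrastive
    normalisation; hypothetical, for illustration only)."""
    return vowel_ms / context_syllable_ms

# The same 120 ms vowel heard after a fast talker (100 ms syllables)
# versus a slow talker (200 ms syllables):
fast_context = normalised_duration(120, context_syllable_ms=100)
slow_context = normalised_duration(120, context_syllable_ms=200)

# Relative to the fast context the vowel is comparatively long (1.2);
# relative to the slow context it is comparatively short (0.6), so
# identical acoustics can map onto different vowel categories.
print(fast_context, slow_context)  # prints: 1.2 0.6
```

The design choice here is contrastive (ratio-based) normalisation, one of several candidate mechanisms; the empirical and modelling work in the group is what adjudicates between such alternatives.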
The big questions
The work we do as part of the TEMPOS group contributes to a better understanding of how spoken communication can take place so smoothly. Spoken utterances are timed very carefully but few psycholinguistic models of speech production actually explain how, for instance, talkers regulate their speech rate. Listeners are capable of successfully understanding speech produced at various rates, yet the psycholinguistic and neurobiological mechanisms by which they do so are not well understood. By concurrently examining the temporal encoding (in speech planning) and temporal decoding of speech (in speech perception), this approach also uniquely allows us to study how these two processes (production and perception) interact.
We are currently working on the following research projects:
- What are the psychological and neurobiological mechanisms underlying how listeners normalise speech sounds for different speech rates?
Psychological mechanisms: Hans Rutger Bosker, Greta Kaufeld (PhD student), Andrea E. Martin, Eva Reinisch, Matthias Sjerps
Neurobiological mechanisms: Hans Rutger Bosker, Oded Ghitza, Peter Hagoort, Judith Holler, Ole Jensen, Anne Kösem, Ashley Lewis, David Peeters, Lars Riecke
- What are the psychological control mechanisms that underlie the regulation of speech rate?
Hans Rutger Bosker, Mirjam Ernestus, Antje Meyer, Joe Rodd (PhD student), Louis Ten Bosch
- How do speech rate perception and speech rate production interact?
Hans Rutger Bosker, Merel Maslowski (PhD student), Antje Meyer
- What is the role of (enhanced) temporal modulations in speech-in-noise production and perception?
Hans Rutger Bosker, Martin Cooke
- How do signals indicating that the temporal planning of speech has broken down (e.g., disfluencies) influence speech-induced prediction and lexical activation?
Hans Rutger Bosker, Martin Corley, Geertje Van Bergen
How do we conduct our research?
To study speech production, we use speech elicitation paradigms such as (multiple) picture naming, reading aloud, and Lombard tests. We also apply eye-tracking to study the temporal link between planning a word (looking time) and speaking it (speech onset). Furthermore, we develop computationally implemented models of speech planning and test them against empirical data.
To study speech perception, we use speech categorisation experiments with manipulated speech signals (what’s this word?), speech-in-noise intelligibility experiments (what’s this sentence?), and psycholinguistic paradigms such as repetition priming (e.g., lexical decision). We also use eye-tracking (visual world paradigm) to study the time course of speech-induced lexical prediction and integration. Finally, much of the perception work in the group is framed neurobiologically, in terms of the entrainment (phase-locking) of endogenous brain oscillations to the slow amplitude modulations of the speech signal. We therefore also use neuroimaging methods (MEG, fMRI, tACS, EEG) and psychoacoustics to uncover the neurobiological mechanisms involved in the temporal decoding of speech, with a particular focus on oscillatory dynamics.
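As a minimal illustration of the "slow amplitude modulations" that oscillations are thought to phase-lock to, the sketch below extracts the amplitude envelope of a synthetic amplitude-modulated signal and confirms that the envelope fluctuates at the modulation (roughly syllabic) rate. This is a didactic simplification using a rectify-and-smooth envelope on simulated data; real MEG/EEG pipelines typically use band-pass filter banks and the Hilbert transform.

```python
import numpy as np

def amplitude_envelope(signal, fs, cutoff_hz=8.0):
    """Slow amplitude envelope via rectification followed by
    moving-average smoothing (a didactic stand-in for band-pass
    filtering plus the Hilbert transform)."""
    rectified = np.abs(signal)
    win = max(1, int(fs / cutoff_hz))
    kernel = np.ones(win) / win
    return np.convolve(rectified, kernel, mode="same")

# Simulated "speech-like" signal: a 1 kHz carrier whose amplitude is
# modulated at 4 Hz, roughly the syllable rate of natural speech.
fs = 16000
t = np.arange(0, 2.0, 1 / fs)
modulator = 0.5 * (1 + np.sin(2 * np.pi * 4 * t))  # 4 Hz, in [0, 1]
carrier = np.sin(2 * np.pi * 1000 * t)
envelope = amplitude_envelope(modulator * carrier, fs)

# The recovered envelope should peak spectrally at ~4 Hz - the slow
# modulation rate that cortical oscillations are hypothesised to track.
spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
freqs = np.fft.rfftfreq(len(envelope), 1 / fs)
peak_hz = freqs[np.argmax(spectrum)]
print(round(peak_hz, 1))  # prints: 4.0
```

In entrainment analyses, it is this slow envelope (rather than the raw waveform) whose phase relationship with band-limited neural activity is quantified.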
Internal and external collaborators
Giulio Severijnen (MSc intern), Vilde Reksnes (MSc intern)
Martin Cooke (Ikerbasque, Basque Science Foundation, Bilbao, Spain)
Martin Corley (University of Edinburgh)
Nivja De Jong (Leiden University)
Mirjam Ernestus (Radboud University)
Oded Ghitza (Boston University)
Ole Jensen (University of Birmingham)
Anne Kösem (Lyon University)
Hugo Quené (Utrecht University)
Eva Reinisch (Ludwig Maximilian University Munich)
Lars Riecke (Maastricht University)
Louis Ten Bosch (Radboud University)
Rik Does, Wibke Naumann, Anna Ravenschlag, Momo Yamamura, Jeonga Kim, Marjolein Van Os, Marie Stadtbäumer, Rebecca Wogan
- Juggling Act: Language and Cognitive Processes
Real-world language combines production with comprehension in conversation. This is supported by a mental juggling act that allows us to ‘perform’ language in a multi-tasking context (listening while preparing to speak, predicting while listening) by recruiting cognitive processes such as executive control and memory. This cluster seeks to uncover how the juggling act of language works: we study how speaking and listening are coordinated in conversational contexts, and how both are supported by domain-general cognitive mechanisms. We use a wide variety of tools to do so, including behavioural experiments, EEG, eye-tracking, and computational modelling.
Laurel Brehm (Cluster leader)
Federica Bartolozzi (PhD student)
Caitlin Decuyper (PhD student)
Jieying He (PhD student)
Jeroen van Paridon (PhD student)
Aitor San Jose (PhD student)
Eirini Zormpa (PhD student)
Marwa Mekni Toujani
In this cluster, we are looking for answers to the following questions:
- How are the processes of speech planning and listening related to each other, and how do they differ?
- What is the role of attention in speaking and listening?
- What are the constraints on the scheduling of comprehension and production processes in dialogue? Can interlocutors put one process "on hold" to prioritise the other?
- What cognitive abilities do speakers draw on to select and sequence the right words in the right structures (and avoid the wrong ones)?
- How do changes in modalities (e.g. speaking vs listening; listening vs reading) affect language learning and memory for language?
Sara Iacozza: Exploring social biases in language processing
Limor Raviv: Language and society: How social pressures shape grammatical structure
Amie Fairs: Linguistic dual-tasking: Understanding temporal overlap between production and comprehension
Johanne Tromp: Indirect request comprehension in different contexts
Nina Mainz: Vocabulary knowledge and learning: Individual differences in adult native speakers
Susanne Jongman: Sustained attention in language production
- The Cultural Brain
The Cultural Brain research group, led by Falk Huettig, investigates how cultural inventions – such as written words, numbers, music, and belief systems – shape our mind and brain from the day we are born.
Our research is divided into three themes (the Literate Brain, the Predictive Brain, and the Multimodal Brain), each of which provides a unique window onto the culturally shaped mind.
We use behavioural measures, functional and structural neuroimaging techniques, and computational modelling to help us answer the central question: To what extent does culture determine what it means to think as a human?
For more information about our research team and current projects, visit the Cultural Brain research group page.
- Individual Differences in Language Processing
The ‘Individual Differences in Language Processing’ (IndividuLa) project is largely funded by the Language in Interaction consortium, which brings together 70 researchers from eight universities and one research institute within the Netherlands to understand the unique capacity for language. The goal of this research programme is to account for, and understand, the balance between universality and variability at all relevant levels of the language system, and its interplay with other cognitive systems such as memory, action, and cognitive control.
Within the Language in Interaction consortium, IndividuLa is part of the Big Question 4 project – a large effort, led by Antje Meyer and James McQueen, to map out and understand individual differences in language processing and language learning.
The goal of IndividuLa is to administer a battery of tests targeting linguistic knowledge (e.g., vocabulary size, grammar rule knowledge), linguistic processing skills (e.g., word production/comprehension, sentence production/comprehension), and general cognitive skills (e.g., processing speed, working memory) to a demographically representative group of 1000 Dutch adults aged between 18 and 30. DNA will be obtained from all participants and used for genome-wide genotyping. About a third of the sample will also take part in neuroimaging studies, in order to map variation in neurobiology across the population.
We will use advanced statistical modelling to derive the underlying core dimensions of linguistic ability, to situate each participant in a multidimensional skill space that maps population variation, and to determine how these skills map onto the structure and function of the underlying brain circuitry.
Integrating our new sample with Nijmegen’s existing Brain Imaging Genetics cohorts, we will carry out focused investigations of genes and biological pathways that have been previously implicated in language ability, test how polygenic scores relate to performance on the task battery, and perform mediation analyses to bridge genes, brains and cognition.
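As a toy illustration of deriving "core dimensions" from a test battery: the sketch below simulates scores on six hypothetical tests generated by two latent skills, then recovers a low-dimensional skill space. This is a sketch under strong simplifying assumptions - simulated data, made-up loadings, and plain PCA standing in for the project's actual (more sophisticated) statistical modelling.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated battery: 200 participants x 6 tests. Two latent skills
# (hypothetical "knowledge" and "speed" factors) generate the scores.
n = 200
knowledge = rng.normal(size=n)
speed = rng.normal(size=n)
loadings = np.array([          # made-up loadings, illustration only
    [0.9, 0.1],  # vocabulary
    [0.8, 0.2],  # grammar
    [0.7, 0.3],  # sentence comprehension
    [0.2, 0.9],  # processing speed
    [0.3, 0.8],  # lexical decision
    [0.1, 0.7],  # working-memory span
])
scores = np.column_stack([knowledge, speed]) @ loadings.T
scores += 0.3 * rng.normal(size=scores.shape)  # test-specific noise

# PCA: centre the scores, then eigendecompose the covariance matrix.
centred = scores - scores.mean(axis=0)
cov = centred.T @ centred / (n - 1)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]              # descending variance
explained = eigvals[order] / eigvals.sum()

# Two components should capture most of the variance, recovering the
# two latent dimensions; each participant's coordinates on them are
# their position in the multidimensional "skill space".
skill_space = centred @ eigvecs[:, order[:2]]
print(round(explained[:2].sum(), 2))
```

The same logic - many noisy observed measures, few underlying dimensions - is what motivates administering a broad battery rather than any single test.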
IndividuLa started in January 2017. Since then, we have developed and extensively piloted the battery of language and cognitive skills tests in diverse samples of participants. The main study will commence in January 2019.
Christian Beckmann (Principal investigator)
Marjolijn Dijkhuis (Research assistant)
Simon Fisher (Principal investigator)
Peter Hagoort (Principal investigator)
Florian Hintz (Cluster coordinator)
Vera van ’t Hoff (Research assistant)
Christina Isakoglou (PhD student)
Bob Kapteijns (Research assistant)
Xin Liu (Postdoctoral researcher)
James McQueen (Principal investigator)
Antje Meyer (Principal investigator)
Olha Shkaravska (Programmer)
Marc Brysbaert (Ghent University)
Clyde Francks (Max Planck Institute for Psycholinguistics)
Mante Nieuwland (Max Planck Institute for Psycholinguistics)
Sascha Schroeder (Goettingen University)
Beate St Pourcain (Max Planck Institute for Psycholinguistics)
- Research Tools