The Representation and Computation of Structure (REPCOM) group

Our brains turn vibrations in the air (i.e. speech) into complex meaning (i.e. linguistic structures we perceive during language comprehension). Moreover, we can easily transform the complex meanings in our heads back into vibrations in the air (i.e. via language production). On top of all that, we often say and understand things that we have never heard before.

We can do this because human language is compositional, a characteristic that sets it apart from other perception-action systems in the mind and brain, but that also makes language difficult to account for within contemporary models of cognition and from a biological systems perspective. We can understand and produce complex meanings through the structure of language, but we know very little about how this actually happens.

The Representation and Computation of Structure (REPCOM) group works toward unifying a basic insight from linguistic theory - that language is structured - with the currency of neural computation. We attempt to reconcile the powerful core properties of linguistic structure with principles from cognitive psychology, memory, network computation, and neurophysiology in order to develop a theory of how linguistic structure and meaning arise in the mind and brain and underlie both speaking and listening.

 

Members

Andrea E. Martin (research leader)
Phillip Alday
Hans Rutger Bosker
Antje Meyer
Fan Bai (PhD student)
Greta Kaufeld (PhD student)

 

The big questions

In the REPCOM group, we focus on developing a mechanistic theory, grounded in neurophysiological principles of computation, of how linguistic structures are represented in language production and comprehension. Few contemporary theories and models of language processing attempt to explain phenomena in both production and comprehension, and fewer still focus on mechanistic models with neurophysiological and neurobiological plausibility.

In the REPCOM group, we ask questions like:

  1. How do we generate higher-level structures (e.g. phrases and sentences) from component parts (e.g., morphemes and words)?
  2. Which of the mental representations and processing mechanisms that carry out (1) are common to production and comprehension? Which are distinct?
  3. Can the mechanisms involved in language processing be accounted for by, or decomposed into, generalised sub-routines? How might these be realised in a neurophysiological system?
  4. How do finite neural systems like brains achieve the limitless expressive power of human language?
  5. How can we better link neural oscillations to speech and language, and to the representations that seem to underlie production and comprehension?

 

Research projects

We are currently working on the following projects:

  • How are abstract linguistic units (lexical, grammatical, and semantic knowledge) encoded in brain rhythms during spoken language comprehension?

Greta Kaufeld (PhD student), Hans Rutger Bosker, Andrea E. Martin

  • How do sensory (bottom-up, exogenous) and knowledge-related (top-down, endogenous) signals integrate and trade off during language processing?

Hans Rutger Bosker, Andrea E. Martin

  • How do the "building blocks" of abstract linguistic units (e.g., lexical and prosodic stress) bootstrap higher-level linguistic structures in brain rhythms?

Phillip Alday, Andrea E. Martin

  • How are units of meaning assembled for production and comprehension? What role does statistical learning play?

Fan Bai (PhD student), Andrea E. Martin, Antje Meyer

  • What properties are necessary for theories and models to compute the kinds of structures language requires? How can these systems be realised in the mind and brain?

Andrea E. Martin

  • Can a single computational architecture account for the similarities and differences between speaking and listening? What mechanisms and representations are key in each modality and which differ?

Andrea E. Martin, Antje Meyer

 

How do we conduct our research?

In the REPCOM group, we develop cutting-edge methods and use them to tackle our research questions. We primarily use behavioural measures (reaction times, judgments, and eye-movements), computational modelling, and electrophysiology (magnetoencephalography (MEG) and electroencephalography (EEG)) to understand how neural oscillations might underlie both speaking and listening and, specifically, how oscillations might encode the structures and meanings discussed above.
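
As a purely illustrative sketch of that last point, the short Python example below uses simulated data to show how one might look for spectral peaks at word- and phrase-sized timescales in an electrophysiological recording. The signal, the 1 Hz and 4 Hz "linguistic" rates, and all parameters are invented for the example and do not represent the group's actual analysis pipeline.

import numpy as np
from scipy.signal import welch

fs = 500.0                      # sampling rate in Hz (assumed for the example)
t = np.arange(0, 60, 1 / fs)    # one minute of simulated data

# Simulate a channel carrying rhythmic activity at a hypothetical "word" rate
# (4 Hz) and a weaker "phrase" rate (1 Hz), buried in noise.
signal = (0.5 * np.sin(2 * np.pi * 4 * t)
          + 0.3 * np.sin(2 * np.pi * 1 * t)
          + np.random.randn(t.size))

# Welch power spectrum; peaks near 1 Hz and 4 Hz would suggest that slow
# cortical rhythms track phrase- and word-sized units in the input.
freqs, power = welch(signal, fs=fs, nperseg=int(8 * fs))
for target in (1.0, 4.0):
    idx = np.argmin(np.abs(freqs - target))
    print(f"power near {target:.1f} Hz: {power[idx]:.3f}")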

 

External collaborators and former members

External collaborators
Jonathan R. Brennan (University of Michigan)
Leonidas A. A. Doumas (University of Edinburgh)
Patrick Sturt (University of Edinburgh)

Former members
Wibke Naumann (BA intern)
Anna Ravenschlag (MA intern)
Sarah von Grebmer zu Wolfsthurn (MA intern)

TEMPoral Organisation of Speech (TEMPOS)

How is it possible that we can have a proper conversation with someone even if that someone is talking very fast, produces uhm’s all the time, or has to shout over several other talkers in a noisy café? How is it possible that we seem to effortlessly plan and produce words in a fraction of a second?

Having a simple conversation often seems rather easy, but on closer inspection it takes place under substantial time pressure. Speaking too slowly, too late, or too early can result in disrupted communication. At the same time, listeners have to, for instance, keep track of the speech rate of a given talker, even in noisy acoustic surroundings (e.g., in busy traffic). In this research group, we are interested in how talkers manage to produce the right words at the right time and how listeners are capable of understanding speech produced at different rates and in noisy environments.

 

Members

Hans Rutger Bosker (research leader)
Merel Maslowski (PhD student)
Joe Rodd (PhD student)
Greta Kaufeld (PhD student)
Sophie Slaats (PhD student)
Andrea E. Martin

 

Vision

Speech production takes place under considerable time pressure: speaking too early, too late, or too slowly can seriously disrupt spoken communication. At the same time, speech perception involves the decoding of a fleeting communicative signal with substantial temporal variation. In the TEMPOS group, we investigate how speakers control the temporal encoding of a spoken communicative message (speech planning), and how listeners manage to successfully decode this transitory speech signal in real-time (speech perception). For example, we develop and test computational models of speech planning in an attempt to account for short-term regulation of speech rate. Also, using neuroimaging, psychoacoustics, and perception experiments, we work towards a neurobiologically plausible framework of speech rate normalisation in speech perception.

 

The big questions

The work we do as part of the TEMPOS group contributes to a better understanding of how spoken communication can take place so smoothly. Spoken utterances are timed very carefully, but few psycholinguistic models of speech production actually explain how, for instance, talkers regulate their speech rate. Listeners are capable of successfully understanding speech produced at various rates, yet the psycholinguistic and neurobiological mechanisms by which they do so are not well understood. By concurrently examining the temporal encoding of speech (in speech planning) and its temporal decoding (in speech perception), we are also uniquely positioned to study how these two processes (production and perception) interact.

 

Research projects

We are currently working on the following research projects:

  • What are the psychological and neurobiological mechanisms underlying how listeners normalise speech sounds for different speech rates?

Psychological mechanisms: Hans Rutger Bosker, Greta Kaufeld (PhD student), Andrea E. Martin, Eva Reinisch, Matthias Sjerps

Neurobiological mechanisms: Hans Rutger Bosker, Oded Ghitza, Peter Hagoort, Judith Holler, Ole Jensen, Anne Kösem, Ashley Lewis, David Peeters, Lars Riecke

  • What are the psychological control mechanisms that underlie the regulation of speech rate?

Hans Rutger Bosker, Mirjam Ernestus, Antje Meyer, Joe Rodd (PhD student), Louis Ten Bosch

  • How do speech rate perception and speech rate production interact?

Hans Rutger Bosker, Merel Maslowski (PhD student), Antje Meyer

  • What is the role of (enhanced) temporal modulations in speech-in-noise production and perception?

Hans Rutger Bosker, Martin Cooke

  • How do signals indicating that the temporal planning of speech has broken down (e.g., disfluencies) influence speech-induced prediction and lexical activation?

Hans Rutger Bosker, Martin Corley, Geertje Van Bergen

 

How do we conduct our research?

To study speech production, we use speech elicitation paradigms, such as (multiple) picture naming, reading out loud, Lombard tests, etc. We also apply eye-tracking to study the temporal link between planning a word (looking time) and speaking it (speech onset). Furthermore, we develop computationally implemented models of speech planning and test them on empirical data from experiments.

To study speech perception, we use speech categorisation experiments with manipulated speech signals (what’s this word?), speech-in-noise intelligibility experiments (what’s this sentence?), and psycholinguistic paradigms such as repetition priming (e.g., lexical decision task). We also use eye-tracking (visual world paradigm) to study the time-course of speech-induced lexical prediction and integration.

Finally, much of the perception work within the group is performed within a neurobiological framework, involving the entrainment (phase-locking) of endogenous oscillations in the brain to the slow amplitude modulations in the speech signal. We therefore also use neuroimaging methods (MEG, fMRI, tACS, EEG) and psychoacoustics to uncover the neurobiological mechanisms involved in the temporal decoding of speech, with a particular focus on oscillatory dynamics.
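
To make the entrainment logic concrete, here is a minimal Python sketch of one common way to quantify phase-locking between the slow amplitude envelope of speech and a band-limited EEG channel. The signals are simulated, and the band limits and parameters are illustrative assumptions rather than the group's actual analysis code.

import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(x, lo, hi, fs, order=4):
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def phase_locking_value(speech, eeg, fs, band=(3.0, 8.0)):
    """Phase consistency between the slow speech envelope and an EEG band."""
    envelope = np.abs(hilbert(speech))           # amplitude envelope of the speech
    env_band = bandpass(envelope, band[0], band[1], fs)
    eeg_band = bandpass(eeg, band[0], band[1], fs)
    # Circular mean of the instantaneous phase difference:
    phase_diff = np.angle(hilbert(env_band)) - np.angle(hilbert(eeg_band))
    return np.abs(np.mean(np.exp(1j * phase_diff)))   # 0 = no locking, 1 = perfect

# Toy usage: a 100 Hz carrier amplitude-modulated at a syllable-like 5 Hz,
# and an "EEG" channel that follows the 5 Hz modulation with a phase lag.
fs = 250.0
t = np.arange(0, 30, 1 / fs)
speech = (1 + np.sin(2 * np.pi * 5 * t)) * np.sin(2 * np.pi * 100 * t)
eeg = np.sin(2 * np.pi * 5 * t - 0.5) + 0.5 * np.random.randn(t.size)
print(f"phase-locking value: {phase_locking_value(speech, eeg, fs):.2f}")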

 

Internal and external collaborators

Interns:
Giulio Severijnen (MSc intern), Vilde Reksnes (MSc intern)

External collaborators
Martin Cooke (Ikerbasque, Basque Science Foundation, Bilbao, Spain)
Martin Corley (University of Edinburgh)
Nivja De Jong (Leiden University)
Mirjam Ernestus (Radboud University)
Oded Ghitza (Boston University)
Ole Jensen (University of Birmingham)
Anne Kösem (Lyon University)
Hugo Quené (Utrecht University)
Eva Reinisch (Ludwig Maximilian University Munich)
Lars Riecke (Maastricht University)
Louis Ten Bosch (Radboud University)

Former members
Rik Does, Wibke Naumann, Anna Ravenschlag, Momo Yamamura, Jeonga Kim, Marjolein Van Os, Marie Stadtbäumer, Rebecca Wogan

Juggling Act: Language and Cognitive Processes

This project is a merger of two former clusters: ‘Learning, Memory and Adaptation’ (led by Alastair Smith), which focused on the domain-general cognitive mechanisms involved in language learning, processing, and production, and ‘The Double Act’ (led by Suzanne Jongman), which was concerned with the mechanisms that allow the coordination of speaking and listening in conversation. Work in the ‘Juggling Act’ cluster continues both lines of research: we study how speaking and listening are coordinated in conversational contexts, and how interactions with others influence language learning.

Real-world language combines production with comprehension in conversation; this is supported by a mental juggling act that allows us to ‘perform’ language in a multi-tasking context (listening while preparing to speak, predicting while listening) by recruiting cognitive processes such as executive control and memory. This cluster seeks to uncover how the juggling act of language works. We use a wide variety of tools to do so, including behavioural experiments, EEG, eye-tracking, and computational modelling.

Members:

Laurel Brehm (Cluster leader)
Antje Meyer
Federica Bartolozzi (PhD student)
Jieying He (PhD student)
Sara Iacozza (PhD student)
Jeroen van Paridon (PhD student)
Limor Raviv (PhD student)
Aitor San Jose (PhD student)
Merel Wolf (PhD student)
Eirini Zormpa (PhD student)

Collaborators:

Sara Bögels
Matt Goldrick
Alexis Hervais-Adelman
Suzanne Jongman
Agnieszka Konopka
Shiri Lev-Ari
Ashley Lewis
Vitória Piai
Ardi Roelofs
Alastair Smith
Zeshu Shao
Amie Fairs

Former members:

Nina Mainz
Linda Taschenberger
Marwa Mekni Toujani

Big questions:

In this cluster, we are looking for answers to the following questions:

  • How are the processes of speech planning and listening related to each other, and how do they differ?
  • What is the role of attention in speaking and listening?
  • What are the constraints on the scheduling of comprehension and production processes in dialogue? Can interlocutors put one process "on hold" to prioritize the other?
  • How do speakers select and sequence the right words in the right structures (and avoid the wrong ones) in order to convey a given message?
  • How do changes in modalities (e.g. speaking vs listening; listening vs reading) affect language learning and memory for language?
  • What are the mechanisms that cause social variables to affect language learning and transmission?

 

Completed dissertations:

Amie Fairs: Linguistic dual-tasking: Understanding temporal overlap between production and comprehension

Johanne Tromp: Indirect request comprehension in different contexts 

Nina Mainz: Vocabulary knowledge and learning: Individual differences in adult native speakers

Suzanne Jongman: Sustained attention in language production

 

 

The Cultural Brain

The Cultural Brain research group, led by Falk Huettig, investigates how cultural inventions – such as written words, numbers, music, and belief systems – shape our mind and brain from the day we are born.

Our research is divided into three themes (the Literate Brain, the Predictive Brain, and the Multimodal Brain), each of which provides us with a unique window for exploring the culturally-shaped mind.

We use behavioural measures, functional and structural neuroimaging techniques, and computational modelling to help us answer the central question: To what extent does culture determine what it means to think as a human?

For more information about our research team and current projects, visit the Cultural Brain research group page.

Individual Differences in Language Processing

The ‘Individual Differences in Language Processing’ (IndividuLa) project is largely funded by the Language in Interaction consortium. Language in Interaction brings together 70 researchers from eight universities and one research institute within the Netherlands to understand the unique human capacity for language. The goal of this research programme is to account for and understand the balance between universality and variability at all relevant levels of the language system, and its interplay with different cognitive systems such as memory, action, and cognitive control.

Within the Language in Interaction consortium, IndividuLa is part of the Big Question 4 project, a large effort led by Antje Meyer and James McQueen to map out and understand individual differences in language processing and language learning.

The goal of IndividuLa is to apply a battery of tests targeting linguistic knowledge (e.g. vocabulary size, grammar rule knowledge), linguistic processing skills (e.g. word production/comprehension, sentence production/comprehension) and general cognitive skills (e.g. processing speed, working memory) to a demographically representative group of 1000 Dutch adults aged between 18 and 30. DNA will be obtained from all participants and used for genome-wide genotyping. About a third of the sample will also participate in neuroimaging studies in order to map the variation in neurobiology across the population.

We will use advanced statistical modelling to derive underlying core dimensions of linguistic ability, to situate each participant in a multidimensional skill space that maps population variation, and to determine how these skills map onto the structure and function of the underlying brain circuitry.
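
As a rough illustration of what such modelling could look like, the Python sketch below runs an exploratory factor analysis over a simulated participants-by-tests score matrix. The data, the number of factors, and the latent "skills" are invented for the example; the project's actual modelling choices may well differ.

import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_participants, n_tests = 1000, 12

# Simulate test scores driven by two latent skills (e.g. knowledge vs. speed).
latent = rng.normal(size=(n_participants, 2))
loadings = rng.normal(size=(2, n_tests))
scores = latent @ loadings + rng.normal(scale=0.5, size=(n_participants, n_tests))

# Standardise each test, then fit a two-factor model.
z = StandardScaler().fit_transform(scores)
fa = FactorAnalysis(n_components=2, random_state=0).fit(z)

skill_space = fa.transform(z)          # each participant's position, shape (1000, 2)
print(np.round(fa.components_, 2))     # how strongly each test loads on each factor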

Integrating our new sample with Nijmegen’s existing Brain Imaging Genetics cohorts, we will carry out focused investigations of genes and biological pathways that have been previously implicated in language ability, test how polygenic scores relate to performance on the task battery, and perform mediation analyses to bridge genes, brains and cognition.
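
For the mediation step, a very simple regression-based sketch (the product of the gene-to-brain and brain-to-cognition coefficients, with a bootstrap confidence interval) might look as follows. The simulated variables and effect sizes are placeholders, not project data or results.

import numpy as np

rng = np.random.default_rng(1)
n = 1000
polygenic = rng.normal(size=n)                                   # X: polygenic score
brain = 0.4 * polygenic + rng.normal(size=n)                     # M: brain measure
cognition = 0.5 * brain + 0.1 * polygenic + rng.normal(size=n)   # Y: battery score

def slope(y, *predictors):
    """OLS coefficient of the first predictor, with an intercept included."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

def indirect_effect(x, m, y):
    # a-path (X -> M) times b-path (M -> Y, controlling for X).
    return slope(m, x) * slope(y, m, x)

boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    boot.append(indirect_effect(polygenic[idx], brain[idx], cognition[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {indirect_effect(polygenic, brain, cognition):.3f} "
      f"(95% bootstrap CI [{lo:.3f}, {hi:.3f}])")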

IndividuLa started in January 2017. Since then, we have developed and extensively piloted the battery of language and cognitive skills tests in diverse samples of participants. The main study will commence in January 2019.

Members

Christian Beckmann (Principal investigator)
Marjolijn Dijkhuis (Research assistant)
Simon Fisher (Principal investigator)
Peter Hagoort (Principal investigator)
Florian Hintz (Cluster coordinator)
Vera van ’t Hoff (Research assistant)
Christina Isakoglou (PhD student)
Bob Kapteijns (Research assistant)
Xin Liu (Postdoctoral researcher)
James McQueen (Principal investigator)
Antje Meyer (Principal investigator)
Olha Shkaravska (Programmer)

(External) Collaborators

Marc Brysbaert (Ghent University)
Clyde Francks (Max Planck Institute for Psycholinguistics)
Mante Nieuwland (Max Planck Institute for Psycholinguistics)
Sascha Schroeder (Goettingen University)
Beate St Pourcain (Max Planck Institute for Psycholinguistics)

Research Tools

Materials

Decuyper et al. (in preparation). Bank of Standardized Stimuli (BOSS): Dutch names for 1300 photographs.

Duñabeitia, J. A., Crepaldi, D., Meyer, A. S., New, B., Pliatsikas, C., Smolka, E., & Brysbaert, M. (2018). MultiPic: A standardized set of 750 drawings with norms for six European languages. Quarterly Journal of Experimental Psychology, 71(4), 808-816. doi:10.1080/17470218.2017.1310261.

De Groot, F., Koelewijn, T., Huettig, F., & Olivers, C. N. L. (2016). A stimulus set of words and pictures matched for visual and semantic similarity. Journal of Cognitive Psychology, 28(1), 1-15. doi:10.1080/20445911.2015.1101119.

Shao, Z., Roelofs, A., & Meyer, A. S. (2014). Predicting naming latencies for action pictures: Dutch norms. Behavior Research Methods, 46, 274-283. doi:10.3758/s13428-013-0358-6.

Shao, Z., & Stiegert, J. (2016). Predictors of photo naming: Dutch norms for 327 photos. Behavior Research Methods, 48(2), 577-584. doi:10.3758/s13428-015-0613-0.

Methods

Corcoran, A. W., Alday, P. M., Schlesewsky, M., & Bornkessel-Schlesewsky, I. (2018). Toward a reliable, automated method of individual alpha frequency (IAF) quantification. Psychophysiology, 55(7): e13064. doi:10.1111/psyp.13064.

Rodd, J., Bosker, H. R., Ten Bosch, L., & Ernestus, M. (2019). Deriving the onset and offset times of planning units from acoustic and articulatory measurements. The Journal of the Acoustical Society of America, 145(2), EL161-EL167. doi:10.1121/1.5089456.

Shao, Z., Janse, E., Visser, K., & Meyer, A. S. (2014). What do verbal fluency tasks measure? Predictors of verbal fluency performance in older adults. Frontiers in Psychology, 5: 772. doi:10.3389/fpsyg.2014.00772.

Shao, Z., & Meyer, A. S. (2018). Word priming and interference paradigms. In A. M. B. De Groot, & P. Hagoort (Eds.), Research methods in psycholinguistics and the neurobiology of language: A practical guide (pp. 111-129). Hoboken: Wiley.

Veenstra, A., Acheson, D. J., & Meyer, A. S. (2014). Keeping it simple: Studying grammatical encoding with lexically-reduced item sets. Frontiers in Psychology, 5: 783. doi:10.3389/fpsyg.2014.00783.

Annotation Tools

Schillingmann, L., Ernst, J., Keite, V., Wrede, B., Meyer, A. S., & Belke, E. (2018). AlignTool: The automatic temporal alignment of spoken utterances in German, Dutch, and British English for psycholinguistic purposes. Behavior Research Methods, 50(2), 466-489. doi:10.3758/s13428-017-1002-7.

Distributed Annotation System - Joe Rodd (in preparation)
