The Representation and Computation of Structure (RepCom) group

Our brains turn vibrations in the air (i.e. speech) into complex meaning (i.e. linguistic structures we perceive during language comprehension). Moreover, we can easily transform the complex meanings in our heads back into vibrations in the air (i.e. via language production). On top of all that, we often say and understand things that we have never heard before.

We can do this because human language is compositional, a characteristic that sets it apart from other perception-action systems in the mind and brain, but that also makes language difficult to account for within contemporary models of cognition and from a biological systems perspective. We can understand and produce complex meanings through the structure of language, yet we know very little about how this actually happens.

The Representation and Computation of Structure (RepCom) group works towards unifying a basic insight from linguistic theory, namely that language is structured, with the currency of neural computation. We attempt to reconcile the powerful core properties of linguistic structure with principles from cognitive psychology, memory, network computation, and neurophysiology in order to develop a theory of how linguistic structure and meaning arise in the mind and brain and underlie both speaking and listening.


Members

Andrea E. Martin (research leader)
Phillip Alday
Hans Rutger Bosker
Antje Meyer
Fan Bai (PhD student)
Greta Kaufeld (PhD student)


The big questions

In the RepCom group, we focus on developing a mechanistic theory, grounded in neurophysiological principles of computation, of how linguistic structures are represented in language production and comprehension. Few contemporary theories and models of language processing attempt to explain phenomena in both production and comprehension, and fewer still offer mechanistic models with neurophysiological and neurobiological plausibility.

In the RepCom group, we ask questions like:

  1. How do we generate higher-level structures (e.g., phrases and sentences) from component parts (e.g., morphemes and words)?
  2. Which of the mental representations and processing mechanisms that carry out (1) are common to production and comprehension? Which are distinct?
  3. Can the mechanisms involved in language processing be accounted for by, or decomposed into, generalised sub-routines? How might these be realised in a neurophysiological system?
  4. How do finite neural systems like brains achieve the limitless expressive power of human language?
  5. How can we better link the neural oscillations observed during speech and language processing to the representations that seem to underlie production and comprehension?


Research projects

We are currently working on the following projects:

  • How are abstract linguistic units (lexical, grammatical, and semantic knowledge) encoded in brain rhythms during spoken language comprehension?

Greta Kaufeld (PhD student), Hans Rutger Bosker, Andrea E. Martin

  • How do sensory (bottom-up, exogenous) and knowledge-related (top-down, endogenous) signals integrate and trade off during language processing?

Hans Rutger Bosker, Andrea E. Martin

  • How do the "building blocks" of abstract linguistic units (e.g., lexical and prosodic stress) bootstrap higher-level linguistic structures in brain rhythms?

Phillip Alday, Andrea E. Martin

  • How are units of meaning assembled for production and comprehension? What role does statistical learning play?

Fan Bai (PhD student), Andrea E. Martin, Antje Meyer

  • What properties are necessary for theories and models to compute the kinds of structures language requires? How can these systems be realised in the mind and brain?

Andrea E. Martin

  • Can a single computational architecture account for the similarities and differences between speaking and listening? Which mechanisms and representations are key in each modality, and which differ between the two?

Andrea E. Martin, Antje Meyer


How do we conduct our research?

In the RepCom group we develop cutting-edge methods and use them to tackle our research questions. We primarily use behavioural measures (reaction times, judgements, and eye movements), computational modelling, and electrophysiology (magnetoencephalography (MEG) and electroencephalography (EEG)) to understand how neural oscillations might underlie both speaking and listening and, specifically, how oscillations might encode the structures and meanings discussed above.
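
As a concrete, simplified illustration of one analysis this involves (a toy sketch with simulated data, not the group's actual pipeline): in frequency-tagging designs, words are presented at a fixed rate, and spectral peaks at the slower phrase and sentence rates are taken as a signature of structure building (cf. Ding et al., 2016). All rates and amplitudes below are illustrative assumptions.

  # Toy frequency-tagging analysis: if listeners build phrases and
  # sentences from words presented at a fixed rate, power should peak
  # not only at the word rate but also at the slower phrase and
  # sentence rates. All rates and amplitudes here are illustrative.
  import numpy as np
  from scipy.signal import welch

  fs = 250.0                    # sampling rate in Hz (assumed)
  t = np.arange(0, 60, 1 / fs)  # one minute of simulated recording

  rng = np.random.default_rng(0)
  signal = (1.0 * np.sin(2 * np.pi * 4 * t)       # word rate (4 Hz)
            + 0.5 * np.sin(2 * np.pi * 2 * t)     # phrase rate (2 Hz)
            + 0.3 * np.sin(2 * np.pi * 1 * t)     # sentence rate (1 Hz)
            + 2.0 * rng.standard_normal(t.size))  # background noise

  # Long Welch windows give the fine frequency resolution needed to
  # separate peaks that are only 1 Hz apart.
  freqs, power = welch(signal, fs=fs, nperseg=int(10 * fs))
  for f_target in (1, 2, 4):
      idx = np.argmin(np.abs(freqs - f_target))
      print(f"power at {f_target} Hz: {power[idx]:.3f}")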


External collaborators and former members

External collaborators
Jonathan R. Brennan (University of Michigan)
Leonidas A. A. Doumas (University of Edinburgh)
Patrick Sturt (University of Edinburgh)

Former members
Wibke Naumann (BA intern)
Anna Ravenschlag (MA intern)
Sarah von Grebmer zu Wolfsthurn (MA intern)

TEMPoral Organisation of Speech (TEMPOS)

How is it possible that we can have a proper conversation with someone even if that someone is talking very fast, produces uhm’s all the time, or has to shout over several other talkers in a noisy café? How is it possible that we seem to plan and produce words effortlessly within a fraction of a second?

Having a simple conversation often seems rather easy, but on closer inspection it takes place under substantial time pressure. Speaking too slowly, too late, or too early can result in disrupted communication. At the same time, listeners have to, for instance, keep track of the speech rate of a given talker, even in noisy acoustic surroundings (e.g., in busy traffic). In this research group, we are interested in how talkers manage to produce the right words at the right time and in how listeners manage to understand speech produced at different rates and in noisy environments.


Members

Hans Rutger Bosker (research leader)
Merel Maslowski (PhD student)
Joe Rodd (PhD student)
Greta Kaufeld (PhD student)
Andrea E. Martin


Vision

Speech production takes place under considerable time pressure: speaking too early, too late, or too slowly can seriously disrupt spoken communication. At the same time, speech perception involves the decoding of a fleeting communicative signal with substantial temporal variation. In the TEMPOS group, we investigate how speakers control the temporal encoding of a spoken communicative message (speech planning), and how listeners manage to successfully decode this transitory speech signal in real-time (speech perception). For example, we develop and test computational models of speech planning in an attempt to account for short-term regulation of speech rate. Also, using neuroimaging, psychoacoustics, and perception experiments, we work towards a neurobiologically plausible framework of speech rate normalisation in speech perception.


The big questions

The work we do as part of the TEMPOS group contributes to a better understanding of how spoken communication can take place so smoothly. Spoken utterances are timed very carefully, but few psycholinguistic models of speech production actually explain how, for instance, talkers regulate their speech rate. Listeners are capable of successfully understanding speech produced at various rates, yet the psycholinguistic and neurobiological mechanisms by which they do so are not well understood. Examining the temporal encoding of speech (in speech planning) and its temporal decoding (in speech perception) in parallel also uniquely allows us to study how these two processes, production and perception, interact.


Research projects

We are currently working on the following research projects:

  • What are the psychological and neurobiological mechanisms underlying how listeners normalise speech sounds for different speech rates?

Psychological mechanisms: Hans Rutger Bosker, Greta Kaufeld (PhD student), Andrea E. Martin, Eva Reinisch, Matthias Sjerps

Neurobiological mechanisms: Hans Rutger Bosker, Oded Ghitza, Peter Hagoort, Judith Holler, Ole Jensen, Anne Kösem, Ashley Lewis, David Peeters, Lars Riecke

  • What are the psychological control mechanisms that underlie the regulation of speech rate?

Hans Rutger Bosker, Mirjam Ernestus, Antje Meyer, Joe Rodd (PhD student), Louis Ten Bosch

  • How do speech rate perception and speech rate production interact?

Hans Rutger Bosker, Merel Maslowski (PhD student), Antje Meyer

  • What is the role of (enhanced) temporal modulations in speech-in-noise production and perception?

Hans Rutger Bosker, Martin Cooke

  • How do signals indicating that the temporal planning of speech has broken down (e.g., disfluencies) influence speech-induced prediction and lexical activation?

Hans Rutger Bosker, Martin Corley, Geertje Van Bergen


How do we conduct our research?

To study speech production, we use speech elicitation paradigms such as (multiple) picture naming, reading out loud, and Lombard tests. We also apply eye-tracking to study the temporal link between planning a word (looking time) and speaking it (speech onset). Furthermore, we develop computationally implemented models of speech planning and test them on empirical data from experiments.

To study speech perception, we use speech categorisation experiments with manipulated speech signals (what’s this word?), speech-in-noise intelligibility experiments (what’s this sentence?), and psycholinguistic paradigms such as repetition priming (e.g., in a lexical decision task). We also use eye-tracking (the visual world paradigm) to study the time course of speech-induced lexical prediction and integration.

Finally, much of the perception work within the group is performed within a neurobiological framework, involving the entrainment (phase-locking) of endogenous oscillations in the brain to the slow amplitude modulations in the speech signal. We therefore also use neuroimaging methods (MEG, fMRI, tACS, EEG) and psychoacoustics to uncover the neurobiological mechanisms involved in the temporal decoding of speech, with a particular focus on oscillatory dynamics.
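
To make the entrainment logic concrete, below is a minimal, self-contained sketch (toy data and assumed parameters throughout; an illustration of the general approach, not our analysis pipeline). It extracts the slow amplitude envelope of a speech signal and quantifies envelope tracking as the spectral coherence between that envelope and a simultaneously recorded brain signal.

  # Illustrative sketch of envelope tracking: extract the slow amplitude
  # envelope of speech and compute its spectral coherence with a brain
  # signal. Real data would need resampling, artefact rejection, etc.
  import numpy as np
  from scipy.signal import hilbert, butter, filtfilt, coherence

  def amplitude_envelope(audio, fs, cutoff_hz=10.0):
      """Broadband amplitude envelope, low-pass filtered to keep only
      the slow (roughly syllabic, < 10 Hz) modulations."""
      env = np.abs(hilbert(audio))            # analytic amplitude
      b, a = butter(4, cutoff_hz / (fs / 2))  # 4th-order low-pass
      return filtfilt(b, a, env)

  # Toy stand-ins for real recordings, both sampled at 500 Hz: a carrier
  # modulated at a syllable-like 4 Hz, and a noisy "EEG" channel that
  # (by construction) tracks the speech envelope.
  fs = 500.0
  t = np.arange(0, 30, 1 / fs)
  speech = np.sin(2 * np.pi * 50 * t) * (1 + np.sin(2 * np.pi * 4 * t))
  env = amplitude_envelope(speech, fs)
  eeg = env + np.random.default_rng(1).standard_normal(t.size)

  freqs, coh = coherence(env, eeg, fs=fs, nperseg=int(4 * fs))
  band = (freqs >= 3) & (freqs <= 5)  # around the 4 Hz modulation rate
  print(f"mean 3-5 Hz coherence: {coh[band].mean():.2f}")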


Internal and external collaborators

Interns
Rebecca Wogan (MSc intern), Giulio Severijnen (MSc intern)

External collaborators
Martin Cooke (Ikerbasque, Basque Science Foundation, Bilbao, Spain)
Martin Corley (University of Edinburgh)
Nivja De Jong (Leiden University)
Mirjam Ernestus (Radboud University)
Oded Ghitza (Boston University)
Ole Jensen (University of Birmingham)
Anne Kösem (Lyon University)
Hugo Quené (Utrecht University)
Eva Reinisch (Ludwig Maximilian University Munich)
Lars Riecke (Maastricht University)
Louis Ten Bosch (Radboud University)

Former members
Rik Does, Wibke Naumann, Anna Ravenschlag, Momo Yamamura, Jeonga Kim, Marjolein Van Os, Marie Stadtbäumer

The Double-Act: Speaking and Listening

We mainly use language to talk to other people and, as such, a lot of real-world language use involves the coordination of production and comprehension. Talking and understanding what others say seem easy, but we know that both speaking and listening require not only knowledge of the language but also attention and executive control.

In this cluster, we examine the processes of speech planning and listening. The key issues that interest us are how speakers and listeners retrieve words from their mental lexicon, how they combine them into larger units, and how these processes are supported by executive control processes. At the same time, we also work on another broad issue, namely the many ways interlocutors affect each other in conversation.

Looking at these issues together leads to new insights into the properties of the linguistic representations and mechanisms underlying speaking and listening, and into the skills people apply when they use language in the lab and in everyday contexts.


Members

Suzanne Jongman
Laurel Brehm
Amie Fairs
Federica Bartolozzi
Jeroen van Paridon
Aitor San José
Jieying He
Antje Meyer


The big questions

In this cluster, we are trying to find answers to the following questions:


  • How do speakers combine words into phrases and sentences? How do they select and sequence the right words in the right structures in order to convey a given message?
  • How do listeners recover the intended meaning from spoken utterances?
  • How are the processes of speech planning and listening related to each other, and how do they differ?
  • What is the role of executive control processes in speaking and listening?
  • What are the constraints on the scheduling of comprehension and production processes in dialogue?
  • Can interlocutors put one process "on hold" to prioritise the other?


Research projects

We are currently working on the following research projects:


  • How do listeners comprehend sentences? How do they cope with errors and ambiguities?

Laurel Brehm

  • What happens in the brain when we plan utterances while listening to others?

Suzanne Jongman

  • Does combining production and comprehension affect implicit memory?

Federica Bartolozzi, Suzanne Jongman, Antje Meyer

  • Can word forms be simultaneously accessed for speaking and for listening?

Amie Fairs, Antje Meyer

  • How are listening and speech planning coordinated in simultaneous interpretation and shadowing?

Jeroen van Paridon

  • How do interlocutors coordinate their utterances in time? Is simulation of the partner's speech planning necessary to achieve temporal precision?

Laurel Brehm, Antje Meyer

  • When and how do listeners predict the end of turns in conversation?

Jieying He, Laurel Brehm, Antje Meyer

  • When and how do speakers exert top-down control when producing language?

Aitor San José, Ardi Roelofs, Antje Meyer


How do we conduct our research?

We use classic psycholinguistic tools such as object naming, action and event descriptions or categorisation, self-paced reading, and sentence judgement tasks. To capture where listeners and speakers direct their attention, we often use eye-tracking. For instance, we have developed a dual-eye-tracking setup in which we can track two conversation partners simultaneously and study how they synchronise their words and their gaze while conversing.

In many studies, we combine behavioural measures with neurobiological ones (especially EEG recordings), for example to assess the engagement of attentional networks in different tasks. Finally, we use computational and statistical modelling to test complex hypotheses and to generate predictions for future experimental work.


External collaborators and former members

External collaborators
Ardi Roelofs
Agnieszka Konopka
Zeshu Shao
Vitória Piai
Sara Bögels
Alexis Hervais-Adelman

Former members
Marwa Mekni Toujani, Linda Taschenberger

Learning, Memory & Adaptation

As language users we must minimally learn, store, and retrieve word forms and the concepts they represent. Within our group, we assume that the core mechanisms and architecture for learning, storing, and retrieving language are universal across language users, but that different constraints on when or how we use or experience language lead to differences in learning, in storage, and in the structure of language itself, i.e., in how language evolves (or adapts). With our research, we aim to understand the relationship between such constraints and the effects they generate.


Members

Alastair Smith (Post-doctoral researcher)
Laurel Brehm (Post-doctoral researcher)
Sara Iacozza (PhD student)
Nina Mainz (PhD student)
Limor Raviv (PhD student)
Merel Wolf (PhD student)
Eirini Zormpa (PhD student)


The big questions

The following questions are at the heart of our research:


  • What are the core cognitive mechanisms and architecture that support language learning, storage and retrieval?
  • How do such mechanisms interact with communicative constraints to shape linguistic structure?
  • How and why do common forms of language use (e.g., speaking vs listening, or listening vs reading) differ in their effects on learning and memory?
  • Can we use such knowledge to accelerate learning and/or improve memory?
  • What drives variation in language knowledge, and are there distinct consequences of greater language knowledge?


Research projects

We are currently working on the following research projects:


  • What are the causes and consequences of variation in vocabulary size?

Nina Mainz, Alastair Smith & Antje Meyer

Individuals vary in the number of words they know. Research shows that the number of words people know is a good predictor of general language performance. With this project, we try to identify what factors cause such variation in vocabulary size and whether there are distinct consequences of simply knowing more words. To explore such questions, we combine individual difference measures and a novel word learning study to test whether knowing more words predicts future word learning. This is complemented by a computational modelling investigation that aims to isolate consequence from cause.


  • When and why does modality affect word learning?

Merel Wolf, Alastair Smith, Antje Meyer & Caroline Rowland

We learn new words either by hearing them or by reading them. In this project, we ask whether the modality (spoken vs written) in which a word is experienced has differential effects on learning and memory. Current research suggests that modality affects learning, but a coherent picture of when and why such differences exist has yet to be established. Using artificial word learning experiments, we try to isolate the factors that may generate differential effects of modality at different levels of the word learning process, and to trace how these effects may change over the course of reading acquisition.


  • Why do we learn more from our ‘in-group’?

Sara Iacozza, Shiri Lev-Ari (Royal Holloway, University of London, U.K.), & Antje Meyer

Social characteristics of the speaker have been shown to alter the perception of what is said, as well as predictions about what is going to be said. Furthermore, we know that arbitrary in-group vs out-group identification affects the recall of information acquired from different groups. With this project, using artificial word learning studies, we aim to identify which aspects of the word learning process are affected by arbitrary in-group vs out-group identification: for example, do we pay more attention to what our ‘in-group’ members do and say, and/or attribute greater importance to the encoding of ‘in-group’-acquired information?


  • How does social structure affect the formation of linguistic structure?

Limor Raviv, Shiri Lev-Ari (Royal Holloway, University of London, U.K.), & Antje Meyer

In this study we examine how social structure can affect the evolving structure of language. The goal of this project is to explore how different aspects of societies (e.g., group size, network structure, and the identity of the learners) shape and affect the emergence of linguistic structure in an artificial language game, simulating the process of cultural transmission and communication over time. We examine whether different grammatical structures emerge in different communities and, more specifically, whether limitations of the learning system interact with social factors such as group size to generate systematic variation in emergent linguistic structure.
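
For readers unfamiliar with this class of methods, the toy simulation below conveys the flavour of such group dynamics (a deliberately minimal sketch: our studies use richer communication games with human participants, and every parameter here is an illustrative assumption). In this classic "naming game", agents converge on a shared word for a single meaning, with group size as the manipulated social factor.

  # Minimal "naming game": agents repeatedly pair up; a successful
  # interaction (the hearer knows the speaker's word) prunes both
  # agents' inventories to that word, so the group drifts towards a
  # shared convention. All parameters are illustrative.
  import random

  def naming_game(n_agents, n_rounds, rng):
      inventories = [set() for _ in range(n_agents)]
      next_word = 0
      for _ in range(n_rounds):
          speaker, hearer = rng.sample(range(n_agents), 2)
          if not inventories[speaker]:          # invent a word if needed
              inventories[speaker].add(next_word)
              next_word += 1
          word = rng.choice(sorted(inventories[speaker]))
          if word in inventories[hearer]:       # success: align on it
              inventories[speaker] = {word}
              inventories[hearer] = {word}
          else:                                 # failure: hearer learns it
              inventories[hearer].add(word)
      return len({w for inv in inventories for w in inv})

  rng = random.Random(0)
  for n in (4, 8, 16):
      leftover = naming_game(n, n_rounds=50 * n, rng=rng)
      print(f"group size {n:2d}: {leftover} competing word(s) remain")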


  • Memory as a consequence of language

Laurel Brehm, Eirini Zormpa & Antje Meyer

In this project, we focus on how the processes entailed in producing and comprehending words, such as lexical access, articulation, and conceptual encoding, affect recognition memory. We examine the relationship between known memory phenomena (such as the generation effect, the production effect, the picture superiority effect, and source monitoring) and language, to see which mechanisms are shared between language and memory.


  • Illusory coordination in language

Laurel Brehm, Eirini Zormpa & Antje Meyer

Within this project, we use the mistakes people make to show how items are represented in the mind. Mis-remembering “pick up” after seeing “pick” and “up” shows that the two words are encoded separately, which informs our account of how sentence structure is mentally represented. Mis-remembering seeing a picture of one dog after seeing another shows that we generalise from a single token to the broader concept.


  • How linguistic structure shapes orthographic transparency, and how orthographic transparency shapes how we read

Alastair Smith, Padraic Monaghan (Lancaster University, U.K.) & Falk Huettig

Within this study, we perform the first quantitative investigation of whether phonological structure determines orthographic transparency, in an analysis that includes over 350 languages. This analysis is combined with computational modelling to explore the impact of variation in orthographic transparency on the emerging reading system. Using a simple triangle model of reading, we examine how processing and representations are shaped by differences in orthographic transparency across the world’s writing systems.


How do we conduct our research?

To carry out our research, we use classical behavioural experiments, e.g., picture naming, picture-word interference, picture matching, recognition memory tests, lexical decision, and semantic and/or phonological priming. We also use eye-tracking techniques, individual-differences studies, artificial and novel word learning experiments, and statistical and computational modelling.

The Cultural Brain

The Cultural Brain research group, led by Falk Huettig, investigates how cultural inventions – such as written words, numbers, music, and belief systems – shape our mind and brain from the day we are born.

Our research is divided into three themes (the Literate Brain, the Predictive Brain, and the Multimodal Brain), each of which provides us with a unique window for exploring the culturally-shaped mind.

We use behavioural measures, functional and structural neuroimaging techniques, and computational modelling to help us answer the central question: To what extent does culture determine what it means to think as a human?

For more information about our research team and current projects, visit the Cultural Brain research group page.

Individual Differences in Language Skills

Florian Hintz, NWO Language in Interaction

The primary goal of this research programme is to account for, and understand, the balance between universality and variability at all relevant levels of the language system, as well as its interplay with other cognitive systems such as memory, action, and cognitive control.

Language in Interaction brings together 70 researchers from eight universities and one research institute within the Netherlands to understand the uniquely human capacity for language.

To find out more about this research programme, visit the Language in Interaction website.
