Laurel Ellen Brehm



There is more than one way to convey most ideas using language: almost any picture could be described with multiple words (it's a couch / a sofa / Mom's favorite place to sit) and with multiple grammatical forms (the dog is being chased by the cat / the cat chases the dog; the staff is/are on strike). This means that speakers must juggle multiple possible representations while preparing to speak, and listeners must be prepared for more than one candidate utterance from a speaker.

A lot of language use also occurs in conversation, which means that one often needs to plan an utterance while simultaneously listening to an interlocutor. To be resilient speakers and listeners, then, our cognitive faculties must allow us to simultaneously represent -- and navigate between -- multiple ideas and multiple linguistic plans.

My group (the Juggling Act cluster) focuses on how individuals juggle multiple simultaneous representations in comprehension and production. In my own work, I focus on how individuals integrate multiple candidate representations of meaning (words, ideas) and form (syntax, phonology) over time in language production and comprehension, with support from cognitive mechanisms such as memory and attention.

I test this using a variety of methods: eye-tracking (in production and comprehension, with one or two participants at a time, or co-registered with ERP), reaction-time and other behavioral studies, and computational modeling.

Here are some questions I have worked on recently:

  • What do we need to mentally represent while listening and speaking in a single-task or in a conversational (dual-task) context? How does this vary based upon the communicative goals of the situation?
  • How do speech errors (agreement errors, lexical choice errors) reflect mental representations of meaning and structure?
  • How do listeners interpret / process / represent errors or variable forms? How does this change with experience?
  • What memory mechanisms does language recruit? Does this differ between production and comprehension? (with Eirini Zormpa)
  • How does splitting one's attention between speaking and listening affect production? What planning strategies allow us to do this successfully -- and what causes us to fail? (with Jieying He)
  • In what way(s) does cognitive control contribute to how fast/slow we can produce hard materials? (with Aitor San José)

In my teaching, I strive to balance content with practical skills: to use a skill properly (e.g., critical thinking, statistical analysis, experimental design), one needs to situate it in a context in which it is useful. See below for some recent courses I have taught.

2019 (2020 cancelled) -- Mixed Effects Models. Co-Instructor (with Phillip Alday).
Radboud University Summer School

2019 -- Critical Peer Review. Co-Instructor (with Sonja Vernes).
Max Planck Institute / IMPRS

2019, 2020 -- Data Visualization. Instructor.
Max Planck Institute / IMPRS


Selected materials from courses and one-off workshops appear on GitHub.

My personal GitHub:

Work done with RLadies Nijmegen:


Cross-validation for model selection

Code from Brehm & Meyer (under review): 'Planning when to say'.

Functions (.R) --- Vignette (.R)
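The linked R functions aren't reproduced here, but the core idea of cross-validation for model selection can be sketched in a few lines (illustrative Python, not the paper's code): fit each candidate model on k-1 folds, score it on the held-out fold, and prefer the model with the lower average held-out error.

```python
import numpy as np

def kfold_cv_mse(x, y, degree, k=5, seed=0):
    """Mean held-out squared error for a polynomial fit of the given degree."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(x))
    folds = np.array_split(idx, k)
    errors = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        coefs = np.polyfit(x[train], y[train], degree)   # fit on k-1 folds
        pred = np.polyval(coefs, x[test])                # predict held-out fold
        errors.append(np.mean((y[test] - pred) ** 2))
    return float(np.mean(errors))

# Toy data: a quadratic trend plus noise.
rng = np.random.default_rng(1)
x = np.linspace(0, 1, 60)
y = 2 * x**2 + rng.normal(0, 0.05, 60)

# The model with lower held-out error is preferred.
mse_linear = kfold_cv_mse(x, y, degree=1)
mse_quadratic = kfold_cv_mse(x, y, degree=2)
```

Because the error is computed on data the model never saw, the more flexible model wins only when its extra structure is real rather than overfitting.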


Piecewise regression for determining category structure of predictors

See Brehm & Goldrick (2017), 'Distinguishing discrete and gradient category structure in language'.


Functions (.R) --- Demo (.R)
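As a rough illustration of the technique (illustrative Python, not the paper's code), a piecewise (broken-stick) regression can be fit by adding a hinge term whose coefficient captures the change in slope, then scanning candidate breakpoints for the best fit:

```python
import numpy as np

def piecewise_fit(x, y, breakpoint):
    """Least-squares fit of y = b0 + b1*x + b2*max(x - breakpoint, 0).
    The coefficient b2 estimates the change in slope at the breakpoint."""
    hinge = np.maximum(x - breakpoint, 0.0)
    X = np.column_stack([np.ones_like(x), x, hinge])
    coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coefs
    return coefs, float(np.sum(resid ** 2))

def best_breakpoint(x, y, candidates):
    """Pick the candidate breakpoint with the lowest residual sum of squares."""
    fits = [(piecewise_fit(x, y, bp)[1], bp) for bp in candidates]
    return float(min(fits)[1])

# Toy data: slope changes from 0 to 2 at x = 0.5.
rng = np.random.default_rng(2)
x = np.linspace(0, 1, 100)
y = 2 * np.maximum(x - 0.5, 0) + rng.normal(0, 0.02, 100)
bp = best_breakpoint(x, y, candidates=np.linspace(0.2, 0.8, 25))
```

A reliable, well-located breakpoint is evidence for discrete structure in the predictor; a fit no better than a single smooth line suggests gradient structure.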

User-defined orthogonal code checker

An Excel document for checking whether user-defined contrast codes are orthogonal.

Link to Workbook (.xlsx)
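The workbook itself isn't reproduced here, but the underlying check is simple: with equal cell sizes, two contrast columns are orthogonal when their dot product is zero, and each centered contrast should sum to zero. A sketch of the same check in Python (function and variable names are mine, not from the workbook):

```python
import numpy as np

def contrasts_orthogonal(codes, tol=1e-9):
    """Check that user-defined contrast columns are centered and pairwise orthogonal.
    `codes` is a conditions-by-contrasts matrix of contrast weights."""
    codes = np.asarray(codes, dtype=float)
    # Each contrast column should sum to zero (centered).
    centered = np.all(np.abs(codes.sum(axis=0)) < tol)
    # Dot products of distinct columns (off-diagonal of the Gram matrix) must be zero.
    gram = codes.T @ codes
    off_diag = gram - np.diag(np.diag(gram))
    return bool(centered and np.all(np.abs(off_diag) < tol))

# Helmert-style codes for three conditions: centered and orthogonal.
helmert = [[-1, -1],
           [ 1, -1],
           [ 0,  2]]

# Treatment (dummy) codes: neither centered nor orthogonal.
dummy = [[0, 0],
         [1, 0],
         [0, 1]]
```

For example, `contrasts_orthogonal(helmert)` passes while `contrasts_orthogonal(dummy)` fails, which is why dummy coding changes the interpretation of other effects in a model.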


Why do kids make certain types of speech errors?

A column for Babel magazine's "Ask A Linguist" feature. In English.

What are the tools we use for research on language production?

A YouTube video explaining how to 'see sound' (with spectrograms, obviously!). In Dutch.



Last updated 15 June 2020

