There is more than one way to convey most ideas using language: almost any picture could be described with multiple words (it's a couch / a sofa / Mom's favorite place to sit) and multiple grammatical forms (the cat chases the dog / the dog is being chased by the cat; the staff is / are on strike). This means that speakers must juggle multiple possible representations while preparing to speak, and listeners must be ready to anticipate more than one candidate utterance from a speaker.
A lot of language use also occurs in conversation, which means that one often needs to plan an utterance while simultaneously listening to an interlocutor. To be resilient speakers and listeners, then, our cognitive faculties must allow us to simultaneously represent -- and navigate between -- multiple ideas and multiple linguistic plans.
My group (the Juggling Act cluster) focuses on how individuals juggle multiple simultaneous representations in comprehension and production. In my own work, I focus on how individuals integrate multiple candidate representations of meaning (words, ideas) and form (syntax, phonology) through time in language production and comprehension, with support from cognitive mechanisms such as memory and attention.
I test this using a variety of methods: eye-tracking (in production and comprehension, with one or two participants simultaneously, or co-registered with ERP), reaction-time and other behavioral studies, and computational modeling.
Here are some questions I have worked on recently:
In my teaching, I strive to balance content with practical skills: in order to properly use a skill (e.g., critical thinking, statistical analysis, experimental design), one needs to situate it in a context in which it is useful. See below for some recent courses I have taught.
2019 (2020 cancelled) -- Mixed-Effects Models. Co-Instructor (with Phillip Alday).
Radboud University Summer School
2019-- Critical Peer Review. Co-Instructor (with Sonja Vernes).
Max Planck Institute / IMPRS
2019, 2020 -- Data Visualization. Instructor.
Max Planck Institute / IMPRS
Selected materials from courses and one-off workshops appear on GitHub.
My personal GitHub: https://github.com/laurelbrehm/Teaching-Tutorials
Work done with RLadies Nijmegen: https://github.com/RLadiesNijmegen
Cross-validation for model selection
Code from Brehm & Meyer (under review): 'Planning when to say'.
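The original cross-validation code lives in the repository above; as a minimal illustration of the general idea (not the code from Brehm & Meyer), here is a sketch in Python with numpy that compares candidate polynomial models on synthetic data by held-out error across k folds, then keeps the model that generalizes best. All data and model choices here are invented for the example.

```python
import numpy as np

def kfold_mse(x, y, degree, k=5, seed=0):
    """Mean held-out squared error for a polynomial fit of a given degree."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(x))
    folds = np.array_split(idx, k)
    errs = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        coefs = np.polyfit(x[train], y[train], degree)   # fit on k-1 folds
        pred = np.polyval(coefs, x[test])                # predict held-out fold
        errs.append(np.mean((y[test] - pred) ** 2))
    return float(np.mean(errs))

# Synthetic data with a mildly quadratic trend
rng = np.random.default_rng(1)
x = rng.uniform(-2, 2, 200)
y = 1.0 + 0.5 * x + 0.3 * x**2 + rng.normal(0, 0.2, 200)

scores = {d: kfold_mse(x, y, d) for d in (1, 2, 3)}
best = min(scores, key=scores.get)  # degree with lowest held-out error
```

Because the held-out folds penalize both underfitting and overfitting, the cross-validated error favors the model whose complexity matches the data-generating process.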
Piecewise regression for determining category structure of predictors
See Brehm & Goldrick (2017), 'Distinguishing discrete and gradient category structure in language'.
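To sketch the piecewise-regression idea (this is an illustrative numpy implementation, not the paper's code), one can grid-search a single breakpoint c and fit y ~ intercept + x + hinge(x - c) by least squares; a well-supported breakpoint suggests discrete rather than gradient structure in the predictor. The data below are synthetic with a true slope change at x = 5.

```python
import numpy as np

def fit_piecewise(x, y, breakpoints):
    """Grid-search one breakpoint: y ~ intercept + x + max(0, x - c)."""
    best = None
    for c in breakpoints:
        hinge = np.maximum(0.0, x - c)
        X = np.column_stack([np.ones_like(x), x, hinge])
        coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
        sse = float(np.sum((y - X @ coefs) ** 2))
        if best is None or sse < best[1]:
            best = (c, sse, coefs)
    return best  # (breakpoint, SSE, [intercept, slope, slope change])

# Synthetic data: slope 0.2 below x = 5, slope 1.5 above it
rng = np.random.default_rng(2)
x = np.sort(rng.uniform(0, 10, 300))
y = np.where(x < 5, 0.2 * x, 0.2 * 5 + 1.5 * (x - 5)) + rng.normal(0, 0.3, 300)

c, sse, coefs = fit_piecewise(x, y, np.linspace(1, 9, 81))
```

Comparing the piecewise fit against a plain linear fit (e.g., by SSE or an information criterion) then indicates whether the breakpoint is warranted.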
User-defined orthogonal code checker
An Excel document for checking whether user-defined contrast codes are orthogonal.
Link to Workbook (.xlsx)
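The workbook itself does this check in Excel; a minimal numpy equivalent (my sketch, not the workbook's formulas) verifies that each contrast column sums to zero and that every pair of columns has a zero dot product, which is what orthogonality of contrast codes requires.

```python
import numpy as np

def contrasts_orthogonal(codes, tol=1e-8):
    """True if all contrast columns are centered (sum to zero) and
    mutually orthogonal (pairwise dot products of zero)."""
    C = np.asarray(codes, dtype=float)      # rows = factor levels, cols = contrasts
    centered = np.all(np.abs(C.sum(axis=0)) < tol)
    gram = C.T @ C                          # pairwise dot products of columns
    off_diag = gram - np.diag(np.diag(gram))
    return bool(centered and np.all(np.abs(off_diag) < tol))

# Helmert-style contrasts for a three-level factor: orthogonal
helmert = [[-1, -1], [1, -1], [0, 2]]
# Dummy-style columns: neither centered nor orthogonal
dummy = [[0, 0], [1, 0], [0, 1]]
```

For example, `contrasts_orthogonal(helmert)` is True while `contrasts_orthogonal(dummy)` is False.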
Why do kids make certain types of speech errors?
A column for Babel magazine's "Ask A Linguist" feature. In English.
What are the tools we use for research on language production?
A YouTube video explaining how to 'see sound' (with spectrograms, obviously!). In Dutch.
Last updated 15 June 2020