
People track when talkers say ‘uh’ to predict what comes next

Speakers tend to say ‘uh’ before uncommon words (‘uh… automobile’) rather than common words (‘car’). In a new eye-tracking study, researchers from the Max Planck Institute for Psycholinguistics show that listeners use this information to predict an uncommon word upon hearing ‘uh’. Moreover, when an ‘atypical’ speaker says ‘uh’ before common words (‘uh… car’), listeners learn to predict common words after ‘uh’ – but only with a native speaker.

Hans Rutger Bosker

Spontaneous conversation is riddled with disfluencies such as pauses and ‘uhm’s: on average, people produce six disfluencies every 100 words. But disfluencies do not occur randomly. Instead, ‘uh’ typically occurs before ‘hard-to-name’ low-frequency words (‘uh… automobile’). Previous experiments led by Hans Rutger Bosker from the Max Planck Institute for Psycholinguistics have shown that people can use disfluencies to predict upcoming low-frequency words. But Bosker and his colleagues went one step further. They tested whether listeners would actively track the occurrence of ‘uh’, even when it appeared in unexpected places.

Click on uh... the igloo

The researchers used eye-tracking, which measures where people look on a screen. Two groups of Dutch participants saw two images (for instance, a hand and an igloo) and heard both fluent and disfluent instructions. However, one group heard a ‘typical’ talker say ‘uh’ before ‘hard-to-name’ low-frequency words (“Click on uh… the igloo”), while the other group heard an ‘atypical’ talker say ‘uh’ before ‘easy-to-name’ high-frequency words (“Click on uh… the hand”). Would people in this second group track the unexpected occurrences of ‘uh’ and learn to look at the ‘easy-to-name’ object?

As expected, participants listening to the ‘typical’ talker already looked at the igloo upon hearing the disfluency (‘uh…’), that is, well before hearing the word ‘igloo’. Interestingly, people listening to the ‘atypical’ talker learned to adjust this ‘natural’ prediction: upon hearing a disfluency (‘uh…’), they learned to look at the common object, even before hearing the word itself (‘hand’). “We take this as evidence that listeners actively keep track of when and where talkers say ‘uh’ in spoken communication, adjusting what they predict will come next for different talkers”, concludes Bosker.

Speakers with a foreign accent

Would listeners also adjust their expectations with a non-native speaker? In a follow-up experiment, the same sentences were spoken by someone with a heavy Romanian accent. In this experiment, participants did learn to predict uncommon objects from a ‘typical’ non-native talker (saying ‘uh’ before low-frequency words). However, they did not learn to predict high-frequency referents from an ‘atypical’ non-native talker (saying ‘uh’ before high-frequency words) – even though the sentence materials were identical in the native and non-native experiments.

Geertje van Bergen, co-author on the paper, explains: “This probably indicates that hearing a few atypical disfluent instructions (e.g., the non-native talker saying ‘uh’ before common words like ‘hand’ and ‘car’) led listeners to infer that the non-native speaker had difficulty naming even simple words in Dutch. As such, they presumably took the non-native disfluencies to not be predictive of the word to follow – in spite of the clear distributional cues indicating otherwise”. This finding reveals an interplay between ‘disfluency tracking’ and ‘pragmatic inferencing’: we only track disfluencies if we infer from the talker’s voice that the talker is a ‘reliable’ uhm’er.

A hot topic in psycholinguistics

According to the authors, this is the first evidence of distributional learning in disfluency processing. “We’ve known about disfluencies triggering prediction for more than 10 years now, but we demonstrate that these predictive strategies are malleable. People actively track when particular talkers say ‘uh’ on a moment-by-moment basis, adjusting their predictions about what will come next”, explains Bosker. Distributional learning has been a hot topic in psycholinguistics in the past few years. “We extend this field with evidence for distributional learning of metalinguistic performance cues, namely disfluencies – highlighting the wide scope of distributional learning in language processing.”



Bosker, H. R., Van Os, M., Does, R., & Van Bergen, G. (2019). Counting 'uhm's: how tracking the distribution of native and non-native disfluencies influences online language comprehension. Journal of Memory and Language. Advance online publication. doi:10.1016/j.jml.2019.02.006.



About MPI

This is the MPI

The Max Planck Institute for Psycholinguistics is an institute of the German Max Planck Society. Our mission is to undertake basic research into the psychological, social and biological foundations of language. The goal is to understand how our minds and brains process language, how language interacts with other aspects of mind, and how we can learn languages of quite different types.

The institute is situated on the campus of the Radboud University. We participate in the Donders Institute for Brain, Cognition and Behaviour, and have particularly close ties to that institute's Centre for Cognitive Neuroimaging. We also participate in the Centre for Language Studies. A joint graduate school, the IMPRS in Language Sciences, links the Donders Institute, the CLS and the MPI.


