When we comprehend spoken language, we make use of lots of different pieces of information. Physically speaking, speech is an acoustic signal – but we can combine this acoustic information with our previous knowledge and extract meaning from the acoustics. How exactly do we do this? In her doctoral thesis, Greta Kaufeld asked how listeners combine different pieces of information in order to generate meaning from sound.
She conducted EEG and eye-tracking experiments in which people listened to sentences that had been modified in different ways: for example, by changing the properties of the acoustic signal, or by varying the amount of structure and meaning that could be generated. This allowed her to study in more detail what kinds of information listeners draw on when they comprehend spoken language. Her results showed that listeners are actually very flexible in how they combine different pieces of information, and that they can quickly adapt when faced with uncertainty.