Joost Rommers defends PhD on November 25
November 22, 2013
Joost Rommers studied interactions between language and other cognitive functions as listeners and readers processed and anticipated language referring to objects. In one of his experiments, he observed a rapid linking process between language and vision: as participants listened to sentences, their eye movements showed that they began mapping words onto objects at a visual level before they had even heard those words. In a sense, Rommers found that predictive sentence contexts allow listeners to "see" what is coming next.
In another set of experiments, he showed that visual representations do not influence language processing equally across tasks: context is crucial for the activation of visual information.
Links between language, vision and attention
Rommers also looked at the underlying mechanisms and found that predictive language processing is supported in part by mechanisms that are not unique to language. "For instance, participants who predict strongly in sentence contexts also tend to rely strongly on predictive cues in a nonverbal task", Rommers notes. His findings do not confirm strong forms of theories that propose that predictions during comprehension are generated by covert language production: the brain activity underlying reading predictable sentences is distinct from that underlying completing sentences aloud. "Prediction is likely to rely on multiple mechanisms", he concludes. "My thesis underscored context-dependent links between language, vision and attention."