Language and Predictive Computation
How do we understand language? What does the brain actually do when we read or hear a sentence? And how is it that Large Language Models — trained on nothing but next-word prediction — have not only mastered language, but also emerged as the most accurate models of human brain responses to language?
In the LPC Group, we use the tools of modern AI to model language in the human mind and brain. We also run the logic in reverse: drawing on what psychology and neuroscience teach us about human language processing, we build language models constrained by the human cognitive architecture.
Ultimately, we aim to understand how the human brain learns and represents language, and to build more cognitively faithful models of human language processing.