Language and Computation in Neural Systems



The focus of our research group is to understand the computational principles and mechanisms that underlie the representation and processing of human language. Our aim is to develop a theory of how the brain generates human language that draws on principles from across the language sciences, the cognitive and computational sciences, and neuroscience, and that stays faithful to the constraints on neural computation, to the formal properties of language, and to human behavior.


The LaCNS Group is embedded at the MPI and at the Donders Centre for Cognitive Neuroimaging (DCCN) at Radboud University. Our starting point is an interdisciplinary approach that holds that any theory of how the brain represents and processes language must stay faithful to linguistic, computational, neuroscientific, and behavioral principles. Our focus is on the role of “rhythmic computation” as a mechanism for symbolic representation in brain-like systems. We create theoretical models and computational implementations, and then design neuroscientific experiments to test whether the brain solves the problem using similar mechanisms.



Andrea E. Martin

Research Group Leader
Language and Computation in Neural Systems
+31 24 3521585
Andrea [dot] Martin [at] mpi [dot] nl

As humans, we can produce and understand words and sentences that we have never heard before, as long as we (and the words and sentences) play by the rules. For example, although we have specific expectations about what a given word should sound like, we do not require an exact physical copy, as a machine might, in order to recognize it. Nor do we fail to recognize a word if a person previously unknown to us produces it, or uses it in combination with a word we haven’t heard before. Although we might learn a word in a given phrase or sentence context, or might tend to experience that word more often in one context than in another, we are by no means limited to recognizing or using that word only in that context, nor only in related contexts, nor only in the contexts that we have ever experienced it in. 

As such, a marvelous expressive capacity is extended to us - the ability to generate and express formal structures that lead to contextually-specific compositional meanings.  This fact is particularly startling if you consider that human language is processed and generated by a biological organ whose general remit is to be driven by statistical regularities in its environment.  The human brain manifests a paradox when it comes to language: Despite the clear importance of statistical knowledge and distributional information during language use and language acquisition, our everyday language behaviors exemplify an ability to break free from the very (statistical) vice that bootstrapped us up into the realm of natural language users in the first place. While this capacity may seem pedestrian to us, it sets language apart from other perception-action systems and makes language behavior vexingly difficult to account for from a neuroscientist's and computationalist's point of view. 


One of the system properties that underlies this capacity in language is compositionality, whereby units or structures compose (and decompose) into meanings that are determined by the constituent parts and the rules used to combine them. The formal study of language has revealed the pantheon of linguistic forms that the systematicity of mind can take, and the last century has also seen astonishing progress in neuroscience and in artificial intelligence. But all this remarkable progress has yet to offer a satisfying explanation as to how the defining features of human language arise within the constraints of a neurophysiological system. Without an explanatory neurophysiological and computational account of the quintessential properties of human language - of hierarchical structure and domain, of function application and scope, and most definitely, of compositionality - our theories of language and the human mind and brain seem startlingly incomplete. 

Grants and awards

Max Planck Independent Research Group (2020-2024)

This five-year program is the main funding for setting up the "Language and Computation in Neural Systems" research group (Max Planck Society; 2020-2024).

2019   Aspasia Research Grant (Nederlandse Organisatie voor Wetenschappelijk Onderzoek (NWO))

2018   VIDI Research Grant (NWO, 2019-2024) “The rhythms of computation: A combinatorial mechanism for language production and comprehension"

Shared Grants (co-PI or team leader)

2019   Language in Interaction Consortium: Big Question 5 (co-PI with Prof. Roshan Cools and team leader of sub-questions 2 and 3; share = 50%) (NWO; 2019-2023)

2017   Research Grant (co-PI with Dr. Patrick Sturt; share = 50%) (The Leverhulme Trust, United Kingdom, 2017-2019) “Integration of Information in Reading"
