Kristijan Armeni

How does our ability to process complex natural language emerge from brain dynamics? In a series of projects, we are analysing electrophysiological dynamics (magnetoencephalography, MEG) with brain-inspired models for language processing, known as artificial neural networks. We allow the networks to adjust their internal “synapses” through exposure to MEG dynamics recorded during audiobook listening. We then present such “brain-optimised” networks with unseen narratives and record both their predictive success and their internal dynamics. By doing so, we will gain insight into how computations in the algorithmic “black box” relate to the observed brain signals.
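The logic of this approach can be illustrated with a minimal, purely hypothetical sketch: a small recurrent network is driven by a training signal standing in for MEG dynamics, a readout is fit to predict the signal's next sample, and predictive success is then measured on a held-out segment standing in for an unseen narrative. All names, dimensions, and the synthetic signal are illustrative assumptions, not the actual analysis pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_signal(n_steps):
    """Stand-in for a single MEG channel: temporally smoothed noise."""
    x = rng.standard_normal(n_steps)
    kernel = np.ones(10) / 10
    return np.convolve(x, kernel, mode="same")

def run_rnn(w_in, w_rec, signal):
    """Drive a small recurrent network with the signal; collect its states."""
    n_units = w_rec.shape[0]
    h = np.zeros(n_units)
    states = []
    for x_t in signal:
        h = np.tanh(w_in * x_t + w_rec @ h)
        states.append(h.copy())
    return np.array(states)

n_units = 50
w_in = rng.standard_normal(n_units) * 0.5
w_rec = rng.standard_normal((n_units, n_units)) * (0.9 / np.sqrt(n_units))

train, test = make_signal(500), make_signal(200)

# "Adjusting internal synapses" is reduced here to fitting a linear
# readout (ridge regression) that predicts the next sample of the
# training signal from the network's internal states.
H = run_rnn(w_in, w_rec, train[:-1])
y = train[1:]
readout = np.linalg.solve(H.T @ H + 1e-3 * np.eye(n_units), H.T @ y)

# Predictive success on an unseen "narrative": correlation between
# predicted and actual next samples of the held-out signal.
H_test = run_rnn(w_in, w_rec, test[:-1])
pred = H_test @ readout
r = np.corrcoef(pred, test[1:])[0, 1]
print(f"held-out prediction r = {r:.2f}")
```

The internal states collected along the way are exactly the kind of quantity one can then relate back to the observed brain signals.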

Artificial neural networks are powerful algorithms for processing natural language; however, their architectures are a considerable abstraction of the actual biophysical processes in the brain. Neural dynamics are short-lived and fast, unfolding over milliseconds, yet information during language processing must be maintained in memory for seconds and minutes. What properties endow biological neural networks with the memory needed to process structured sequences such as natural language? To answer this question, we take a “synthetic biology” approach: we model neural dynamics with biologically plausible networks of spiking neurons whose task is to resolve non-adjacent dependencies in language-like sequences of symbols. By systematically varying the membrane dynamics of the neurons, we will determine how processing memory emerges from networks of otherwise “forgetful” neurons.
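Why membrane dynamics matter for non-adjacent dependencies can be seen in a toy sketch, assuming a single leaky neuron: a cue symbol injects a unit pulse of current, the dependent symbol arrives several steps later, and whether any trace of the cue survives the gap depends on the membrane time constant. The parameters below are illustrative, not taken from the actual spiking-network models.

```python
import numpy as np

def membrane_trace(tau, gap, dt=1.0):
    """Leaky integration, v' = -v / tau, after a unit pulse at t = 0.

    Returns the residual membrane "trace" of the cue after `gap` steps.
    """
    v = 1.0
    for _ in range(int(gap / dt)):
        v += dt * (-v / tau)
    return v

gap = 10  # steps separating the cue from its dependent symbol
for tau in (2.0, 10.0, 50.0):
    trace = membrane_trace(tau, gap)
    print(f"tau = {tau:5.1f} -> trace after gap: {trace:.3f}")
```

With a fast membrane (small tau) the trace decays to nearly nothing before the dependent symbol arrives, while a slower membrane retains a usable signal; systematically sweeping such time constants is the spirit of the approach described above.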
