Language processing involves at least two functional sub-components (Hagoort, 2005): a Mental Lexicon, the long-term store of words and their phonological, morphosyntactic, and semantic features, which are activated by auditory or visual input, and a combinatorial processor (Unification) that integrates lexical information into a sentence-level interpretation. It is currently not known how words are encoded in the neurobiological infrastructure of the Mental Lexicon, how they are maintained over time, or how they are retrieved from phonological or orthographic cues during processing.
My PhD project aims to build a neurobiologically realistic model of the Mental Lexicon based on simulated spiking recurrent neural networks and theoretical insight. Previous work on engram formation in computational neuroscience suggests that long-term storage relies on the interaction of several unsupervised plasticity principles operating at different timescales. In this project, I investigate how these plasticity principles interact with structural network properties, including the structure of neurons' dendritic trees, lateral connectivity within cortical layers, and connectivity across layers in cortical columns. The project also addresses memory consolidation and the plasticity-stability dilemma. Network models of the Mental Lexicon are integrated with Unification networks (Fitz et al., 2019) in order to test linguistic theories about the proposed feature structure of words in the larger context of combinatorial sentence processing.