Presentations

  • Quaresima, A., Van den Broek, D., Fitz, H., Duarte, R., Hagoort, P., & Petersson, K. M. (2022). The Tripod neuron: a minimal model of dendritic computation. Poster presented at Dendrites 2022: Dendritic anatomy, molecules and function, Heraklion, Greece.
  • Quaresima, A., Fitz, H., Duarte, R., Van den Broek, D., Hagoort, P., & Petersson, K. M. (2022). Dendritic NMDARs facilitate Up and Down states. Poster presented at Bernstein Conference 2022, Berlin, Germany.
  • Quaresima, A., Van den Broek, D., Fitz, H., Duarte, R., & Petersson, K. M. (2020). A minimal reduction of dendritic structure and its functional implication for sequence processing in biological neurons. Poster presented at the Twelfth Annual (Virtual) Meeting of the Society for the Neurobiology of Language (SNL 2020).
  • Armeni, K., Van den Broek, D., & Fitz, H. (2019). Neuronal memory for processing sequences with non-adjacent dependencies. Poster presented at the 15th Bernstein Conference on Computational Neuroscience, Berlin, Germany.
  • Fitz, H., & Van den Broek, D. (2019). Modeling the mental lexicon pt.2. Talk presented at the Language in Interaction Big Question 1 workshop, Radboud University, Nijmegen, The Netherlands.
  • Fitz, H., & Van den Broek, D. (2019). Neurobiological modeling of the mental lexicon. Poster presented at the Eleventh Annual Meeting of the Society for the Neurobiology of Language (SNL 2019), Helsinki, Finland.
  • Duarte, R., Uhlmann, M., Van den Broek, D., Fitz, H., Petersson, K. M., & Morrison, A. (2018). Encoding symbolic sequences with spiking neural reservoirs. Talk presented at the 2018 International Joint Conference on Neural Networks (IJCNN). Rio de Janeiro, Brazil. 2018-07-08 - 2018-07-13.
  • Fitz, H., & Van den Broek, D. (2018). Modeling the mental lexicon pt.1. Talk presented at the Language in Interaction Big Question 1 workshop, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands.
  • Van den Broek, D. (2018). From genes to language: How CNTNAP2 affects sentence processing in human neural networks. Poster presented at the IMPRS Conference on Interdisciplinary Approaches in the Language Sciences, Nijmegen, The Netherlands.
  • Fitz, H., Van den Broek, D., Uhlmann, M., Duarte, R., Hagoort, P., & Petersson, K. M. (2017). Activity-silent short-term memory for language processing. Poster presented at the 1st Annual Conference on Cognitive Computational Neuroscience (CCN 2017), New York, NY, USA.
  • Uhlmann, M., Van den Broek, D., Fitz, H., Hagoort, P., & Petersson, K. M. (2017). Ambiguity resolution in a spiking network model of sentence comprehension. Poster presented at the 1st Annual Conference on Cognitive Computational Neuroscience (CCN 2017), New York, NY, USA.
  • Van den Broek, D., Uhlmann, M., Duarte, R., Fitz, H., Hagoort, P., & Petersson, K. M. (2017). The best spike filter kernel is a neuron. Poster presented at the 1st Annual Conference on Cognitive Computational Neuroscience (CCN 2017), New York, NY, USA.
  • Fitz, H., Van den Broek, D., Uhlmann, M., Duarte, R., Hagoort, P., & Petersson, K. M. (2016). Silent memory for language processing. Poster presented at the Eighth Annual Meeting of the Society for the Neurobiology of Language (SNL 2016), London, UK.

    Abstract

    Integrating sentence meaning over time requires memory ranging from milliseconds (words) to seconds (sentences) and minutes (discourse). How do transient events like action potentials in the human language system support memory at these different temporal scales? Here we investigate the nature of processing memory in a neurobiologically motivated model of sentence comprehension. The model was a recurrent, sparsely connected network of spiking neurons. Synaptic weights were created randomly and were not subject to adaptation or learning. As input, the network received word sequences generated from construction grammar templates and their syntactic alternations (e.g., active/passive transitives, transfer datives, caused motion). The language environment had various features such as tense, aspect, noun/verb number agreement, and pronouns, which created positional variation in the input. Similar to natural speech, word durations varied between 50 ms and 500 ms of real, physical time depending on their length. The model's task was to incrementally interpret these word sequences in terms of semantic roles. There were 8 target roles (e.g., Agent, Patient, Recipient) and the language generated roughly 1.2 million distinct utterances, from which a sequence of 10,000 words was randomly selected and filtered through the network. A set of readout neurons was then calibrated by means of logistic regression to decode the internal network dynamics onto the target semantic roles. In order to accomplish the role assignment task, network states had to encode and maintain past information from multiple cues that could occur several words apart. To probe the circuit's memory capacity, we compared models where network connectivity, the shape of synaptic currents, and properties of neuronal adaptation were systematically manipulated. We found that task-relevant memory could be derived from a mechanism of neuronal spike-rate adaptation, modelled as a conductance that hyperpolarized the membrane following a spike and relaxed to baseline exponentially with a fixed time constant. By acting directly on the membrane potential, it provided processing memory that allowed the system to successfully interpret its sentence input. Near-optimal performance was also observed when an exponential decay model of post-synaptic currents was added into the circuit, with time constants approximating excitatory NMDA and inhibitory GABA-B receptor dynamics. Thus, the information flow was extended over time, creating memory characteristics comparable to spike-rate adaptation. Recurrent connectivity, in contrast, played only a limited role in maintaining information; an acyclic version of the recurrent circuit achieved similar accuracy. This indicates that random recurrent connectivity at the modelled spatial scale did not contribute additional processing memory to the task. Taken together, these results suggest that memory for language might be provided by activity-silent dynamic processes rather than the active replay of past input, as in storage-and-retrieval models of working memory. Furthermore, memory in biological networks can take multiple forms on a continuum of time scales. The development of neurobiologically realistic, causal models will therefore be critical for our understanding of the role of memory in language processing.
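
    The adaptation mechanism described in this abstract lends itself to a compact illustration. Below is a minimal sketch, in Python with NumPy, of a leaky integrate-and-fire neuron with a spike-rate adaptation conductance: each spike increments a hyperpolarizing conductance that relaxes back to baseline exponentially with a fixed time constant, so inter-spike intervals lengthen under constant drive. All names and parameter values are illustrative assumptions, not the values used in the actual model.

    ```python
    import numpy as np

    # Minimal sketch (assumed parameters) of spike-rate adaptation as described
    # in the abstract: a conductance g_a that hyperpolarizes the membrane after
    # each spike and relaxes to baseline exponentially with a fixed time constant.
    dt      = 1e-4      # integration step (s)
    tau_m   = 20e-3     # membrane time constant (s)
    tau_a   = 500e-3    # adaptation time constant (s); slow, hence 'memory'
    E_L     = -70e-3    # leak / resting potential (V)
    E_a     = -80e-3    # adaptation reversal potential (V), below rest
    v_th    = -50e-3    # spike threshold (V)
    v_reset = -65e-3    # post-spike reset potential (V)
    delta_g = 0.05      # conductance increment per spike (relative to leak)

    def simulate(drive, t_max=2.0):
        """Leaky integrate-and-fire neuron with an adaptation conductance.
        `drive` is a constant external input expressed in volts (input
        current divided by the leak conductance)."""
        v, g_a, spikes = E_L, 0.0, []
        for k in range(int(t_max / dt)):
            # Leak pulls v toward E_L; adaptation pulls it toward E_a (below rest).
            dv = (-(v - E_L) - g_a * (v - E_a) + drive) / tau_m
            v += dv * dt
            # Adaptation conductance decays exponentially toward zero.
            g_a -= (g_a / tau_a) * dt
            if v >= v_th:            # threshold crossing: emit a spike
                spikes.append(k * dt)
                v = v_reset
                g_a += delta_g       # each spike strengthens adaptation
        return np.array(spikes)

    spikes = simulate(drive=25e-3)
    # Inter-spike intervals grow as g_a accumulates, i.e. the neuron adapts:
    print(np.diff(spikes)[:5])
    ```

    In the full model, a logistic-regression readout was calibrated on such network states to decode semantic roles; that stage is omitted from this sketch.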
  • Fitz, H., Van den Broek, D., Uhlmann, M., Duarte, R., Hagoort, P., & Petersson, K. M. (2016). Silent memory for language processing. Talk presented at Architectures and Mechanisms for Language Processing (AMLaP 2016). Bilbao, Spain. 2016-09-01 - 2016-09-03.
  • Van den Broek, D., Uhlmann, M., Fitz, H., Hagoort, P., & Petersson, K. M. (2016). Spiking neural networks for semantic processing. Poster presented at the Language in Interaction Summerschool on Human Language: From Genes and Brains to Behavior, Berg en Dal, The Netherlands.
