Presentations

  • Quaresima, A., Fitz, H., Duarte, R., Hagoort, P., & Petersson, K. M. (2023). Dendritic non-linearity supports the formation and reactivation of word memories as cell assemblies. Poster presented at the 15th Annual Meeting of the Society for the Neurobiology of Language (SNL 2023), Marseille, France.
  • Quaresima, A., Fitz, H., Duarte, R., Hagoort, P., & Petersson, K. M. (2023). Dendritic non-linearity supports the formation and reactivation of word memories as cell assemblies. Talk presented at the 15th Annual Meeting of the Society for the Neurobiology of Language (SNL 2023). Marseille, France. 2023-10-24 - 2023-10-26.
  • Quaresima, A., Van den Broek, D., Fitz, H., Duarte, R., Hagoort, P., & Petersson, K. M. (2022). The Tripod neuron: a minimal model of dendritic computation. Poster presented at Dendrites 2022: Dendritic anatomy, molecules and function, Heraklion, Greece.
  • Quaresima, A., Fitz, H., Duarte, R., Van den Broek, D., Hagoort, P., & Petersson, K. M. (2022). Dendritic NMDARs facilitate Up and Down states. Poster presented at Bernstein Conference 2022, Berlin, Germany.
  • Vlachos, P.-E., Quaresima, A., & Fitz, H. (2022). Sequence learning and replay through excitatory and inhibitory synaptic plasticity in recurrent spiking neural networks. Poster presented at Bernstein Conference 2022, Berlin, Germany.
  • Quaresima, A., Van den Broek, D., Fitz, H., Duarte, R., & Petersson, K. M. (2020). A minimal reduction of dendritic structure and its functional implication for sequence processing in biological neurons. Poster presented at the Twelfth Annual (Virtual) Meeting of the Society for the Neurobiology of Language (SNL 2020).
  • Armeni, K., Van den Broek, D., & Fitz, H. (2019). Neuronal memory for processing sequences with non-adjacent dependencies. Poster presented at the 15th Bernstein Conference on Computational Neuroscience, Berlin, Germany.
  • Fitz, H., & Van den Broek, D. (2019). Modeling the mental lexicon pt.2. Talk presented at the Language in Interaction Big Question 1 workshop, Radboud University. Nijmegen, The Netherlands. 2019.
  • Fitz, H., & Chang, F. (2019). Language ERPs reflect learning through prediction error propagation. Poster presented at the Eleventh Annual Meeting of the Society for the Neurobiology of Language (SNL 2019), Helsinki, Finland.
  • Fitz, H. (2019). Language modeling from neurobiological principles [invited talk]. Talk presented at the Department of Language and Linguistics. University of Salzburg. Salzburg, Austria. 2019.
  • Fitz, H., & Van den Broek, D. (2019). Neurobiological modeling of the mental lexicon. Poster presented at the Eleventh Annual Meeting of the Society for the Neurobiology of Language (SNL 2019), Helsinki, Finland.
  • Quaresima, A., Duarte, R., & Fitz, H. (2019). Review of computational models of cortical microcircuits. Poster presented at the Max Planck Neuro Symposium. Max Planck Institute of Neurobiology, Martinsried, Germany.
  • Duarte, R., Uhlmann, M., Van den Broek, D., Fitz, H., Petersson, K. M., & Morrison, A. (2018). Encoding symbolic sequences with spiking neural reservoirs. Talk presented at the 2018 International Joint Conference on Neural Networks (IJCNN). Rio de Janeiro, Brazil. 2018-07-08 - 2018-07-13.
  • Fitz, H., & Van den Broek, D. (2018). Modeling the mental lexicon pt.1. Talk presented at the Language in Interaction Big Question 1 workshop, Max Planck Institute for Psycholinguistics. Nijmegen, The Netherlands. 2018.
  • Fitz, H. (2018). Neurobiological models of the mental lexicon [invited talk]. Talk presented at the Language in Interaction Consortium meeting, Institute for Logic, Language and Computation. University of Amsterdam. Amsterdam, The Netherlands. 2018.
  • Fitz, H., Van den Broek, D., Uhlmann, M., Duarte, R., Hagoort, P., & Petersson, K. M. (2017). Activity-silent short-term memory for language processing. Poster presented at the 1st Annual Conference on Cognitive Computational Neuroscience (CCN 2017), New York, NY, USA.
  • Fitz, H. (2017). Language ERPs reflect learning through prediction error propagation [Invited talk]. Talk presented at the Workshop on the Neurobiology of Prediction in Language Processing. Nijmegen, The Netherlands. 2017-01.
  • Uhlmann, M., Van den Broek, D., Fitz, H., Hagoort, P., & Petersson, K. M. (2017). Ambiguity resolution in a spiking network model of sentence comprehension. Poster presented at the 1st Annual Conference on Cognitive Computational Neuroscience (CCN 2017), New York, NY, USA.
  • Van den Broek, D., Uhlmann, M., Duarte, R., Fitz, H., Hagoort, P., & Petersson, K. M. (2017). The best spike filter kernel is a neuron. Poster presented at the 1st Annual Conference on Cognitive Computational Neuroscience (CCN 2017), New York, NY, USA.
  • Fitz, H., Hagoort, P., & Petersson, K. M. (2016). A spiking recurrent network for semantic processing. Poster presented at the Nijmegen Lectures 2016, Nijmegen, The Netherlands.
  • Fitz, H. (2016). Language modelling from neurobiological principles [Invited talk]. Talk presented at the Amsterdam Center for Brain and Cognition Summer School on Computational Modelling and Cognitive Development. Amsterdam, The Netherlands. 2016.
  • Fitz, H., Van den Broek, D., Uhlmann, M., Duarte, R., Hagoort, P., & Petersson, K. M. (2016). Silent memory for language processing. Poster presented at the Eighth Annual Meeting of the Society for the Neurobiology of Language (SNL 2016), London, UK.

    Abstract

    Integrating sentence meaning over time requires memory ranging from milliseconds (words) to seconds (sentences) and minutes (discourse). How do transient events like action potentials in the human language system support memory at these different temporal scales? Here we investigate the nature of processing memory in a neurobiologically motivated model of sentence comprehension. The model was a recurrent, sparsely connected network of spiking neurons. Synaptic weights were created randomly and there was no adaptation or learning. As input the network received word sequences generated from construction grammar templates and their syntactic alternations (e.g., active/passive transitives, transfer datives, caused motion). The language environment had various features such as tense, aspect, noun/verb number agreement, and pronouns which created positional variation in the input. Similar to natural speech, word durations varied between 50 ms and 0.5 s of real, physical time depending on their length. The model's task was to incrementally interpret these word sequences in terms of semantic roles. There were 8 target roles (e.g., Agent, Patient, Recipient) and the language generated roughly 1.2 million distinct utterances from which a sequence of 10,000 words was randomly selected and filtered through the network. A set of readout neurons was then calibrated by means of logistic regression to decode the internal network dynamics onto the target semantic roles. In order to accomplish the role assignment task, network states had to encode and maintain past information from multiple cues that could occur several words apart. To probe the circuit's memory capacity, we compared models where network connectivity, the shape of synaptic currents, and properties of neuronal adaptation were systematically manipulated. We found that task-relevant memory could be derived from a mechanism of neuronal spike-rate adaptation, modelled as a conductance that hyperpolarized the membrane following a spike and relaxed to baseline exponentially with a fixed time-constant. By acting directly on the membrane potential it provided processing memory that allowed the system to successfully interpret its sentence input. Near-optimal performance was also observed when an exponential decay model of post-synaptic currents was added into the circuit, with time-constants approximating excitatory NMDA and inhibitory GABA-B receptor dynamics. Thus, the information flow was extended over time, creating memory characteristics comparable to spike-rate adaptation. Recurrent connectivity, in contrast, only played a limited role in maintaining information; an acyclic version of the recurrent circuit achieved similar accuracy. This indicates that random recurrent connectivity at the modelled spatial scale did not contribute additional processing memory to the task. Taken together, these results suggest that memory for language might be provided by activity-silent dynamic processes rather than the active replay of past input as in storage-and-retrieval models of working memory. Furthermore, memory in biological networks can take multiple forms on a continuum of time-scales. Therefore, the development of neurobiologically realistic, causal models will be critical for our understanding of the role of memory in language processing.
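
    The key mechanism in this abstract, spike-rate adaptation modelled as a conductance that hyperpolarizes the membrane after each spike and relaxes back exponentially, can be illustrated with a minimal adaptive leaky integrate-and-fire neuron. The sketch below is not the published model; all parameter values and the constant input current are illustrative assumptions.

```python
# Minimal sketch of spike-rate adaptation: a conductance that is incremented at
# each spike, pulls the membrane toward a hyperpolarising reversal potential,
# and decays back to zero exponentially. Parameter values are assumptions.
import numpy as np

dt      = 1e-3     # integration step (s)
tau_m   = 20e-3    # membrane time constant (s)
tau_a   = 500e-3   # adaptation time constant (s); slow decay = lingering trace
E_L     = -70e-3   # leak reversal potential (V)
E_K     = -90e-3   # adaptation reversal potential (V), hyperpolarising
V_th    = -50e-3   # spike threshold (V)
V_reset = -65e-3   # reset potential (V)
g_L     = 10e-9    # leak conductance (S)
C_m     = g_L * tau_m   # membrane capacitance (F)
delta_g = 2e-9     # adaptation conductance increment per spike (S)

def simulate(I_ext):
    """Adaptive leaky integrate-and-fire neuron driven by a current trace I_ext (A)."""
    V, g_a = E_L, 0.0
    spike_times = []
    for step, I in enumerate(I_ext):
        # membrane update: leak current + adaptation current + external input
        dV = (g_L * (E_L - V) + g_a * (E_K - V) + I) / C_m
        V += dt * dV
        # the adaptation conductance relaxes back to baseline exponentially
        g_a -= dt * g_a / tau_a
        if V >= V_th:
            V = V_reset
            g_a += delta_g          # each spike strengthens the hyperpolarising pull
            spike_times.append(step * dt)
    return np.array(spike_times)

# Constant suprathreshold input: inter-spike intervals grow as adaptation builds up,
# so the neuron's state carries a slowly decaying trace of its recent activity.
spikes = simulate(np.full(2000, 0.35e-9))
print(f"{len(spikes)} spikes over 2 s; first interval "
      f"{spikes[1] - spikes[0]:.3f} s, last {spikes[-1] - spikes[-2]:.3f} s")
```

    Because the adaptation conductance decays over hundreds of milliseconds, the neuron's state retains a trace of its recent spiking history even when no spikes are being emitted, which is the kind of activity-silent processing memory the abstract refers to.
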
  • Fitz, H., Van den Broek, D., Uhlmann, M., Duarte, R., Hagoort, P., & Petersson, K. M. (2016). Silent memory for language processing. Talk presented at Architectures and Mechanisms for Language Processing (AMLaP 2016). Bilbao, Spain. 2016-09-01 - 2016-09-03.
  • Uhlmann, M., Tsoukala, C., Van den Broek, D., Fitz, H., & Petersson, K. M. (2016). Dealing with the problem of two: Temporal binding in sentence understanding with neural networks. Poster presented at the Language in Interaction Summerschool on Human Language: From Genes and Brains to Behavior, Berg en Dal, The Netherlands.
  • Van den Broek, D., Uhlmann, M., Fitz, H., Hagoort, P., & Petersson, K. M. (2016). Spiking neural networks for semantic processing. Poster presented at the Language in Interaction Summerschool on Human Language: From Genes and Brains to Behavior, Berg en Dal, The Netherlands.
  • Fitz, H., & Chang, F. (2015). Prediction in error-based learning explains sentence-level ERP effects. Talk presented at the 21st Architectures and Mechanisms for Language Processing Conference (AMLaP 2015). Valletta, Malta. 2015-09-03 - 2015-09-05.
  • Fitz, H., Hagoort, P., & Petersson, K. M. (2014). A spiking recurrent neural network for semantic processing. Poster presented at the Sixth Annual Meeting of the Society for the Neurobiology of Language (SNL 2014), Amsterdam, the Netherlands.

    Abstract

    Sentence processing requires the ability to establish thematic relations between constituents. Here we investigate the computational basis of this ability in a neurobiologically motivated comprehension model. The model has a tripartite architecture where input representations are supplied by the mental lexicon to a network that performs incremental thematic role assignment. Roles are combined into a representation of sentence-level meaning by a downstream system (semantic unification). Recurrent, sparsely connected, spiking networks were used which project a time-varying input signal (word sequences) into a high-dimensional, spatio-temporal pattern of activations. Local, adaptive linear read-out units were then calibrated to map the internal dynamics to desired output (thematic role sequences) [1]. Read-outs were adjusted on network dynamics driven by input sequences drawn from argument-structure templates with small variation in function words and larger variation in content words. Models were trained on sequences of 10K words for 200ms per word at a 1ms resolution, and tested on novel items generated from the language. We found that a static, random recurrent spiking network outperformed models that used only local word information without context. To improve performance, we explored various ways of increasing the model's processing memory (e.g., network size, time constants, sparseness, input strength, etc.) and employed spiking neurons with more dynamic variables (leaky integrate-and-fire versus Izhikevich neurons). The largest gain was observed when the model's input history was extended to include previous words and/or roles. Model behavior was also compared for localist and distributed encodings of word sequences. The latter were obtained by compressing lexical co-occurrence statistics into continuous-valued vectors [2]. We found that performance for localist input was superior even though distributed representations contained extra information about word context and semantic similarity. Finally, we compared models that received input enriched with combinations of semantic features, word-category, and verb sub-categorization labels. Counter-intuitively, we found that adding this information to the model's lexical input did not further improve performance. Consistent with previous results, however, performance improved for increased variability in content words [3]. This indicates that the approach to comprehension taken here might scale to more diverse and naturalistic language input. Overall, the results suggest that active processing memory beyond pure state-dependent effects is important for sentence interpretation, and that memory in neurobiological systems might be actively computing [4]. Future work therefore needs to address how the structure of word representations interacts with enhanced processing memory in adaptive spiking networks. [1] Maass W., Natschläger T., & Markram H. (2002). Real-time computing without stable states: A new framework for neural computation based on perturbations. Neural Computation, 14: 2531-2560. [2] Mikolov, T., Chen, K., Corrado, G., & Dean, J. (2013). Efficient estimation of word representations in vector space. Proceedings of the International Conference on Learning Representations, Scottsdale/AZ. [3] Fitz, H. (2011). A liquid-state model of variability effects in learning nonadjacent dependencies. Proceedings of the 33rd Annual Conference of the Cognitive Science Society, Austin/TX. [4] Petersson, K.M., & Hagoort, P. (2012). The neurobiology of syntax: Beyond string-sets. Philosophical Transactions of the Royal Society B, 367: 1971-1983.
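
    The reservoir-and-readout scheme described in this abstract, a fixed random recurrent network whose state trajectory is decoded by calibrated linear readouts (following Maass et al. [1]), can be sketched in a few lines. The toy example below uses a rate-based tanh reservoir and a least-squares readout as stand-ins for the spiking network and its adaptive readouts; the miniature grammar, layer sizes, and parameters are assumptions made purely for illustration.

```python
# Rate-based stand-in for the reservoir-plus-readout approach: a fixed, sparse,
# random recurrent network projects a word sequence into a high-dimensional state
# trajectory, and a linear readout trained with least squares maps each state onto
# a thematic-role label. Toy language and all dimensions are assumptions.
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical mini-language: noun-verb-noun sentences whose words carry the
# target roles Agent, Action, Patient in that order.
vocab = ["dog", "cat", "chases", "bites", "boy", "girl"]
roles = ["Agent", "Action", "Patient"]

def make_sequence(n_sentences=400):
    words, labels = [], []
    for _ in range(n_sentences):
        words += [rng.choice(vocab[:2] + vocab[4:]), rng.choice(vocab[2:4]),
                  rng.choice(vocab[:2] + vocab[4:])]
        labels += [0, 1, 2]                        # Agent, Action, Patient
    return words, np.array(labels)

n_res, density, gain = 300, 0.1, 0.9
W_in  = rng.normal(0, 1, (n_res, len(vocab)))      # input projection (one-hot words)
W_rec = rng.normal(0, 1, (n_res, n_res)) * (rng.random((n_res, n_res)) < density)
W_rec *= gain / np.max(np.abs(np.linalg.eigvals(W_rec)))   # keep dynamics stable

def run_reservoir(words):
    """Drive the fixed reservoir with one-hot word inputs and record its states."""
    x, states = np.zeros(n_res), []
    for w in words:
        u = np.eye(len(vocab))[vocab.index(w)]
        x = np.tanh(W_rec @ x + W_in @ u)          # simple leakless update
        states.append(x.copy())
    return np.array(states)

# Calibrate the linear readout on one half of the stream, test on the other half.
words, labels = make_sequence()
S = run_reservoir(words)
half = len(labels) // 2
targets = np.eye(len(roles))[labels]
W_out, *_ = np.linalg.lstsq(S[:half], targets[:half], rcond=None)
pred = (S[half:] @ W_out).argmax(axis=1)
print("role-decoding accuracy:", (pred == labels[half:]).mean())
```

    The calibration step mirrors the abstract's read-out adjustment: the recurrent weights stay fixed and random, and only the linear map from network states to role labels is fitted.
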
  • Fitz, H., & Chang, F. (2014). Learning auxiliary inversion from structured messages. Poster presented at the 13th International Congress of the International Association for the Study of Child Language (IASCL), Amsterdam, the Netherlands.
  • Fitz, H., & Chang, F. (2014). Learning structure-dependent processing from combinatorial meaning. Talk presented at 20th Architectures and Mechanisms for Language Processing Conference (AMLaP 2014). Edinburgh, Scotland. 2014-09-04 - 2014-09-06.
  • Chang, F., Baumann, M., Pappert, S., & Fitz, H. (2013). Sprechen lemmas Deutsch? A verb position effect in German structural priming. Poster presented at the 19th Annual Conference on Architectures and Mechanisms for Language Processing (AMLaP 2013), Marseille, France.
  • Fitz, H. (2013). A primer on reservoir computing: Neural Networks and Symbolic Reasoning. Talk presented at Institute for Logic, Language and Computation. Amsterdam, The Netherlands. 2013-03.
  • Fitz, H. (2012). Relative clause processing in a connectionist model of sentence production. Talk presented at the Dondersdag, Language and Communication theme. Nijmegen, the Netherlands. 2012-10.
  • Fitz, H., & Chang, F. (2006). A PDP model of complex sentence production. Poster presented at Cognitio 2006 - Young researcher conference in cognitive science, Université du Québec à Montréal.

    Abstract

    Recursive productivity is considered a core property of natural language and the human language faculty (Hauser, Chomsky & Fitch, 2002). It has been argued that the capacity to produce an unbounded variety of utterances requires symbolic capabilities. Lacking structured representations, connectionist models of language processing are frequently criticized for their failure to generalize symbolically (Hadley, 1994; Marcus, 1998). Addressing these issues, we present a neural-symbolic learning model of sentence production, called the recursive dual-path model, which can cope with complex sentence structure in the form of embedded subordination of multiple levels. The model has separate pathways, one for mapping messages to words and one for sequence learning. The message is represented through binding of thematic roles to concepts by weight and is inspired by spatial processing of visual input. In selecting syntactic frames, the sequencing system is guided by an event-semantics layer which provides information about clause attachment, tense, aspect, and the relative prominence of message components. The model is tested on a structurally complex language built from simple clause constructions which are basic to human experience (Goldberg, 1995). We investigate the model's learning behavior concerning complex multi-clausal utterances and show that its performance matches differential trends in humans. Furthermore, we explore its ability to produce novel embedded structures and to map 'familiar' constituents to novel roles at novel sentence positions. The recursive dual-path model is joint work with Franklin Chang, postdoc researcher at NTT Communication Science Laboratory, Kyoto, Japan, and based on: Chang, F. (2002). Symbolically speaking: A connectionist model of sentence production. Cognitive Science, 26(5), 609-651.
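
    The dual-path architecture outlined in this abstract, a sequencing pathway and a meaning pathway in which thematic roles are bound to concepts by weights, can be sketched as a forward pass. The snippet below is a schematic, untrained toy that follows the description only loosely; the lexicon, layer sizes, random weights, and event-semantics vector are assumptions for illustration, and all learning is omitted.

```python
# Schematic forward pass through a dual-path production architecture: a sequencing
# pathway (simple recurrent network guided by event semantics) and a meaning pathway
# in which role-concept bindings act as fast weights. Untrained; weights are random.
import numpy as np

rng = np.random.default_rng(0)

concepts = ["DOG", "CAT", "CHASE"]
roles    = ["AGENT", "ACTION", "PATIENT"]
words    = ["dog", "cat", "chases"]
n_hid, n_evsem = 16, 4

# Message: role-to-concept bindings encoded as a weight matrix (fast weights),
# e.g. DOG bound to AGENT, CHASE to ACTION, CAT to PATIENT.
binding = np.zeros((len(roles), len(concepts)))
for role, concept in [("AGENT", "DOG"), ("ACTION", "CHASE"), ("PATIENT", "CAT")]:
    binding[roles.index(role), concepts.index(concept)] = 1.0

# Fixed random weights standing in for connections that would normally be learned.
W_ctx   = rng.normal(0, 0.5, (n_hid, n_hid))              # recurrent (context) weights
W_word  = rng.normal(0, 0.5, (n_hid, len(words)))         # previous word -> hidden
W_evsem = rng.normal(0, 0.5, (n_hid, n_evsem))            # event semantics -> hidden
W_role  = rng.normal(0, 0.5, (len(roles), n_hid))         # hidden -> role layer
W_out_c = rng.normal(0, 0.5, (len(words), len(concepts))) # concept -> word (meaning path)
W_out_h = rng.normal(0, 0.5, (len(words), n_hid))         # hidden -> word (sequencing path)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def step(prev, hidden, evsem):
    """One production step: both pathways contribute to the next word."""
    hidden      = np.tanh(W_ctx @ hidden + W_word @ prev + W_evsem @ evsem)
    role_act    = softmax(W_role @ hidden)     # sequencing system queries a role
    concept_act = binding.T @ role_act         # bindings retrieve the bound concept
    scores      = W_out_c @ concept_act + W_out_h @ hidden
    return scores.argmax(), hidden

hidden = np.zeros(n_hid)
prev   = np.zeros(len(words))
evsem  = rng.random(n_evsem)                   # stands in for tense/aspect/prominence cues
for _ in range(3):
    idx, hidden = step(prev, hidden, evsem)
    print(words[idx], end=" ")
    prev = np.eye(len(words))[idx]
print()
```

    In the actual model the two pathways are trained jointly so that the sequencing system learns when to query which role, while the role-concept bindings let familiar constituents appear in novel positions.
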
