Publications

  • Fitz, H., Uhlmann, M., Van den Broek, D., Duarte, R., Hagoort, P., & Petersson, K. M. (2020). Neuronal spike-rate adaptation supports working memory in language processing. Proceedings of the National Academy of Sciences of the United States of America, 117(34), 20881-20889. doi:10.1073/pnas.2000222117.

    Abstract

    Language processing involves the ability to store and integrate pieces of information in working memory over short periods of time. According to the dominant view, information is maintained through sustained, elevated neural activity. Other work has argued that short-term synaptic facilitation can serve as a substrate of memory. Here, we propose an account where memory is supported by intrinsic plasticity that downregulates neuronal firing rates. Single neuron responses are dependent on experience and we show through simulations that these adaptive changes in excitability provide memory on timescales ranging from milliseconds to seconds. On this account, spiking activity writes information into coupled dynamic variables that control adaptation and move at slower timescales than the membrane potential. From these variables, information is continuously read back into the active membrane state for processing. This neuronal memory mechanism does not rely on persistent activity, excitatory feedback, or synaptic plasticity for storage. Instead, information is maintained in adaptive conductances that reduce firing rates and can be accessed directly without cued retrieval. Memory span is systematically related to both the time constant of adaptation and baseline levels of neuronal excitability. Interference effects within memory arise when adaptation is long-lasting. We demonstrate that this mechanism is sensitive to context and serial order, which makes it suitable for temporal integration in sequence processing within the language domain. We also show that it enables the binding of linguistic features over time within dynamic memory registers. This work provides a step towards a computational neurobiology of language.

    A minimal simulation sketch of this adaptation mechanism appears after the publication list.
  • Duarte, R., Uhlmann, M., Van den Broek, D., Fitz, H., Petersson, K. M., & Morrison, A. (2018). Encoding symbolic sequences with spiking neural reservoirs. In Proceedings of the 2018 International Joint Conference on Neural Networks (IJCNN). doi:10.1109/IJCNN.2018.8489114.

    Abstract

    Biologically inspired spiking networks are an important tool to study the nature of computation and cognition in neural systems. In this work, we investigate the representational capacity of spiking networks engaged in an identity mapping task. We compare two schemes for encoding symbolic input: one in which input is injected as a direct current and one where input is delivered as a spatio-temporal spike pattern. We test the ability of networks to discriminate their input as a function of the number of distinct input symbols. We also compare performance using either membrane potentials or filtered spike trains as the state variable. Furthermore, we investigate how the circuit behavior depends on the balance between excitation and inhibition, and the degree of synchrony and regularity in its internal dynamics. Finally, we compare different linear methods of decoding population activity onto desired target labels. Overall, our results suggest that even this simple mapping task is strongly influenced by design choices on input encoding, state variables, circuit characteristics, and decoding methods, and that these factors can interact in complex ways. This work highlights the importance of constraining computational network models of behavior by available neurobiological evidence.

    A minimal decoding sketch in this spirit appears after the publication list.
  • Frank, S. L., & Fitz, H. (2016). Reservoir computing and the Sooner-is-Better bottleneck [Commentary on Christiansen & Chater]. Behavioral and Brain Sciences, 39: e73. doi:10.1017/S0140525X15000783.

    Abstract

    Prior language input is not lost but integrated with the current input. This principle is demonstrated by “reservoir computing”: Untrained recurrent neural networks project input sequences onto a random point in high-dimensional state space. Earlier inputs can be retrieved from this projection, albeit less reliably so as more input is received. The bottleneck is therefore not “Now-or-Never” but “Sooner-is-Better.”

    A minimal reservoir sketch of this retrieval effect appears after the publication list.
  • Poletiek, F. H., Fitz, H., & Bocanegra, B. R. (2016). What baboons can (not) tell us about natural language grammars. Cognition, 151, 108-112. doi:10.1016/j.cognition.2015.04.016.

    Abstract

    Rey et al. (2012) present data from a study with baboons that they interpret as support for the idea that center-embedded structures in human language have their origin in low-level memory mechanisms and associative learning. Critically, the authors claim that the baboons showed a behavioral preference for center-embedded sequences over other types of sequences. We argue that the baboons’ response patterns suggest that two mechanisms are involved: first, they can be trained to associate a particular response with a particular stimulus, and, second, when faced with two conditioned stimuli in a row, they respond to the most recent one first, copying behavior they had been rewarded for during training. Although Rey et al.’s (2012) experiment shows that the baboons’ behavior is driven by low-level mechanisms, it is not clear how the animal behavior reported bears on the phenomenon of center-embedded structures in human syntax. Hence, (1) natural language syntax may indeed have been shaped by low-level mechanisms, and (2) the baboons’ behavior is driven by low-level stimulus-response learning, as Rey et al. propose. But is the second evidence for the first? We discuss in what ways this study can and cannot provide evidence for explaining the origin of center-embedded recursion in human grammar. More generally, their study provokes an interesting reflection on the use of animal studies to understand features of the human linguistic system.
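Illustrative code sketches

The sketches below are editorial illustrations added to this page, not code from the papers; every parameter, helper name, and dataset in them is an assumption chosen for brevity. First, a minimal NumPy sketch of the memory-through-adaptation idea in Fitz et al. (2020): a leaky integrate-and-fire neuron with a slow adaptation variable that spikes increment and that decays with a long time constant, so a stimulus trace outlives the fast membrane dynamics.

```python
import numpy as np

# Editorial sketch, not the paper's model: spike-rate adaptation as memory.
dt = 0.1            # time step (ms)
tau_m = 20.0        # membrane time constant (ms)
tau_a = 500.0       # adaptation time constant (ms); sets the memory span
v_th, v_reset = 1.0, 0.0
delta_a = 0.05      # adaptation increment per spike (illustrative value)

def simulate(I):
    """Integrate an adaptive LIF neuron over input current I (one value per step)."""
    v, a = 0.0, 0.0
    spikes, a_trace = [], []
    for t, i_ext in enumerate(I):
        v += dt / tau_m * (-v - a + i_ext)   # membrane with adaptation current
        a -= dt / tau_a * a                  # slow decay of the adaptation variable
        if v >= v_th:
            v = v_reset
            a += delta_a                     # spiking writes into the slow variable
            spikes.append(t * dt)
        a_trace.append(a)
    return spikes, np.array(a_trace)

# A 200 ms pulse drives spiking; the membrane forgets within ~tau_m,
# but the adaptation variable still carries a stimulus trace much later.
T = int(3000 / dt)
I = np.zeros(T)
I[: int(200 / dt)] = 2.0
spikes, a_trace = simulate(I)
print(f"spikes during pulse: {len(spikes)}")
print(f"adaptation trace at t=1000 ms: {a_trace[int(1000 / dt)]:.3f}")
```

Raising tau_a lengthens the trace, consistent with the abstract's claim that memory span scales with the adaptation time constant.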
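Second, a sketch in the spirit of Duarte et al. (2018): reading out symbol identity from filtered spike trains with a linear (ridge-regression) decoder. The spike data here are synthetic stand-ins generated from random per-symbol firing profiles, not a simulated spiking circuit, and all settings are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

n_neurons, n_steps, n_symbols = 100, 50, 4
dt, tau = 1.0, 20.0   # filter time step and time constant (ms, illustrative)

def filtered_state(spikes):
    """Exponentially filter binary spike trains (n_steps x n_neurons);
    return the population state vector at the final time step."""
    x = np.zeros(spikes.shape[1])
    for s_t in spikes:
        x += dt / tau * (-x) + s_t
    return x

# Synthetic data: each symbol elevates firing in a random neuron subset.
profiles = rng.random((n_symbols, n_neurons)) < 0.1
def sample(symbol):
    p = 0.02 + 0.3 * profiles[symbol]          # per-neuron spike probability
    return rng.random((n_steps, n_neurons)) < p

X = np.array([filtered_state(sample(s))
              for s in range(n_symbols) for _ in range(50)])
y = np.repeat(np.arange(n_symbols), 50)

# Ridge-regression readout onto one-hot targets; classify by argmax.
Y = np.eye(n_symbols)[y]
W = np.linalg.solve(X.T @ X + 1e-3 * np.eye(n_neurons), X.T @ Y)
print(f"decoding accuracy: {np.mean(np.argmax(X @ W, axis=1) == y):.2f}")
```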
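Third, a minimal echo state network illustrating the Frank & Fitz (2016) commentary: an untrained random recurrent network retains earlier inputs in its current state, and a linear readout can recover them, less reliably the further back they lie. Network size, spectral radius, and the ridge penalty are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

n_res, n_sym, T = 200, 4, 2000
W = rng.normal(0.0, 1.0, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius < 1
W_in = rng.normal(0.0, 1.0, (n_res, n_sym))

seq = rng.integers(0, n_sym, T)                   # random symbol sequence
U = np.eye(n_sym)[seq]                            # one-hot inputs

# Drive the untrained reservoir and record its states.
X = np.zeros((T, n_res))
x = np.zeros(n_res)
for t in range(T):
    x = np.tanh(W @ x + W_in @ U[t])
    X[t] = x

# For each lag k, train a linear readout to recover the input k steps back.
for k in range(8):
    Xs, ys = X[k:], seq[: T - k]
    Y = np.eye(n_sym)[ys]
    W_out = np.linalg.solve(Xs.T @ Xs + 1e-2 * np.eye(n_res), Xs.T @ Y)
    acc = np.mean(np.argmax(Xs @ W_out, axis=1) == ys)
    print(f"lag {k}: retrieval accuracy {acc:.2f}")
```

With these settings, accuracy is typically near-perfect at lag 0 and falls toward chance (0.25 for four symbols) as the lag grows: earlier inputs are still present in the state, just harder to retrieve, which is the “Sooner-is-Better” profile.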
