Presentations

  • Fitz, H., Hagoort, P., & Petersson, K. M. (2016). A spiking recurrent network for semantic processing. Poster presented at the Nijmegen Lectures 2016, Nijmegen, The Netherlands.
  • Fitz, H., Van den Broek, D., Uhlmann, M., Duarte, R., Hagoort, P., & Petersson, K. M. (2016). Silent memory for language processing. Poster presented at the Eighth Annual Meeting of the Society for the Neurobiology of Language (SNL 2016), London, UK.

    Abstract

    Integrating sentence meaning over time requires memory ranging from milliseconds (words) to seconds (sentences) and minutes (discourse). How do transient events like action potentials in the human language system support memory at these different temporal scales? Here we investigate the nature of processing memory in a neurobiologically motivated model of sentence comprehension. The model was a recurrent, sparsely connected network of spiking neurons. Synaptic weights were created randomly and there was no adaptation or learning. As input, the network received word sequences generated from construction grammar templates and their syntactic alternations (e.g., active/passive transitives, transfer datives, caused motion). The language environment had various features such as tense, aspect, noun/verb number agreement, and pronouns, which created positional variation in the input. Similar to natural speech, word durations varied between 50ms and 0.5s of real, physical time depending on their length. The model's task was to incrementally interpret these word sequences in terms of semantic roles. There were 8 target roles (e.g., Agent, Patient, Recipient) and the language generated roughly 1.2 million distinct utterances, from which a sequence of 10,000 words was randomly selected and filtered through the network. A set of readout neurons was then calibrated by means of logistic regression to decode the internal network dynamics onto the target semantic roles. In order to accomplish the role assignment task, network states had to encode and maintain past information from multiple cues that could occur several words apart. To probe the circuit's memory capacity, we compared models where network connectivity, the shape of synaptic currents, and properties of neuronal adaptation were systematically manipulated.
We found that task-relevant memory could be derived from a mechanism of neuronal spike-rate adaptation, modelled as a conductance that hyperpolarized the membrane following a spike and relaxed to baseline exponentially with a fixed time-constant. By acting directly on the membrane potential, it provided processing memory that allowed the system to successfully interpret its sentence input. Near-optimal performance was also observed when an exponential decay model of post-synaptic currents was added into the circuit, with time-constants approximating excitatory NMDA and inhibitory GABA-B receptor dynamics. Thus, the information flow was extended over time, creating memory characteristics comparable to spike-rate adaptation. Recurrent connectivity, in contrast, only played a limited role in maintaining information; an acyclic version of the recurrent circuit achieved similar accuracy. This indicates that random recurrent connectivity at the modelled spatial scale did not contribute additional processing memory to the task. Taken together, these results suggest that memory for language might be provided by activity-silent dynamic processes rather than the active replay of past input as in storage-and-retrieval models of working memory. Furthermore, memory in biological networks can take multiple forms on a continuum of time-scales. Therefore, the development of neurobiologically realistic, causal models will be critical for our understanding of the role of memory in language processing.
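The decoding scheme described in this abstract — a fixed, randomly connected recurrent network whose running state is mapped onto semantic roles by a separately calibrated linear readout — can be sketched in miniature. The sketch below is illustrative only and not the authors' implementation: a rate-based tanh network stands in for the spiking dynamics, a least-squares readout stands in for logistic regression, and all names, dimensions, and the toy role-assignment task are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented toy dimensions: 5-word vocabulary, 100-unit network, 3 roles.
n_in, n_res, n_roles = 5, 100, 3

# Fixed, sparse random weights; nothing in the network itself is trained.
W_in = rng.normal(0.0, 0.5, (n_res, n_in))
W = rng.normal(0.0, 1.0, (n_res, n_res)) * (rng.random((n_res, n_res)) < 0.1)
W *= 0.9 / np.abs(np.linalg.eigvals(W)).max()  # keep the dynamics stable

def run(words):
    """Drive the network with one-hot word inputs and record its states."""
    x = np.zeros(n_res)
    states = []
    for w in words:
        x = np.tanh(W @ x + W_in @ np.eye(n_in)[w])
        states.append(x.copy())
    return np.array(states)

# Toy "role assignment": the target role is cued by the *previous* word,
# so the readout can only succeed if the state retains past input.
words = rng.integers(0, n_in, 2000)
roles = np.roll(words, 1) % n_roles
X = run(words)

# Calibrate the readout alone, by least squares (the abstract uses
# logistic regression; any trained linear decoder shows the same idea).
Y = np.eye(n_roles)[roles]
W_out, *_ = np.linalg.lstsq(X[1:], Y[1:], rcond=None)
acc = ((X[1:] @ W_out).argmax(axis=1) == roles[1:]).mean()
```

High readout accuracy on this task depends entirely on the untrained recurrent state carrying information about the preceding word, which is the "processing memory" the abstract probes.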
  • Fitz, H., Van den Broek, D., Uhlmann, M., Duarte, R., Hagoort, P., & Petersson, K. M. (2016). Silent memory for language processing. Talk presented at Architectures and Mechanisms for Language Processing (AMLaP 2016). Bilbao, Spain. 2016-09-01 - 2016-09-03.
  • Petersson, K. M. (2016). Language & the brain, science for everyone. Talk presented at the University of Algarve. Faro, Portugal. 2016.
  • Petersson, K. M. (2016). Neurobiology of Language. Talk presented at the Center for Biomedical Research. Faro, Portugal. 2016.
  • Udden, J., Hulten, A., Schoffelen, J.-M., Lam, N., Kempen, G., Petersson, K. M., & Hagoort, P. (2016). Dynamics of supramodal unification processes during sentence comprehension. Poster presented at the Eighth Annual Meeting of the Society for the Neurobiology of Language (SNL 2016), London, UK.

    Abstract

    It is generally assumed that structure-building processes in the spoken and written modalities are subserved by modality-independent lexical, morphological, grammatical, and conceptual processes. We present a large-scale neuroimaging study (N=204) on whether the unification of sentence structure is supramodal in this sense, testing whether observations replicate across written and spoken sentence materials. Activity in the unification network should increase when it is presented with a challenging sentence structure, irrespective of the input modality. We build on the well-established findings that multiple non-local dependencies, overlapping in time, are challenging, and that language users disprefer left- over right-branching sentence structures in written and spoken language, at least in mainly right-branching languages such as English and Dutch. We therefore focused our study, conducted with Dutch participants, on a left-branching processing complexity measure. Supramodal effects of left-branching complexity were observed in a left-lateralized perisylvian network. The left inferior frontal gyrus (LIFG) and the left posterior middle temporal gyrus (LpMTG) were most clearly associated with left-branching processing complexity. The left anterior middle temporal gyrus (LaMTG) and left inferior parietal lobe (LIPL) were also significant, although less specifically. The LaMTG was also increasingly active for sentences with increasing right-branching processing complexity. A direct comparison between left- and right-branching processing complexity yielded activity in an LIFG ROI for the left > right-branching contrast, while the right > left contrast showed no activation. Using a linear contrast testing for increases in the left-branching complexity effect over the sentence, we found significant activity in LIFG and LpMTG.
In other words, the activity in these regions increased from sentence onset to end, in parallel with the increase of the left-branching complexity measure. No similar increase was observed in LIPL. Thus, the functional segregation observed during sentence processing between LaMTG and LIPL vs. LIFG and LpMTG is consistent with their differential sensitivity to left- vs. right-branching structure. While LIFG, LpMTG, LaMTG, and LIPL all contribute to supramodal unification processes, the results suggest that these regions differ in their respective contributions to the subprocesses of unification. Our results speak to the high processing costs of (1) simultaneous unification and (2) maintenance of constituents that are not yet attached to the already unified part of the sentence. Sentences with high left- (compared to right-) branching complexity impose an added load on unification. We show that this added load leads to an increased BOLD response in left perisylvian regions. The results are relevant for understanding the neural underpinnings of the processing difficulty linked to multiple, overlapping non-local dependencies. In conclusion, we used the left- and right-branching complexity measures to index this processing difficulty and showed that the unification network operates with similar spatiotemporal dynamics over the course of the sentence, during unification of both written and spoken sentences.
  • Uhlmann, M., Tsoukala, C., Van den Broek, D., Fitz, H., & Petersson, K. M. (2016). Dealing with the problem of two: Temporal binding in sentence understanding with neural networks. Poster presented at the Language in Interaction Summerschool on Human Language: From Genes and Brains to Behavior, Berg en Dal, The Netherlands.
  • Van den Broek, D., Uhlmann, M., Fitz, H., Hagoort, P., & Petersson, K. M. (2016). Spiking neural networks for semantic processing. Poster presented at the Language in Interaction Summerschool on Human Language: From Genes and Brains to Behavior, Berg en Dal, The Netherlands.
  • Fitz, H., Hagoort, P., & Petersson, K. M. (2014). A spiking recurrent neural network for semantic processing. Poster presented at the Sixth Annual Meeting of the Society for the Neurobiology of Language (SNL 2014), Amsterdam, the Netherlands.

    Abstract

    Sentence processing requires the ability to establish thematic relations between constituents. Here we investigate the computational basis of this ability in a neurobiologically motivated comprehension model. The model has a tripartite architecture where input representations are supplied by the mental lexicon to a network that performs incremental thematic role assignment. Roles are combined into a representation of sentence-level meaning by a downstream system (semantic unification). Recurrent, sparsely connected, spiking networks were used which project a time-varying input signal (word sequences) into a high-dimensional, spatio-temporal pattern of activations. Local, adaptive linear read-out units were then calibrated to map the internal dynamics to desired output (thematic role sequences) [1]. Read-outs were adjusted on network dynamics driven by input sequences drawn from argument-structure templates with small variation in function words and larger variation in content words. Models were trained on sequences of 10K words for 200ms per word at a 1ms resolution, and tested on novel items generated from the language. We found that a static, random recurrent spiking network outperformed models that used only local word information without context. To improve performance, we explored various ways of increasing the model's processing memory (e.g., network size, time constants, sparseness, input strength, etc.) and employed spiking neurons with more dynamic variables (leaky integrate-and-fire versus Izhikevich neurons). The largest gain was observed when the model's input history was extended to include previous words and/or roles. Model behavior was also compared for localist and distributed encodings of word sequences. The latter were obtained by compressing lexical co-occurrence statistics into continuous-valued vectors [2].
We found that performance for localist input was superior even though distributed representations contained extra information about word context and semantic similarity. Finally, we compared models that received input enriched with combinations of semantic features, word-category, and verb sub-categorization labels. Counter-intuitively, we found that adding this information to the model's lexical input did not further improve performance. Consistent with previous results, however, performance improved for increased variability in content words [3]. This indicates that the approach to comprehension taken here might scale to more diverse and naturalistic language input. Overall, the results suggest that active processing memory beyond pure state-dependent effects is important for sentence interpretation, and that memory in neurobiological systems might be actively computing [4]. Future work therefore needs to address how the structure of word representations interacts with enhanced processing memory in adaptive spiking networks. [1] Maass, W., Natschläger, T., & Markram, H. (2002). Real-time computing without stable states: A new framework for neural computation based on perturbations. Neural Computation, 14: 2531-2560. [2] Mikolov, T., Chen, K., Corrado, G., & Dean, J. (2013). Efficient estimation of word representations in vector space. Proceedings of the International Conference on Learning Representations, Scottsdale/AZ. [3] Fitz, H. (2011). A liquid-state model of variability effects in learning nonadjacent dependencies. Proceedings of the 33rd Annual Conference of the Cognitive Science Society, Austin/TX. [4] Petersson, K. M., & Hagoort, P. (2012). The neurobiology of syntax: Beyond string-sets. Philosophical Transactions of the Royal Society B, 367: 1971-1983.
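The localist vs. distributed contrast in this abstract can be illustrated with a toy example. This is not the authors' pipeline: a truncated SVD of raw co-occurrence counts stands in for the word2vec-style compression of [2], and the ten-token corpus is invented.

```python
import numpy as np

# Invented ten-token toy corpus, not the actual model input.
corpus = "the dog chased the cat the cat saw the dog".split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}
V = len(vocab)

# Localist coding: one dimension per word; all words are equidistant.
localist = np.eye(V)

# Count co-occurrences within a +/- 1-word window.
C = np.zeros((V, V))
for i, w in enumerate(corpus):
    for j in (i - 1, i + 1):
        if 0 <= j < len(corpus):
            C[idx[w], idx[corpus[j]]] += 1

# Distributed coding: compress the counts to 2 dimensions via SVD.
U, s, _ = np.linalg.svd(C)
distributed = U[:, :2] * s[:2]

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# 'dog' and 'cat' occur in similar contexts, so their distributed
# vectors are similar, while their localist vectors are orthogonal.
sim_dist = cos(distributed[idx["dog"]], distributed[idx["cat"]])
sim_loc = cos(localist[idx["dog"]], localist[idx["cat"]])
```

The similarity structure that the distributed code carries (and the localist code lacks) is precisely the "extra information about word context and semantic similarity" that, counter-intuitively, did not help the model.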
  • Folia, V., Hagoort, P., & Petersson, K. M. (2014). An FMRI study of the interaction between sentence-level syntax and semantics during language comprehension. Poster presented at the Sixth Annual Meeting of the Society for the Neurobiology of Language (SNL 2014), Amsterdam, the Netherlands.

    Abstract

    Hagoort [1] suggested that the posterior temporal cortex is involved in the retrieval of lexical frames that form building blocks for syntactic unification, supported by the inferior frontal gyrus (IFG). FMRI results support the role of the IFG in the unification operations that are performed at the structural/syntactic [2] and conceptual/semantic [3] levels. While these studies tackle the unification operations within linguistic components, in the present event-related FMRI study we investigated the interplay between sentence-level semantics and syntax by adapting an EEG comprehension paradigm [4]. The ERP results showed typical P600 and N400 effects, while their combined effect revealed an interaction expressed in the N400 component ([CB-SE] - [SY-CR] > 0). Although the N400 component was similar in the correct and syntactic conditions (SY ≈ CR), the combined effect was significantly larger than the effect of semantic anomaly alone. In contrast, the size of the P600 effect was not affected by an additional semantic violation, suggesting an asymmetry between semantic and syntactic processing. In the current FMRI study we characterize this asymmetry by means of a 2x2 experimental design that included the conditions: correct (CR), syntactic (SY), semantic (SE), and combined (CB) anomalies. Standard SPM procedures were used for analysis and only clusters significant at P < .05, family-wise error corrected, are reported. The main effect of semantic anomaly ([CB+SE] > [SY+CR]) yielded activation in the anterior IFG (BA 45/47). The opposite contrast revealed the theory-of-mind and default-mode network. The main effect of syntactically correct sentences ([SE+CR] > [CB+SY]) showed significant activation in the IFG (BA 44/45), including the mid-anterior insula extending into the superior temporal poles (BA 22/38).
In addition, significant effects were observed in medial prefrontal/anterior cingulate cortex, posterior middle and superior temporal regions (BA 21/22), and the basal ganglia. The reverse contrast yielded activations in the MFG (BA 9/46), the inferior parietal region (BA 39/40), the precuneus, and the posterior cingulate region. The only region that showed a significant interaction ([CB-SE] - [SY-CR] > 0) was the left temporo-parietal region (BA 22/39/40). In summary, the results show that the IFG is involved in unification during comprehension. The effect of semantic anomaly and its implied unification load engages the anterior IFG, while the effect of syntactic anomaly and its implied unification failure engages the MFG. Finally, the results suggest that the syntax of gender agreement interacts with sentence-level semantics in the left temporo-parietal region. [1] Hagoort, P. (2005). On Broca, brain, and binding: A new framework. TICS, 9, 416-423. [2] Snijders, T. M., Vosse, T., Kempen, G., Van Berkum, J. J. A., Petersson, K. M., & Hagoort, P. (2009). Retrieval and unification of syntactic structure in sentence comprehension: An fMRI study using word-category ambiguity. Cerebral Cortex, 19, 1493-1503. doi:10.1093/cercor/bhn187. [3] Hagoort, P., Hald, L., Bastiaansen, M., & Petersson, K. M. (2004). Integration of word meaning and world knowledge in language comprehension. Science, 304, 438-441. [4] Hagoort, P. (2003). Interplay between syntax and semantics during sentence comprehension: ERP effects of combining syntactic and semantic violations. Journal of Cognitive Neuroscience, 15, 883-899.
  • Fonteijn, H. M., Acheson, D. J., Petersson, K. M., Segaert, K., Snijders, T. M., Udden, J., Willems, R. M., & Hagoort, P. (2014). Overlap and segregation in activation for syntax and semantics: a meta-analysis of 13 fMRI studies. Poster presented at the Sixth Annual Meeting of the Society for the Neurobiology of Language (SNL 2014), Amsterdam.
  • Petersson, K. M., Folia, V., Sousa, A.-C., & Hagoort, P. (2014). Implicit structured sequence learning: An EEG study of the structural mere-exposure effect. Poster presented at the Sixth Annual Meeting of the Society for the Neurobiology of Language (SNL 2014), Amsterdam.
  • Udden, J., Hulten, A., Fonteijn, H. M., Petersson, K. M., & Hagoort, P. (2014). The middle temporal and inferior parietal cortex contributions to inferior frontal unification across complex sentences. Poster presented at the Sixth Annual Meeting of the Society for the Neurobiology of Language (SNL 2014), Amsterdam.