Publications

  • Quaresima, A., Fitz, H., Duarte, R., Van den Broek, D., Hagoort, P., & Petersson, K. M. (2023). The Tripod neuron: A minimal structural reduction of the dendritic tree. The Journal of Physiology, 601(15), 3007-3437. doi:10.1113/JP283399.

    Abstract

    Neuron models with explicit dendritic dynamics have shed light on mechanisms for coincidence detection, pathway selection and temporal filtering. However, it is still unclear which morphological and physiological features are required to capture these phenomena. In this work, we introduce the Tripod neuron model and propose a minimal structural reduction of the dendritic tree that is able to reproduce these computations. The Tripod is a three-compartment model consisting of two segregated passive dendrites and a somatic compartment modelled as an adaptive, exponential integrate-and-fire neuron. It incorporates dendritic geometry, membrane physiology and receptor dynamics as measured in human pyramidal cells. We characterize the response of the Tripod to glutamatergic and GABAergic inputs and identify parameters that support supra-linear integration, coincidence-detection and pathway-specific gating through shunting inhibition. Following NMDA spikes, the Tripod neuron generates plateau potentials whose duration depends on the dendritic length and the strength of synaptic input. When fitted with distal compartments, the Tripod encodes previous activity into a dendritic depolarized state. This dendritic memory allows the neuron to perform temporal binding, and we show that it solves transition and sequence detection tasks on which a single-compartment model fails. Thus, the Tripod can account for dendritic computations previously explained only with more detailed neuron models or neural networks. Due to its simplicity, the Tripod neuron can be used efficiently in simulations of larger cortical circuits.
  • Silva, S., Inácio, F., Rocha e Sousa, D., Gaspar, N., Folia, V., & Petersson, K. M. (2023). Formal language hierarchy reflects different levels of cognitive complexity. Journal of Experimental Psychology: Learning, Memory, and Cognition, 49(4), 642-660. doi:10.1037/xlm0001182.

    Abstract

    Formal language hierarchy describes levels of increasing syntactic complexity (adjacent dependencies, nonadjacent nested, nonadjacent crossed), whose translation into a hierarchy of cognitive complexity remains under debate. The cognitive foundations of formal language hierarchy have been contradicted by two types of evidence: First, adjacent dependencies are not easier to learn than nonadjacent ones; second, crossed nonadjacent dependencies may be easier than nested ones. However, studies providing these findings may have involved confounds: Repetition-monitoring strategies may have accounted for participants’ high performance on nonadjacent dependencies, and linguistic experience may have accounted for the advantage of crossed dependencies. We conducted two artificial grammar learning experiments in which we addressed these confounds by manipulating reliance on repetition monitoring and by testing participants inexperienced with crossed dependencies. Results showed relevant differences in learning adjacent versus nonadjacent dependencies and advantages of nested over crossed dependencies, suggesting that formal language hierarchy may indeed translate into a hierarchy of cognitive complexity.
  • Andics, A., McQueen, J. M., & Petersson, K. M. (2013). Mean-based neural coding of voices. NeuroImage, 79, 351-360. doi:10.1016/j.neuroimage.2013.05.002.

    Abstract

    The social significance of recognizing the person who talks to us is obvious, but the neural mechanisms that mediate talker identification are unclear. Regions along the bilateral superior temporal sulcus (STS) and the inferior frontal cortex (IFC) of the human brain are selective for voices, and they are sensitive to rapid voice changes. Although it has been proposed that voice recognition is supported by prototype-centered voice representations, the involvement of these category-selective cortical regions in the neural coding of such "mean voices" has not previously been demonstrated. Using fMRI in combination with a voice identity learning paradigm, we show that voice-selective regions are involved in the mean-based coding of voice identities. Voice typicality is encoded on a supra-individual level in the right STS along a stimulus-dependent, identity-independent (i.e., voice-acoustic) dimension, and on an intra-individual level in the right IFC along a stimulus-independent, identity-dependent (i.e., voice identity) dimension. Voice recognition therefore entails at least two anatomically separable stages, each characterized by neural mechanisms that reference the central tendencies of voice categories.
  • Kristensen, L. B., Wang, L., Petersson, K. M., & Hagoort, P. (2013). The interface between language and attention: Prosodic focus marking recruits a general attention network in spoken language comprehension. Cerebral Cortex, 23, 1836-1848. doi:10.1093/cercor/bhs164.

    Abstract

    In spoken language, pitch accent can mark certain information as focus, whereby more attentional resources are allocated to the focused information. Using functional magnetic resonance imaging, this study examined whether pitch accent, used for marking focus, recruited general attention networks during sentence comprehension. In a language task, we independently manipulated the prosody and semantic/pragmatic congruence of sentences. We found that semantic/pragmatic processing affected bilateral inferior and middle frontal gyrus. The prosody manipulation showed bilateral involvement of the superior/inferior parietal cortex, superior and middle temporal cortex, as well as inferior, middle, and posterior parts of the frontal cortex. We compared these regions with attention networks localized in an auditory spatial attention task. Both tasks activated bilateral superior/inferior parietal cortex, superior temporal cortex, and left precentral cortex. Furthermore, an interaction between prosody and congruence was observed in bilateral inferior parietal regions: for incongruent sentences, but not for congruent ones, there was a larger activation if the incongruent word carried a pitch accent, than if it did not. The common activations between the language task and the spatial attention task demonstrate that pitch accent activates a domain general attention network, which is sensitive to semantic/pragmatic aspects of language. Therefore, attention and language comprehension are highly interactive.

    Additional information

    Kirstensen_Cer_Cor_Suppl_Mat.doc
  • Nieuwenhuis, I. L., Folia, V., Forkstam, C., Jensen, O., & Petersson, K. M. (2013). Sleep promotes the extraction of grammatical rules. PLoS One, 8(6): e65046. doi:10.1371/journal.pone.0065046.

    Abstract

    Grammar acquisition is a high level cognitive function that requires the extraction of complex rules. While it has been proposed that offline time might benefit this type of rule extraction, this remains to be tested. Here, we addressed this question using an artificial grammar learning paradigm. During a short-term memory cover task, eighty-one human participants were exposed to letter sequences generated according to an unknown artificial grammar. Following a time delay of 15 min, 12 h (wake or sleep) or 24 h, participants classified novel test sequences as Grammatical or Non-Grammatical. Previous behavioral and functional neuroimaging work has shown that classification can be guided by two distinct underlying processes: (1) the holistic abstraction of the underlying grammar rules and (2) the detection of sequence chunks that appear at varying frequencies during exposure. Here, we show that classification performance improved after sleep. Moreover, this improvement was due to an enhancement of rule abstraction, while the effect of chunk frequency was unaltered by sleep. These findings suggest that sleep plays a critical role in extracting complex structure from separate but related items during integrative memory processing. Our findings stress the importance of alternating periods of learning with sleep in settings in which complex information must be acquired.
  • Segaert, K., Kempen, G., Petersson, K. M., & Hagoort, P. (2013). Syntactic priming and the lexical boost effect during sentence production and sentence comprehension: An fMRI study. Brain and Language, 124, 174-183. doi:10.1016/j.bandl.2012.12.003.

    Abstract

    Behavioral syntactic priming effects during sentence comprehension are typically observed only if both the syntactic structure and lexical head are repeated. In contrast, during production syntactic priming occurs with structure repetition alone, but the effect is boosted by repetition of the lexical head. We used fMRI to investigate the neuronal correlates of syntactic priming and lexical boost effects during sentence production and comprehension. The critical measure was the magnitude of fMRI adaptation to repetition of sentences in active or passive voice, with or without verb repetition. In conditions with repeated verbs, we observed adaptation to structure repetition in the left IFG and MTG, for active and passive voice. However, in the absence of repeated verbs, adaptation occurred only for passive sentences. None of the fMRI adaptation effects yielded differential effects for production versus comprehension, suggesting that sentence comprehension and production are subserved by the same neuronal infrastructure for syntactic processing.

    Additional information

    Segaert_Supplementary_data_2013.docx
  • Segaert, K., Weber, K., De Lange, F., Petersson, K. M., & Hagoort, P. (2013). The suppression of repetition enhancement: A review of fMRI studies. Neuropsychologia, 51, 59-66. doi:10.1016/j.neuropsychologia.2012.11.006.

    Abstract

    Repetition suppression in fMRI studies is generally thought to underlie behavioural facilitation effects (i.e., priming) and it is often used to identify the neuronal representations associated with a stimulus. However, this pays little heed to the large number of repetition enhancement effects observed under similar conditions. In this review, we identify several cognitive variables biasing repetition effects in the BOLD response towards enhancement instead of suppression. These variables are stimulus recognition, learning, attention, expectation and explicit memory. We also evaluate which models can account for these repetition effects and come to the conclusion that there is no one single model that is able to embrace all repetition enhancement effects. Accumulation, novel network formation as well as predictive coding models can all explain subsets of repetition enhancement effects.
  • Whitmarsh, S., Uddén, J., Barendregt, H., & Petersson, K. M. (2013). Mindfulness reduces habitual responding based on implicit knowledge: Evidence from artificial grammar learning. Consciousness and Cognition, (3), 833-845. doi:10.1016/j.concog.2013.05.007.

    Abstract

    Participants were unknowingly exposed to complex regularities in a working memory task. The existence of implicit knowledge was subsequently inferred from a preference for stimuli with similar grammatical regularities. Several affective traits have been shown to influence artificial grammar learning (AGL) performance positively, many of which are related to a tendency for automatic responding. We therefore tested whether the mindfulness trait predicted a reduction of grammatically congruent preferences, and used emotional primes to explore the influence of affect. Mindfulness was shown to correlate negatively with grammatically congruent responses. Negative primes were shown to result in faster and more negative evaluations. We conclude that grammatically congruent preference ratings rely on habitual responses, and that our findings provide empirical evidence for the non-reactive disposition of the mindfulness trait.
  • Forkstam, C., & Petersson, K. M. (2005). Towards an explicit account of implicit learning. Current Opinion in Neurology, 18(4), 435-441.

    Abstract

    Purpose of review: The human brain supports acquisition mechanisms that can extract structural regularities implicitly from experience without the induction of an explicit model. Reber defined the process by which an individual comes to respond appropriately to the statistical structure of the input ensemble as implicit learning. He argued that the capacity to generalize to new input is based on the acquisition of abstract representations that reflect underlying structural regularities in the acquisition input. We focus this review of the implicit learning literature on studies published during 2004 and 2005. We will not review studies of repetition priming ('implicit memory'). Instead we focus on two commonly used experimental paradigms: the serial reaction time task and artificial grammar learning. Previous comprehensive reviews can be found in Seger's 1994 article and the Handbook of Implicit Learning. Recent findings: Emerging themes include the interaction between implicit and explicit processes, the role of the medial temporal lobe, developmental aspects of implicit learning, age-dependence, the role of sleep and consolidation. Summary: The attempts to characterize the interaction between implicit and explicit learning are promising although not well understood. The same can be said about the role of sleep and consolidation. Despite the fact that lesion studies have relatively consistently suggested that the medial temporal lobe memory system is not necessary for implicit learning, a number of functional magnetic resonance studies have reported medial temporal lobe activation in implicit learning. This issue merits further research. Finally, the clinical relevance of implicit learning remains to be determined.
  • Forkstam, C., & Petersson, K. M. (2005). Syntactic classification of acquired structural regularities. In B. G. Bara, L. Barsalou, & M. Bucciarelli (Eds.), Proceedings of the 27th Annual Conference of the Cognitive Science Society (pp. 696-701).

    Abstract

    In this paper we investigate the neural correlates of syntactic classification of an acquired grammatical sequence structure in an event-related FMRI study. During acquisition, participants were engaged in an implicit short-term memory task without performance feedback. We manipulated the statistical frequency-based and rule-based characteristics of the classification stimuli independently in order to investigate their role in artificial grammar acquisition. The participants performed reliably above chance on the classification task. We observed a partly overlapping corticostriatal processing network activated by both manipulations including inferior prefrontal, cingulate, inferior parietal regions, and the caudate nucleus. More specifically, the left inferior frontal BA 45 and the caudate nucleus were sensitive to syntactic violations and endorsement, respectively. In contrast, these structures were insensitive to the frequency-based manipulation.
  • Lundstrom, B. N., Ingvar, M., & Petersson, K. M. (2005). The role of precuneus and left inferior frontal cortex during source memory episodic retrieval. NeuroImage, 27, 824-834. doi:10.1016/j.neuroimage.2005.05.008.

    Abstract

    The posterior medial parietal cortex and left prefrontal cortex (PFC) have both been implicated in the recollection of past episodes. In a previous study, we found the posterior precuneus and left lateral inferior frontal cortex to be activated during episodic source memory retrieval. This study further examines the role of posterior precuneal and left prefrontal activation during episodic source memory retrieval using a similar source memory paradigm but with longer latency between encoding and retrieval. Our results suggest that both the precuneus and the left inferior PFC are important for regeneration of rich episodic contextual associations and that the precuneus activates in tandem with the left inferior PFC during correct source retrieval. Further, results suggest that the left ventro-lateral frontal region/frontal operculum is involved in searching for task-relevant information (BA 47) and subsequent monitoring or scrutiny (BA 44/45), while regions in the dorsal inferior frontal cortex are important for information selection (BA 45/46).
  • Petersson, K. M., Grenholm, P., & Forkstam, C. (2005). Artificial grammar learning and neural networks. In B. G. Bara, L. Barsalou, & M. Bucciarelli (Eds.), Proceedings of the 27th Annual Conference of the Cognitive Science Society (pp. 1726-1731).

    Abstract

    Recent FMRI studies indicate that language-related brain regions are engaged in artificial grammar (AG) processing. In the present study we investigate the Reber grammar by means of formal analysis and network simulations. We outline a new method for describing the network dynamics and propose an approach to grammar extraction based on the state-space dynamics of the network. We conclude that statistical frequency-based and rule-based acquisition procedures can be viewed as complementary perspectives on grammar learning, and more generally, that classical cognitive models can be viewed as a special case of a dynamical systems perspective on information processing.
  • Petersson, K. M. (2005). On the relevance of the neurobiological analogue of the finite-state architecture. Neurocomputing, 65-66, 825-832. doi:10.1016/j.neucom.2004.10.108.

    Abstract

    We present two simple arguments for the potential relevance of a neurobiological analogue of the finite-state architecture. The first assumes the classical cognitive framework, is well-known, and is based on the assumption that the brain is finite with respect to its memory organization. The second is formulated within a general dynamical systems framework and is based on the assumption that the brain sustains some level of noise and/or does not utilize infinite precision processing. We briefly review the classical cognitive framework based on Church–Turing computability and non-classical approaches based on analog processing in dynamical systems. We conclude that the dynamical neurobiological analogue of the finite-state architecture appears to be relevant, at least at an implementational level, for cognitive brain systems.
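
Several of the entries above concern artificial grammar learning with finite-state grammars (e.g., Forkstam & Petersson, 2005; Petersson, Grenholm, & Forkstam, 2005), and the last entry argues for the relevance of a finite-state architecture. As a point of reference only, the short Python sketch below generates and classifies symbol strings from a Reber-style finite-state grammar. The transition table is a common textbook rendering of the Reber grammar; it is illustrative and not claimed to be the exact stimulus grammar used in any of the studies listed here.

import random

# state -> list of (emitted symbol, next state); state 5 is the exit state
TRANSITIONS = {
    0: [("T", 1), ("P", 2)],
    1: [("S", 1), ("X", 3)],
    2: [("T", 2), ("V", 4)],
    3: [("X", 2), ("S", 5)],
    4: [("P", 3), ("V", 5)],
}
EXIT = 5

def generate(rng=random):
    """Random walk through the grammar, emitting one symbol per transition."""
    state, symbols = 0, []
    while state != EXIT:
        symbol, state = rng.choice(TRANSITIONS[state])
        symbols.append(symbol)
    return "".join(symbols)

def is_grammatical(sequence):
    """Check a symbol string against the same grammar (it is deterministic)."""
    state = 0
    for symbol in sequence:
        if state == EXIT:
            return False                 # extra symbols after the exit state
        moves = dict(TRANSITIONS[state])
        if symbol not in moves:
            return False                 # illegal transition: a grammar violation
        state = moves[symbol]
    return state == EXIT                 # must end exactly at the exit state

print(generate())                  # e.g. 'TSXS' or 'PTVPXVV'
print(is_grammatical("TSXS"))      # True
print(is_grammatical("TXXS"))      # False: after 'TXX' only 'T' or 'V' may follow

Grammatical test strings are those the walk can produce; non-grammatical strings introduce a transition the automaton does not allow, which is the kind of violation classified in the artificial grammar learning tasks above.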
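
The Quaresima et al. (2023) entry describes the Tripod neuron as an adaptive exponential integrate-and-fire (AdEx) soma coupled to two segregated passive dendritic compartments. The sketch below is a minimal forward-Euler toy version of that three-compartment layout, for orientation only: all parameter values, the axial coupling conductances, and the constant-current drive are generic illustrative assumptions, not the human pyramidal-cell membrane or receptor parameters fitted in the paper (AMPA/NMDA and GABA receptor dynamics are omitted entirely).

from math import exp

# Generic AdEx soma parameters (illustrative textbook values, not fitted).
C_s, g_L, E_L = 281e-12, 30e-9, -70.6e-3   # capacitance (F), leak (S), resting potential (V)
V_T, Delta_T = -50.4e-3, 2e-3              # spike-initiation threshold and slope factor (V)
a, b, tau_w = 4e-9, 80.5e-12, 144e-3       # subthreshold adaptation (S), spike jump (A), time constant (s)
V_reset, V_spike = -70.6e-3, 0.0           # reset potential and numerical spike-detection level (V)

# Generic passive-dendrite parameters (illustrative assumptions).
C_d, g_Ld = 50e-12, 5e-9                   # dendritic capacitance (F) and leak conductance (S)
g_ax = 25e-9                               # axial conductance coupling each dendrite to the soma (S)

def simulate(I_d1, I_d2, dt=1e-4, T=0.5):
    """Forward-Euler integration of an AdEx soma with two passive dendrites.
    I_d1, I_d2: functions of time (s) returning the synaptic current (A)
    injected into the first and second dendritic compartment, respectively."""
    V_s = V_d1 = V_d2 = E_L
    w = 0.0
    spikes = []
    for i in range(int(T / dt)):
        t = i * dt
        # Passive dendrites: leak, axial coupling to the soma, synaptic drive.
        dV_d1 = (-g_Ld * (V_d1 - E_L) - g_ax * (V_d1 - V_s) + I_d1(t)) / C_d
        dV_d2 = (-g_Ld * (V_d2 - E_L) - g_ax * (V_d2 - V_s) + I_d2(t)) / C_d
        # AdEx soma: leak, exponential spike-initiation current, adaptation,
        # plus the axial currents flowing in from both dendrites.
        I_ax = g_ax * (V_d1 - V_s) + g_ax * (V_d2 - V_s)
        dV_s = (-g_L * (V_s - E_L)
                + g_L * Delta_T * exp((V_s - V_T) / Delta_T)
                - w + I_ax) / C_s
        dw = (a * (V_s - E_L) - w) / tau_w
        V_d1 += dt * dV_d1
        V_d2 += dt * dV_d2
        V_s += dt * dV_s
        w += dt * dw
        if V_s >= V_spike:               # spike: record time, reset, bump adaptation
            spikes.append(t)
            V_s = V_reset
            w += b
    return spikes

# Example: a 1.5 nA step into the first dendrite only, nothing into the second.
print(simulate(lambda t: 1.5e-9, lambda t: 0.0))

Driving a single dendrite, as in the example call, loosely mimics the pathway-specific input scenarios discussed in the abstract; because the dendrites here are purely passive and receptor dynamics are left out, none of the NMDA-dependent plateau potentials or dendritic-memory effects reported in the paper are expected to appear in this sketch.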
