Publications

  • Andics, A., McQueen, J. M., & Petersson, K. M. (2013). Mean-based neural coding of voices. NeuroImage, 79, 351-360. doi:10.1016/j.neuroimage.2013.05.002.

    Abstract

    The social significance of recognizing the person who talks to us is obvious, but the neural mechanisms that mediate talker identification are unclear. Regions along the bilateral superior temporal sulcus (STS) and the inferior frontal cortex (IFC) of the human brain are selective for voices, and they are sensitive to rapid voice changes. Although it has been proposed that voice recognition is supported by prototype-centered voice representations, the involvement of these category-selective cortical regions in the neural coding of such "mean voices" has not previously been demonstrated. Using fMRI in combination with a voice identity learning paradigm, we show that voice-selective regions are involved in the mean-based coding of voice identities. Voice typicality is encoded on a supra-individual level in the right STS along a stimulus-dependent, identity-independent (i.e., voice-acoustic) dimension, and on an intra-individual level in the right IFC along a stimulus-independent, identity-dependent (i.e., voice identity) dimension. Voice recognition therefore entails at least two anatomically separable stages, each characterized by neural mechanisms that reference the central tendencies of voice categories.
  • Kristensen, L. B., Wang, L., Petersson, K. M., & Hagoort, P. (2013). The interface between language and attention: Prosodic focus marking recruits a general attention network in spoken language comprehension. Cerebral Cortex, 23, 1836-1848. doi:10.1093/cercor/bhs164.

    Abstract

    In spoken language, pitch accent can mark certain information as focus, whereby more attentional resources are allocated to the focused information. Using functional magnetic resonance imaging, this study examined whether pitch accent, used for marking focus, recruited general attention networks during sentence comprehension. In a language task, we independently manipulated the prosody and semantic/pragmatic congruence of sentences. We found that semantic/pragmatic processing affected bilateral inferior and middle frontal gyrus. The prosody manipulation showed bilateral involvement of the superior/inferior parietal cortex, superior and middle temporal cortex, as well as inferior, middle, and posterior parts of the frontal cortex. We compared these regions with attention networks localized in an auditory spatial attention task. Both tasks activated bilateral superior/inferior parietal cortex, superior temporal cortex, and left precentral cortex. Furthermore, an interaction between prosody and congruence was observed in bilateral inferior parietal regions: for incongruent sentences, but not for congruent ones, there was larger activation if the incongruent word carried a pitch accent than if it did not. The common activations between the language task and the spatial attention task demonstrate that pitch accent activates a domain-general attention network, which is sensitive to semantic/pragmatic aspects of language. Therefore, attention and language comprehension are highly interactive.

    Additional information

    Kirstensen_Cer_Cor_Suppl_Mat.doc
  • Nieuwenhuis, I. L., Folia, V., Forkstam, C., Jensen, O., & Petersson, K. M. (2013). Sleep promotes the extraction of grammatical rules. PLoS One, 8(6): e65046. doi:10.1371/journal.pone.0065046.

    Abstract

    Grammar acquisition is a high level cognitive function that requires the extraction of complex rules. While it has been proposed that offline time might benefit this type of rule extraction, this remains to be tested. Here, we addressed this question using an artificial grammar learning paradigm. During a short-term memory cover task, eighty-one human participants were exposed to letter sequences generated according to an unknown artificial grammar. Following a time delay of 15 min, 12 h (wake or sleep) or 24 h, participants classified novel test sequences as Grammatical or Non-Grammatical. Previous behavioral and functional neuroimaging work has shown that classification can be guided by two distinct underlying processes: (1) the holistic abstraction of the underlying grammar rules and (2) the detection of sequence chunks that appear at varying frequencies during exposure. Here, we show that classification performance improved after sleep. Moreover, this improvement was due to an enhancement of rule abstraction, while the effect of chunk frequency was unaltered by sleep. These findings suggest that sleep plays a critical role in extracting complex structure from separate but related items during integrative memory processing. Our findings stress the importance of alternating periods of learning with sleep in settings in which complex information must be acquired.
  • Segaert, K., Kempen, G., Petersson, K. M., & Hagoort, P. (2013). Syntactic priming and the lexical boost effect during sentence production and sentence comprehension: An fMRI study. Brain and Language, 124, 174-183. doi:10.1016/j.bandl.2012.12.003.

    Abstract

    Behavioral syntactic priming effects during sentence comprehension are typically observed only if both the syntactic structure and lexical head are repeated. In contrast, during production syntactic priming occurs with structure repetition alone, but the effect is boosted by repetition of the lexical head. We used fMRI to investigate the neuronal correlates of syntactic priming and lexical boost effects during sentence production and comprehension. The critical measure was the magnitude of fMRI adaptation to repetition of sentences in active or passive voice, with or without verb repetition. In conditions with repeated verbs, we observed adaptation to structure repetition in the left IFG and MTG, for active and passive voice. However, in the absence of repeated verbs, adaptation occurred only for passive sentences. None of the fMRI adaptation effects yielded differential effects for production versus comprehension, suggesting that sentence comprehension and production are subserved by the same neuronal infrastructure for syntactic processing.

    Additional information

    Segaert_Supplementary_data_2013.docx
  • Segaert, K., Weber, K., De Lange, F., Petersson, K. M., & Hagoort, P. (2013). The suppression of repetition enhancement: A review of fMRI studies. Neuropsychologia, 51, 59-66. doi:10.1016/j.neuropsychologia.2012.11.006.

    Abstract

    Repetition suppression in fMRI studies is generally thought to underlie behavioural facilitation effects (i.e., priming) and it is often used to identify the neuronal representations associated with a stimulus. However, this pays little heed to the large number of repetition enhancement effects observed under similar conditions. In this review, we identify several cognitive variables biasing repetition effects in the BOLD response towards enhancement instead of suppression. These variables are stimulus recognition, learning, attention, expectation and explicit memory. We also evaluate which models can account for these repetition effects and come to the conclusion that there is no one single model that is able to embrace all repetition enhancement effects. Accumulation, novel network formation as well as predictive coding models can all explain subsets of repetition enhancement effects.
  • Whitmarsh, S., Udden, J., Barendregt, H., & Petersson, K. M. (2013). Mindfulness reduces habitual responding based on implicit knowledge: Evidence from artificial grammar learning. Consciousness and Cognition, (3), 833-845. doi:10.1016/j.concog.2013.05.007.

    Abstract

    Participants were unknowingly exposed to complex regularities in a working memory task. The existence of implicit knowledge was subsequently inferred from a preference for stimuli with similar grammatical regularities. Several affective traits have been shown to influence artificial grammar learning (AGL) performance positively, many of which are related to a tendency for automatic responding. We therefore tested whether the mindfulness trait predicted a reduction of grammatically congruent preferences, and used emotional primes to explore the influence of affect. Mindfulness was shown to correlate negatively with grammatically congruent responses. Negative primes were shown to result in faster and more negative evaluations. We conclude that grammatically congruent preference ratings rely on habitual responses, and that our findings provide empirical evidence for the non-reactive disposition of the mindfulness trait.
  • Araújo, S., Bramão, I., Faísca, L., Petersson, K. M., & Reis, A. (2012). Electrophysiological correlates of impaired reading in dyslexic pre-adolescent children. Brain and Cognition, 79, 79-88. doi:10.1016/j.bandc.2012.02.010.

    Abstract

    In this study, event related potentials (ERPs) were used to investigate the extent to which dyslexics (aged 9–13 years) differ from normally reading controls in early ERPs, which reflect prelexical orthographic processing, and in late ERPs, which reflect implicit phonological processing. The participants performed an implicit reading task, which was manipulated in terms of letter-specific processing, orthographic familiarity, and phonological structure. Comparing consonant and symbol sequences, the results showed significant differences in the P1 and N1 waveforms in the control but not in the dyslexic group. The reduced P1 and N1 effects in pre-adolescent children with dyslexia suggest a lack of visual specialization for letter-processing. The P1 and N1 components were not sensitive to the familiar vs. less familiar orthographic sequence contrast. The amplitude of the later N320 component was larger for phonologically legal (pseudowords) compared to illegal (consonant sequences) items in both controls and dyslexics. However, the topographic differences showed that the controls were more left-lateralized than the dyslexics. We suggest that the development of the mechanisms that support literacy skills in dyslexics is delayed and follows a non-normal developmental path. This contributes to the hemispheric differences observed and might reflect a compensatory mechanism in dyslexics.
  • Bramão, I., Francisco, A., Inácio, F., Faísca, L., Reis, A., & Petersson, K. M. (2012). Electrophysiological evidence for colour effects on the naming of colour diagnostic and noncolour diagnostic objects. Visual Cognition, 20, 1164-1185. doi:10.1080/13506285.2012.739215.

    Abstract

    In this study, we investigated the level of visual processing at which surface colour information improves the naming of colour diagnostic and noncolour diagnostic objects. Continuous electroencephalograms were recorded while participants performed a visual object naming task in which coloured and black-and-white versions of both types of objects were presented. The black-and-white and the colour presentations were compared in two groups of event-related potentials (ERPs): (1) the P1 and N1 components, indexing early visual processing; and (2) the N300 and N400 components, which index late visual processing. A colour effect was observed in the P1 and N1 components, for both colour and noncolour diagnostic objects. In addition, for colour diagnostic objects, a colour effect was observed in the N400 component. These results suggest that colour information is important for the naming of colour and noncolour diagnostic objects at different levels of visual processing. It thus appears that the visual system uses colour information during naming of both object types at early visual stages; however, for colour diagnostic objects, colour information is also recruited during the late visual processing stages.
  • Bramão, I., Faísca, L., Petersson, K. M., & Reis, A. (2012). The contribution of color to object recognition. In I. Kypraios (Ed.), Advances in object recognition systems (pp. 73-88). Rijeka, Croatia: InTech. Retrieved from http://www.intechopen.com/books/advances-in-object-recognition-systems/the-contribution-of-color-in-object-recognition.

    Abstract

    The cognitive processes involved in object recognition remain a mystery to the cognitive sciences. We know that the visual system recognizes objects via multiple features, including shape, color, texture, and motion characteristics. However, the way these features are combined to recognize objects is still an open question. The purpose of this contribution is to review the research about the specific role of color information in object recognition. Given that the human brain incorporates specialized mechanisms to handle color perception in the visual environment, it is a fair question to ask what functional role color might play in everyday vision.
  • Bramão, I., Faísca, L., Forkstam, C., Inácio, F., Araújo, S., Petersson, K. M., & Reis, A. (2012). The interaction between surface color and color knowledge: Behavioral and electrophysiological evidence. Brain and Cognition, 78, 28-37. doi:10.1016/j.bandc.2011.10.004.

    Abstract

    In this study, we used event-related potentials (ERPs) to evaluate the contribution of surface color and color knowledge information in object identification. We constructed two color-object verification tasks – a surface and a knowledge verification task – using high color diagnostic objects; both typical and atypical color versions of the same object were presented. Continuous electroencephalogram was recorded from 26 subjects. A cluster randomization procedure was used to explore the differences between typical and atypical color objects in each task. In the color knowledge task, we found two significant clusters that were consistent with the N350 and late positive complex (LPC) effects. Atypical color objects elicited more negative ERPs compared to typical color objects. The color effect found in the N350 time window suggests that surface color is an important cue that facilitates the selection of a stored object representation from long-term memory. Moreover, the observed LPC effect suggests that surface color activates associated semantic knowledge about the object, including color knowledge representations. We did not find any significant differences between typical and atypical color objects in the surface color verification task, which indicates that there is little contribution of color knowledge to resolve the surface color verification. Our main results suggest that surface color is an important visual cue that triggers color knowledge, thereby facilitating object identification.
  • Menenti, L., Petersson, K. M., & Hagoort, P. (2012). From reference to sense: How the brain encodes meaning for speaking. Frontiers in Psychology, 2, 384. doi:10.3389/fpsyg.2011.00384.

    Abstract

    In speaking, semantic encoding is the conversion of a non-verbal mental representation (the reference) into a semantic structure suitable for expression (the sense). In this fMRI study on sentence production we investigate how the speaking brain accomplishes this transition from non-verbal to verbal representations. In an overt picture description task, we manipulated repetition of sense (the semantic structure of the sentence) and reference (the described situation) separately. By investigating brain areas showing response adaptation to repetition of each of these sentence properties, we disentangle the neuronal infrastructure for these two components of semantic encoding. We also performed a control experiment with the same stimuli and design but without any linguistic task to identify areas involved in perception of the stimuli per se. The bilateral inferior parietal lobes were selectively sensitive to repetition of reference, while left inferior frontal gyrus showed selective suppression to repetition of sense. Strikingly, a widespread network of areas associated with language processing (left middle frontal gyrus, bilateral superior parietal lobes and bilateral posterior temporal gyri) all showed repetition suppression to both sense and reference processing. These areas are probably involved in mapping reference onto sense, the crucial step in semantic encoding. These results enable us to track the transition from non-verbal to verbal representations in our brains.
  • Petersson, K. M., & Hagoort, P. (2012). The neurobiology of syntax: Beyond string-sets [Review article]. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 367, 1971-1983. doi:10.1098/rstb.2012.0101.

    Abstract

    The human capacity to acquire language is an outstanding scientific challenge to understand. Somehow our language capacities arise from the way the human brain processes, develops and learns in interaction with its environment. To set the stage, we begin with a summary of what is known about the neural organization of language and what our artificial grammar learning (AGL) studies have revealed. We then review the Chomsky hierarchy in the context of the theory of computation and formal learning theory. Finally, we outline a neurobiological model of language acquisition and processing based on an adaptive, recurrent, spiking network architecture. This architecture implements an asynchronous, event-driven, parallel system for recursive processing. We conclude that the brain represents grammars (or more precisely, the parser/generator) in its connectivity, and its ability for syntax is based on neurobiological infrastructure for structured sequence processing. The acquisition of this ability is accounted for in an adaptive dynamical systems framework. Artificial language learning (ALL) paradigms might be used to study the acquisition process within such a framework, as well as the processing properties of the underlying neurobiological infrastructure. However, it is necessary to combine and constrain the interpretation of ALL results by theoretical models and empirical studies on natural language processing. Given that the faculty of language is captured by classical computational models to a significant extent, and that these can be embedded in dynamic network architectures, there is hope that significant progress can be made in understanding the neurobiology of the language faculty.
  • Petersson, K. M., Folia, V., & Hagoort, P. (2012). What artificial grammar learning reveals about the neurobiology of syntax. Brain and Language, 120, 83-95. doi:10.1016/j.bandl.2010.08.003.

    Abstract

    In this paper we examine the neurobiological correlates of syntax, the processing of structured sequences, by comparing FMRI results on artificial and natural language syntax. We discuss these and similar findings in the context of formal language and computability theory. We used a simple right-linear unification grammar in an implicit artificial grammar learning paradigm in 32 healthy Dutch university students (natural language FMRI data were already acquired for these participants). We predicted that artificial syntax processing would engage the left inferior frontal region (BA 44/45) and that this activation would overlap with syntax-related variability observed in the natural language experiment. The main findings of this study show that the left inferior frontal region centered on BA 44/45 is active during artificial syntax processing of well-formed (grammatical) sequences, independent of local subsequence familiarity. The same region is engaged to a greater extent when a syntactic violation is present and structural unification becomes difficult or impossible. The effects related to artificial syntax in the left inferior frontal region (BA 44/45) were essentially identical when we masked these with activity related to natural syntax in the same subjects. Finally, the medial temporal lobe was deactivated during this operation, consistent with the view that implicit processing does not rely on declarative memory mechanisms that engage the medial temporal lobe. In the discussion section, in the context of recent FMRI findings, we raise the question whether Broca’s region (or subregions) is specifically related to syntactic movement operations or the processing of hierarchically nested non-adjacent dependencies. We conclude that this is not the case. Instead, we argue that the left inferior frontal region is a generic on-line sequence processor that unifies information from various sources in an incremental and recursive manner, independent of whether there are any processing requirements related to syntactic movement or hierarchically nested structures. In addition, we argue that the Chomsky hierarchy is not directly relevant for neurobiological systems.
  • Scheeringa, R., Petersson, K. M., Kleinschmidt, A., Jensen, O., & Bastiaansen, M. C. M. (2012). EEG alpha power modulation of fMRI resting state connectivity. Brain Connectivity, 2, 254-264. doi:10.1089/brain.2012.0088.

    Abstract

    In the past decade, the fast and transient coupling and uncoupling of functionally related brain regions into networks has received much attention in cognitive neuroscience. Empirical tools to study network coupling include fMRI-based functional and/or effective connectivity, and EEG/MEG-based measures of neuronal synchronization. Here we use simultaneously recorded EEG and fMRI to assess whether fMRI-based BOLD connectivity and frequency-specific EEG power are related. Using data collected during resting state, we studied whether posterior EEG alpha power fluctuations are correlated with connectivity within the visual network and between visual cortex and the rest of the brain. The results show that when alpha power increases, BOLD connectivity between primary visual cortex and occipital brain regions decreases, and the negative relation of the visual cortex with the anterior/medial thalamus and ventral-medial prefrontal cortex is reduced in strength. These effects were specific to the alpha band and not observed in other frequency bands. Decreased connectivity within the visual system may indicate enhanced functional inhibition during higher alpha activity. This higher inhibition level also attenuates long-range intrinsic functional antagonism between visual cortex and other thalamic and cortical regions. Together, these results illustrate that power fluctuations in posterior alpha oscillations result in local and long-range neural connectivity changes.
  • Segaert, K., Menenti, L., Weber, K., Petersson, K. M., & Hagoort, P. (2012). Shared syntax in language production and language comprehension — An fMRI study. Cerebral Cortex, 22, 1662-1670. doi:10.1093/cercor/bhr249.

    Abstract

    During speaking and listening, syntactic processing is a crucial step. It involves specifying syntactic relations between words in a sentence. If the production and comprehension modalities share the neuronal substrate for syntactic processing, then processing syntax in one modality should lead to adaptation effects in the other modality. In the present functional magnetic resonance imaging experiment, participants either overtly produced or heard descriptions of pictures. We looked for brain regions showing adaptation effects to the repetition of syntactic structures. In order to ensure that not just the same brain regions but also the same neuronal populations within these regions are involved in syntactic processing in speaking and listening, we compared syntactic adaptation effects within processing modalities (syntactic production-to-production and comprehension-to-comprehension priming) with syntactic adaptation effects between processing modalities (syntactic comprehension-to-production and production-to-comprehension priming). We found syntactic adaptation effects in left inferior frontal gyrus (Brodmann's area [BA] 45), left middle temporal gyrus (BA 21), and bilateral supplementary motor area (BA 6), which were equally strong within and between processing modalities. Thus, syntactic repetition facilitates syntactic processing in the brain within and across processing modalities to the same extent. We conclude that the same neurobiological system seems to subserve syntactic processing in speaking and listening.
  • Silva, C., Faísca, L., Ingvar, M., Petersson, K. M., & Reis, A. (2012). Literacy: Exploring working memory systems. Journal of Clinical and Experimental Neuropsychology, 34(4), 369-377. doi:10.1080/13803395.2011.645017.

    Abstract

    Previous research showed an important association between reading and writing skills (literacy) and the phonological loop. However, the effects of literacy on other working memory components remain unclear. In this study, we investigated performance of illiterate subjects and their matched literate controls on verbal and nonverbal working memory tasks. Results revealed that the phonological loop is significantly influenced by literacy, while the visuospatial sketchpad appears to be less affected or not at all. Results also suggest that the central executive might be influenced by literacy, possibly as an expression of cognitive reserve.

  • Udden, J., Ingvar, M., Hagoort, P., & Petersson, K. M. (2012). Implicit acquisition of grammars with crossed and nested non-adjacent dependencies: Investigating the push-down stack model. Cognitive Science, 36, 1078-1101. doi:10.1111/j.1551-6709.2012.01235.x.

    Abstract

    A recent hypothesis in empirical brain research on language is that the fundamental difference between animal and human communication systems is captured by the distinction between finite-state and more complex phrase-structure grammars, such as context-free and context-sensitive grammars. However, the relevance of this distinction for the study of language as a neurobiological system has been questioned and it has been suggested that a more relevant and partly analogous distinction is that between non-adjacent and adjacent dependencies. Online memory resources are central to the processing of non-adjacent dependencies as information has to be maintained across intervening material. One proposal is that an external memory device in the form of a limited push-down stack is used to process non-adjacent dependencies. We tested this hypothesis in an artificial grammar learning paradigm where subjects acquired non-adjacent dependencies implicitly. Generally, we found no qualitative differences between the acquisition of non-adjacent dependencies and adjacent dependencies. This suggests that although the acquisition of non-adjacent dependencies requires more exposure to the acquisition material, it utilizes the same mechanisms used for acquiring adjacent dependencies. We challenge the push-down stack model further by testing its processing predictions for nested and crossed multiple non-adjacent dependencies. The push-down stack model is partly supported by the results, and we suggest that stack-like properties are some among many natural properties characterizing the underlying neurophysiological mechanisms that implement the online memory resources used in language and structured sequence processing.
  • De Vries, M. H., Petersson, K. M., Geukes, S., Zwitserlood, P., & Christiansen, M. H. (2012). Processing multiple non-adjacent dependencies: Evidence from sequence learning. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 367, 2065-2076. doi:10.1098/rstb.2011.0414.

    Abstract

    Processing non-adjacent dependencies is considered to be one of the hallmarks of human language. Assuming that sequence-learning tasks provide a useful way to tap natural-language-processing mechanisms, we cross-modally combined serial reaction time and artificial-grammar learning paradigms to investigate the processing of multiple nested (A1A2A3B3B2B1) and crossed dependencies (A1A2A3B1B2B3), containing either three or two dependencies. Both reaction times and prediction errors highlighted problems with processing the middle dependency in nested structures (A1A2A3B3_B1), reminiscent of the ‘missing-verb effect’ observed in English and French, but not with crossed structures (A1A2A3B1_B3). Prior linguistic experience did not play a major role: native speakers of German and Dutch—which permit nested and crossed dependencies, respectively—showed a similar pattern of results for sequences with three dependencies. As for sequences with two dependencies, reaction times and prediction errors were similar for both nested and crossed dependencies. The results suggest that constraints on the processing of multiple non-adjacent dependencies are determined by the specific ordering of the non-adjacent dependencies (i.e. nested or crossed), as well as the number of non-adjacent dependencies to be resolved (i.e. two or three). Furthermore, these constraints may not be specific to language but instead derive from limitations on structured sequence learning.
  • Forkstam, C., & Petersson, K. M. (2005). Towards an explicit account of implicit learning. Current Opinion in Neurology, 18(4), 435-441.

    Abstract

    Purpose of review: The human brain supports acquisition mechanisms that can extract structural regularities implicitly from experience without the induction of an explicit model. Reber defined the process by which an individual comes to respond appropriately to the statistical structure of the input ensemble as implicit learning. He argued that the capacity to generalize to new input is based on the acquisition of abstract representations that reflect underlying structural regularities in the acquisition input. We focus this review of the implicit learning literature on studies published during 2004 and 2005. We will not review studies of repetition priming ('implicit memory'). Instead we focus on two commonly used experimental paradigms: the serial reaction time task and artificial grammar learning. Previous comprehensive reviews can be found in Seger's 1994 article and the Handbook of Implicit Learning. Recent findings: Emerging themes include the interaction between implicit and explicit processes, the role of the medial temporal lobe, developmental aspects of implicit learning, age-dependence, and the role of sleep and consolidation. Summary: The attempts to characterize the interaction between implicit and explicit learning are promising although not well understood. The same can be said about the role of sleep and consolidation. Despite the fact that lesion studies have relatively consistently suggested that the medial temporal lobe memory system is not necessary for implicit learning, a number of functional magnetic resonance studies have reported medial temporal lobe activation in implicit learning. This issue merits further research. Finally, the clinical relevance of implicit learning remains to be determined.
  • Forkstam, C., & Petersson, K. M. (2005). Syntactic classification of acquired structural regularities. In B. G. Bara & L. Barsalou (Eds.), Proceedings of the 27th Annual Conference of the Cognitive Science Society (pp. 696-701).

    Abstract

    In this paper we investigate the neural correlates of syntactic classification of an acquired grammatical sequence structure in an event-related FMRI study. During acquisition, participants were engaged in an implicit short-term memory task without performance feedback. We manipulated the statistical frequency-based and rule-based characteristics of the classification stimuli independently in order to investigate their role in artificial grammar acquisition. The participants performed reliably above chance on the classification task. We observed a partly overlapping corticostriatal processing network activated by both manipulations including inferior prefrontal, cingulate, inferior parietal regions, and the caudate nucleus. More specifically, the left inferior frontal BA 45 and the caudate nucleus were sensitive to syntactic violations and endorsement, respectively. In contrast, these structures were insensitive to the frequency-based manipulation.
  • Lundstrom, B. N., Ingvar, M., & Petersson, K. M. (2005). The role of precuneus and left inferior frontal cortex during source memory episodic retrieval. NeuroImage, 27, 824-834. doi:10.1016/j.neuroimage.2005.05.008.

    Abstract

    The posterior medial parietal cortex and left prefrontal cortex (PFC) have both been implicated in the recollection of past episodes. In a previous study, we found the posterior precuneus and left lateral inferior frontal cortex to be activated during episodic source memory retrieval. This study further examines the role of posterior precuneal and left prefrontal activation during episodic source memory retrieval using a similar source memory paradigm but with a longer latency between encoding and retrieval. Our results suggest that both the precuneus and the left inferior PFC are important for the regeneration of rich episodic contextual associations and that the precuneus activates in tandem with the left inferior PFC during correct source retrieval. Further, the results suggest that the left ventro-lateral frontal region/frontal operculum is involved in searching for task-relevant information (BA 47) and subsequent monitoring or scrutiny (BA 44/45), while regions in the dorsal inferior frontal cortex are important for information selection (BA 45/46).
  • Petersson, K. M., Grenholm, P., & Forkstam, C. (2005). Artificial grammar learning and neural networks. In B. G. Bara, L. Barsalou, & M. Bucciarelli (Eds.), Proceedings of the 27th Annual Conference of the Cognitive Science Society (pp. 1726-1731).

    Abstract

    Recent FMRI studies indicate that language-related brain regions are engaged in artificial grammar (AG) processing. In the present study we investigate the Reber grammar by means of formal analysis and network simulations. We outline a new method for describing the network dynamics and propose an approach to grammar extraction based on the state-space dynamics of the network. We conclude that statistical frequency-based and rule-based acquisition procedures can be viewed as complementary perspectives on grammar learning, and more generally, that classical cognitive models can be viewed as a special case of a dynamical systems perspective on information processing.
  • Petersson, K. M. (2005). On the relevance of the neurobiological analogue of the finite-state architecture. Neurocomputing, 65-66, 825-832. doi:10.1016/j.neucom.2004.10.108.

    Abstract

    We present two simple arguments for the potential relevance of a neurobiological analogue of the finite-state architecture. The first assumes the classical cognitive framework, is well-known, and is based on the assumption that the brain is finite with respect to its memory organization. The second is formulated within a general dynamical systems framework and is based on the assumption that the brain sustains some level of noise and/or does not utilize infinite precision processing. We briefly review the classical cognitive framework based on Church–Turing computability and non-classical approaches based on analog processing in dynamical systems. We conclude that the dynamical neurobiological analogue of the finite-state architecture appears to be relevant, at least at an implementational level, for cognitive brain systems.
