Publications

  • Adank, P., Hagoort, P., & Bekkering, H. (2010). Imitation improves language comprehension. Psychological Science, 21, 1903-1909. doi:10.1177/0956797610389192.

    Abstract

    Humans imitate each other during social interaction. This imitative behavior streamlines social interaction and aids in learning to replicate actions. However, the effect of imitation on action comprehension is unclear. This study investigated whether vocal imitation of an unfamiliar accent improved spoken-language comprehension. Following a pretraining accent comprehension test, participants were assigned to one of six groups. The baseline group received no training, but participants in the other five groups listened to accented sentences, listened to and repeated accented sentences in their own accent, listened to and transcribed accented sentences, listened to and imitated accented sentences, or listened to and imitated accented sentences without being able to hear their own vocalizations. Posttraining measures showed that accent comprehension was most improved for participants who imitated the speaker’s accent. These results show that imitation may aid in streamlining interaction by improving spoken-language comprehension under adverse listening conditions.
  • Andics, A., McQueen, J. M., Petersson, K. M., Gál, V., Rudas, G., & Vidnyánszky, Z. (2010). Neural mechanisms for voice recognition. NeuroImage, 52, 1528-1540. doi:10.1016/j.neuroimage.2010.05.048.

    Abstract

    We investigated neural mechanisms that support voice recognition in a training paradigm with fMRI. The same listeners were trained on different weeks to categorize the mid-regions of voice-morph continua as an individual's voice. Stimuli implicitly defined a voice-acoustics space, and training explicitly defined a voice-identity space. The predefined centre of the voice category was shifted from the acoustic centre each week in opposite directions, so the same stimuli had different training histories on different tests. Cortical sensitivity to voice similarity appeared over different time-scales and at different representational stages. First, there were short-term adaptation effects: Increasing acoustic similarity to the directly preceding stimulus led to haemodynamic response reduction in the middle/posterior STS and in right ventrolateral prefrontal regions. Second, there were longer-term effects: Response reduction was found in the orbital/insular cortex for stimuli that were most versus least similar to the acoustic mean of all preceding stimuli, and, in the anterior temporal pole, the deep posterior STS and the amygdala, for stimuli that were most versus least similar to the trained voice-identity category mean. These findings are interpreted as effects of neural sharpening of long-term stored typical acoustic and category-internal values. The analyses also reveal anatomically separable voice representations: one in a voice-acoustics space and one in a voice-identity space. Voice-identity representations flexibly followed the trained identity shift, and listeners with a greater identity effect were more accurate at recognizing familiar voices. Voice recognition is thus supported by neural voice spaces that are organized around flexible ‘mean voice’ representations.
  • Araújo, S., Pacheco, A., Faísca, L., Petersson, K. M., & Reis, A. (2010). Visual rapid naming and phonological abilities: Different subtypes in dyslexic children. International Journal of Psychology, 45, 443-452. doi:10.1080/00207594.2010.499949.

    Abstract

    One implication of the double-deficit hypothesis for dyslexia is that there should be subtypes of dyslexic readers that exhibit rapid naming deficits with or without concomitant phonological processing problems. In the current study, we investigated the validity of this hypothesis for Portuguese orthography, which is more consistent than English orthography, by exploring different cognitive profiles in a sample of dyslexic children. In particular, we were interested in identifying readers characterized by a pure rapid automatized naming deficit. We also examined whether rapid naming and phonological awareness independently account for individual differences in reading performance. We characterized the performance of dyslexic readers and a control group of normal readers matched for age on reading, visual rapid naming and phonological processing tasks. Our results suggest that there is a subgroup of dyslexic readers with intact phonological processing capacity (in terms of both accuracy and speed measures) but poor rapid naming skills. We also provide evidence for an independent association between rapid naming and reading competence in the dyslexic sample, when the effect of phonological skills was controlled. Altogether, the results are more consistent with the view that rapid naming problems in dyslexia represent a second core deficit rather than an exclusive phonological explanation for the rapid naming deficits. Furthermore, additional non-phonological processes, which subserve rapid naming performance, contribute independently to reading development.
  • Baggio, G., Choma, T., Van Lambalgen, M., & Hagoort, P. (2010). Coercion and compositionality. Journal of Cognitive Neuroscience, 22, 2131-2140. doi:10.1162/jocn.2009.21303.

    Abstract

    Research in psycholinguistics and in the cognitive neuroscience of language has suggested that semantic and syntactic integration are associated with different neurophysiologic correlates, such as the N400 and the P600 in the ERPs. However, only a handful of studies have investigated the neural basis of the syntax–semantics interface, and even fewer experiments have dealt with the cases in which semantic composition can proceed independently of the syntax. Here we looked into one such case—complement coercion—using ERPs. We compared sentences such as, “The journalist wrote the article” with “The journalist began the article.” The second sentence seems to involve a silent semantic element, which is expressed in the first sentence by the head of the VP “wrote the article.” The second type of construction may therefore require the reader to infer or recover from memory a richer event sense of the VP “began the article,” such as began writing the article, and to integrate that into a semantic representation of the sentence. This operation is referred to as “complement coercion.” Consistently with earlier reading time, eye tracking, and MEG studies, we found traces of such additional computations in the ERPs: Coercion gives rise to a long-lasting negative shift, which differs at least in duration from a standard N400 effect. Issues regarding the nature of the computation involved are discussed in the light of a neurocognitive model of language processing and a formal semantic analysis of coercion.
  • Bastiaansen, M. C. M., Magyari, L., & Hagoort, P. (2010). Syntactic unification operations are reflected in oscillatory dynamics during on-line sentence comprehension. Journal of Cognitive Neuroscience, 22, 1333-1347. doi:10.1162/jocn.2009.21283.

    Abstract

    There is growing evidence suggesting that synchronization changes in the oscillatory neuronal dynamics in the EEG or MEG reflect the transient coupling and uncoupling of functional networks related to different aspects of language comprehension. In this work, we examine how sentence-level syntactic unification operations are reflected in the oscillatory dynamics of the MEG. Participants read sentences that were either correct, contained a word category violation, or consisted of random word sequences devoid of syntactic structure. A time-frequency analysis of MEG power changes revealed three types of effects. The first type of effect was related to the detection of a (word category) violation in a syntactically structured sentence, and was found in the alpha and gamma frequency bands. A second type of effect was maximally sensitive to the syntactic manipulations: A linear increase in beta power across the sentence was present for correct sentences, was disrupted upon the occurrence of a word category violation, and was absent in syntactically unstructured random word sequences. We therefore relate this effect to syntactic unification operations. Third, we observed a linear increase in theta power across the sentence for all syntactically structured sentences. This effect is tentatively related to the building of a working memory trace of the linguistic input. In conclusion, the data seem to suggest that syntactic unification is reflected by neuronal synchronization in the lower-beta frequency band.
  • Bramão, I., Faísca, L., Forkstam, C., Reis, A., & Petersson, K. M. (2010). Cortical brain regions associated with color processing: An FMRI study. The Open Neuroimaging Journal, 4, 164-173. doi:10.2174/1874440001004010164.

    Abstract

    To clarify whether the neural pathways for color processing are the same for natural objects, for artifact objects, and for non-sense objects, we examined functional magnetic resonance imaging (FMRI) responses during a covert naming task including the factors color (color vs. black & white (B&W)) and stimulus type (natural vs. artifact vs. non-sense objects). Our results indicate that the superior parietal lobule and precuneus (BA 7) bilaterally, the right hippocampus and the right fusiform gyrus (V4) form part of a network responsible for color processing for both natural and artifact objects, but not for non-sense objects. The recognition of colored non-sense objects compared to the recognition of colored objects activated the posterior cingulate/precuneus (BA 7/23/31), suggesting that the color attribute induces the mental operation of trying to associate a non-sense composition with a familiar object. When colored objects (both natural and artifact) were contrasted with colored non-objects, we observed activations in the right parahippocampal gyrus (BA 35/36), the superior parietal lobule (BA 7) bilaterally, the left inferior middle temporal region (BA 20/21) and the inferior and superior frontal regions (BA 10/11/47). These additional activations suggest that colored objects recruit brain regions related to visual semantic information/retrieval and brain regions related to visuo-spatial processing. Overall, the results suggest that color information is an attribute that improves object recognition (based on behavioral results) and activates a specific neural network related to visual semantic information that is more extensive than for B&W objects during object recognition.
  • Bramão, I., Faísca, L., Petersson, K. M., & Reis, A. (2010). The influence of surface color information and color knowledge information in object recognition. American Journal of Psychology, 123, 437-466. Retrieved from http://www.jstor.org/stable/10.5406/amerjpsyc.123.4.0437.

    Abstract

    In order to clarify whether the influence of color knowledge information in object recognition depends on the presence of the appropriate surface color, we designed a name—object verification task. The relationship between color and shape information provided by the name and by the object photo was manipulated in order to assess color interference independently of shape interference. We tested three different versions for each object: typically colored, black and white, and nontypically colored. The response times on the nonmatching trials were used to measure the interference between the name and the photo. We predicted that the more similar the name and the photo are, the longer it would take to respond. Overall, the color similarity effect disappeared in the black-and-white and nontypical color conditions, suggesting that the influence of color knowledge on object recognition depends on the presence of the appropriate surface color information.
  • Casasanto, D., & Dijkstra, K. (2010). Motor action and emotional memory. Cognition, 115, 179-185. doi:10.1016/j.cognition.2009.11.002.

    Abstract

    Can simple motor actions affect how efficiently people retrieve emotional memories, and influence what they choose to remember? In Experiment 1, participants were prompted to retell autobiographical memories with either positive or negative valence, while moving marbles either upward or downward. They retrieved memories faster when the direction of movement was congruent with the valence of the memory (upward for positive, downward for negative memories). Given neutral-valence prompts in Experiment 2, participants retrieved more positive memories when instructed to move marbles up, and more negative memories when instructed to move them down, demonstrating a causal link from motion to emotion. Results suggest that positive and negative life experiences are implicitly associated with schematic representations of upward and downward motion, consistent with theories of metaphorical mental representation. Beyond influencing the efficiency of memory retrieval, the direction of irrelevant, repetitive motor actions can also partly determine the emotional content of the memories people retrieve: moving marbles upward (an ostensibly meaningless action) can cause people to think more positive thoughts.
  • Casasanto, D., & Jasmin, K. (2010). Good and bad in the hands of politicians: Spontaneous gestures during positive and negative speech. PLoS ONE, 5(7), E11805. doi:10.1371/journal.pone.0011805.

    Abstract

    According to the body-specificity hypothesis, people with different bodily characteristics should form correspondingly different mental representations, even in highly abstract conceptual domains. In a previous test of this proposal, right- and left-handers were found to associate positive ideas like intelligence, attractiveness, and honesty with their dominant side and negative ideas with their non-dominant side. The goal of the present study was to determine whether ‘body-specific’ associations of space and valence can be observed beyond the laboratory in spontaneous behavior, and whether these implicit associations have visible consequences.
  • Casasanto, D., Fotakopoulou, O., & Boroditsky, L. (2010). Space and time in the child's mind: Evidence for a cross-dimensional asymmetry. Cognitive Science, 34, 387-405. doi:10.1111/j.1551-6709.2010.01094.x.

    Abstract

    What is the relationship between space and time in the human mind? Studies in adults show an asymmetric relationship between mental representations of these basic dimensions of experience: Representations of time depend on space more than representations of space depend on time. Here we investigated the relationship between space and time in the developing mind. Native Greek-speaking children watched movies of two animals traveling along parallel paths for different distances or durations and judged the spatial and temporal aspects of these events (e.g., Which animal went for a longer distance, or a longer time?). Results showed a reliable cross-dimensional asymmetry. For the same stimuli, spatial information influenced temporal judgments more than temporal information influenced spatial judgments. This pattern was robust to variations in the age of the participants and the type of linguistic framing used to elicit responses. This finding demonstrates a continuity between space-time representations in children and adults, and informs theories of analog magnitude representation.
  • Folia, V., Uddén, J., De Vries, M., Forkstam, C., & Petersson, K. M. (2010). Artificial language learning in adults and children. Language Learning, 60(s2), 188-220. doi:10.1111/j.1467-9922.2010.00606.x.

    Abstract

    This article briefly reviews some recent work on artificial language learning in children and adults. A growing body of empirical evidence suggests that the mechanisms involved in artificial language learning and in structured sequence processing are shared with those of natural language acquisition and natural language processing. The final part of the article is devoted to a theoretical formulation of the language learning problem from a mechanistic neurobiological viewpoint: by analyzing a formal learning model, we highlight Fodor’s insight that it is logically possible to combine innate, domain-specific language constraints with domain-general learning mechanisms.
  • Fournier, R., Gussenhoven, C., Jensen, O., & Hagoort, P. (2010). Lateralization of tonal and intonational pitch processing: An MEG study. Brain Research, 1328, 79-88. doi:10.1016/j.brainres.2010.02.053.

    Abstract

    An MEG experiment was carried out in order to compare the processing of lexical-tonal and intonational contrasts, based on the tonal dialect of Roermond (the Netherlands). A set of words with identical phoneme sequences but distinct pitch contours, which represented different lexical meanings or discourse meanings (statement vs. question), were presented to native speakers as well as to a control group of speakers of Standard Dutch, a non-tone language. The stimuli were arranged in a mismatch paradigm, under three experimental conditions: in the first condition (lexical), the pitch contour differences between standard and deviant stimuli reflected differences between lexical meanings; in the second condition (intonational), the stimuli differed in their discourse meaning; in the third condition (combined), they differed both in their lexical and discourse meaning. In all three conditions, native as well as non-native responses showed a clear MMNm (magnetic mismatch negativity) in a time window from 150 to 250 ms after the divergence point of standard and deviant pitch contours. In the lexical condition, a stronger response was found over the left temporal cortex of native as well as non-native speakers. In the intonational condition, the same activation pattern was observed in the control group, but not in the group of native speakers, who showed a right-hemisphere dominance instead. Finally, in the combined (lexical and intonational) condition, brain reactions appeared to represent the summation of the patterns found in the other two conditions. In sum, the lateralization of pitch processing is condition-dependent in the native group only, which suggests that language experience determines how processes should be distributed over both temporal cortices, according to the functions available in the grammar.
  • Groen, W. B., Tesink, C. M. J. Y., Petersson, K. M., Van Berkum, J. J. A., Van der Gaag, R. J., Hagoort, P., & Buitelaar, J. K. (2010). Semantic, factual, and social language comprehension in adolescents with autism: An fMRI study. Cerebral Cortex, 20(8), 1937-1945. doi:10.1093/cercor/bhp264.

    Abstract

    Language in high-functioning autism is characterized by pragmatic and semantic deficits, and people with autism have a reduced tendency to integrate information. Because the left and right inferior frontal (LIF and RIF) regions are implicated in the integration of speaker information, world knowledge, and semantic knowledge, we hypothesized that abnormal functioning of the LIF and RIF regions might contribute to pragmatic and semantic language deficits in autism. Brain activation of sixteen 12- to 18-year-old, high-functioning autistic participants was measured with functional magnetic resonance imaging during sentence comprehension and compared with that of twenty-six matched controls. The content of the pragmatic sentences was congruent or incongruent with respect to the speaker characteristics (male/female, child/adult, and upper class/lower class). The semantic- and world-knowledge sentences were congruent or incongruent with respect to semantic expectancies and factual expectancies about the world, respectively. In the semantic-knowledge and world-knowledge conditions, activation of the LIF region did not differ between groups. In sentences that required integration of speaker information, the autism group showed abnormally reduced activation of the LIF region. The results suggest that people with autism may recruit the LIF region in a different manner in tasks that demand integration of social information.
  • Kelly, S. D., Ozyurek, A., & Maris, E. (2010). Two sides of the same coin: Speech and gesture mutually interact to enhance comprehension. Psychological Science, 21, 260-267. doi:10.1177/0956797609357327.

    Abstract

    Gesture and speech are assumed to form an integrated system during language production. Based on this view, we propose the integrated‐systems hypothesis, which explains two ways in which gesture and speech are integrated—through mutual and obligatory interactions—in language comprehension. Experiment 1 presented participants with action primes (e.g., someone chopping vegetables) and bimodal speech and gesture targets. Participants related primes to targets more quickly and accurately when they contained congruent information (speech: “chop”; gesture: chop) than when they contained incongruent information (speech: “chop”; gesture: twist). Moreover, the strength of the incongruence affected processing, with fewer errors for weak incongruities (speech: “chop”; gesture: cut) than for strong incongruities (speech: “chop”; gesture: twist). Crucial for the integrated‐systems hypothesis, this influence was bidirectional. Experiment 2 demonstrated that gesture’s influence on speech was obligatory. The results confirm the integrated‐systems hypothesis and demonstrate that gesture and speech form an integrated system in language comprehension.
  • Kos, M., Vosse, T. G., Van den Brink, D., & Hagoort, P. (2010). About edible restaurants: Conflicts between syntax and semantics as revealed by ERPs. Frontiers in Psychology, 1, E222. doi:10.3389/fpsyg.2010.00222.

    Abstract

    In order to investigate conflicts between semantics and syntax, we recorded ERPs while participants read Dutch sentences. Sentences containing conflicts between syntax and semantics (Fred eats in a sandwich… / Fred eats a restaurant…) elicited an N400. These results show that conflicts between syntax and semantics do not necessarily lead to P600 effects, and they are in line with the processing competition account. According to this parallel account, the syntactic and semantic processing streams are fully interactive, and information from one level can influence the processing at another level. The relative strength of the cues in the processing streams determines which level is affected most strongly by the conflict. The processing competition account maintains the distinction between the N400 as an index of semantic processing and the P600 as an index of structural processing.
  • Ladd, D. R., & Dediu, D. (2010). Reply to Järvikivi et al. (2010) [Web log message]. PLoS ONE. Retrieved from http://www.plosone.org/article/comments/info%3Adoi%2F10.1371%2Fjournal.pone.0012603.
  • Maguire, W., McMahon, A., Heggarty, P., & Dediu, D. (2010). The past, present, and future of English dialects: Quantifying convergence, divergence, and dynamic equilibrium. Language Variation and Change, 22, 69-104. doi:10.1017/S0954394510000013.

    Abstract

    This article reports on research which seeks to compare and measure the similarities between phonetic transcriptions in the analysis of relationships between varieties of English. It addresses the question of whether these varieties have been converging, diverging, or maintaining equilibrium as a result of endogenous and exogenous phonetic and phonological changes. We argue that it is only possible to identify such patterns of change by the simultaneous comparison of a wide range of varieties of a language across a data set that has not been specifically selected to highlight those changes that are believed to be important. Our analysis suggests that although there has been an obvious reduction in regional variation with the loss of traditional dialects of English and Scots, there has not been any significant convergence (or divergence) of regional accents of English in recent decades, despite the rapid spread of a number of features such as TH-fronting.
  • Merritt, D. J., Casasanto, D., & Brannon, E. M. (2010). Do monkeys think in metaphors? Representations of space and time in monkeys and humans. Cognition, 117, 191-202. doi:10.1016/j.cognition.2010.08.011.

    Abstract

    Research on the relationship between the representation of space and time has produced two contrasting proposals. A theory of magnitude (ATOM) posits that space and time are represented via a common magnitude system, suggesting a symmetrical relationship between space and time. According to metaphor theory, however, representations of time depend on representations of space asymmetrically. Previous findings in humans have supported metaphor theory. Here, we investigate the relationship between time and space in a nonverbal species, by testing whether non-human primates show space–time interactions consistent with metaphor theory or with ATOM. We tested two rhesus monkeys and 16 adult humans in a nonverbal task that assessed the influence of an irrelevant dimension (time or space) on a relevant dimension (space or time). In humans, spatial extent had a large effect on time judgments, whereas time had a small effect on spatial judgments. In monkeys, both spatial and temporal manipulations showed large bi-directional effects on judgments. In contrast to humans, spatial manipulations in monkeys did not produce a larger effect on temporal judgments than the reverse. Thus, consistent with previous findings, human adults showed asymmetrical space–time interactions that were predicted by metaphor theory. In contrast, monkeys showed patterns that were more consistent with ATOM.
  • Meulenbroek, O., Kessels, R. P. C., De Rover, M., Petersson, K. M., Olde Rikkert, M. G. M., Rijpkema, M., & Fernández, G. (2010). Age-effects on associative object-location memory. Brain Research, 1315, 100-110. doi:10.1016/j.brainres.2009.12.011.

    Abstract

    Aging is accompanied by an impairment of associative memory. The medial temporal lobe and fronto-striatal network, both involved in associative memory, are known to decline functionally and structurally with age, leading to the so-called associative binding deficit and the resource deficit. Because the MTL and fronto-striatal network interact, they might also be able to support each other. We therefore employed an episodic memory task probing memory for sequences of object–location associations, where the demand on self-initiated processing was manipulated during encoding: either all the objects were visible simultaneously (rich environmental support) or every object became visible transiently (poor environmental support). Following the concept of resource deficit, we hypothesised that the elderly probably have difficulty using their declarative memory system when demands on self-initiated processing are high (poor environmental support). Our behavioural study showed that only the young use the rich environmental support in a systematic way, by placing the objects next to each other. With the task adapted for fMRI, we found that elderly showed stronger activity than young subjects during retrieval of environmentally richly encoded information in the basal ganglia, thalamus, left middle temporal/fusiform gyrus and right medial temporal lobe (MTL). These results indicate that rich environmental support leads to recruitment of the declarative memory system in addition to the fronto-striatal network in elderly, while the young use more posterior brain regions likely related to imagery. We propose that elderly try to solve the task by additional recruitment of stimulus-response associations, which might partly compensate their limited attentional resources.
  • Noordzij, M. L., Newman-Norlund, S. E., De Ruiter, J. P., Hagoort, P., Levinson, S. C., & Toni, I. (2010). Neural correlates of intentional communication. Frontiers in Neuroscience, 4, E188. doi:10.3389/fnins.2010.00188.

    Abstract

    We know a great deal about the neurophysiological mechanisms supporting instrumental actions, i.e. actions designed to alter the physical state of the environment. In contrast, little is known about our ability to select communicative actions, i.e. actions directly designed to modify the mental state of another agent. We have recently provided novel empirical evidence for a mechanism in which a communicator selects his actions on the basis of a prediction of the communicative intentions that an addressee is most likely to attribute to those actions. The main novelty of those findings was that this prediction of intention recognition is cerebrally implemented within the intention recognition system of the communicator, and that it is modulated by the ambiguity in meaning of the communicative acts, not by their sensorimotor complexity. The characteristics of this predictive mechanism support the notion that human communicative abilities are distinct from both sensorimotor and linguistic processes.
  • Ozyurek, A., Zwitserlood, I., & Perniss, P. M. (2010). Locative expressions in signed languages: A view from Turkish Sign Language (TID). Linguistics, 48(5), 1111-1145. doi:10.1515/LING.2010.036.

    Abstract

    Locative expressions encode the spatial relationship between two (or more) entities. In this paper, we focus on locative expressions in signed languages, which use the visual-spatial modality for linguistic expression, specifically in Turkish Sign Language (Türk İşaret Dili, henceforth TİD). We show that TİD uses various strategies in discourse to encode the relation between a Ground entity (i.e., a bigger and/or backgrounded entity) and a Figure entity (i.e., a smaller entity, which is in the focus of attention). Some of these strategies exploit affordances of the visual modality for analogue representation and provide evidence for modality-specific effects on locative expressions in sign languages. However, other modality-specific strategies, e.g., the simultaneous expression of Figure and Ground, which have been reported for many other sign languages, occur only sparsely in TİD. Furthermore, TİD uses categorical as well as analogical structures in locative expressions. On the basis of these findings, we discuss differences and similarities between signed and spoken languages to broaden our understanding of the range of structures used in natural language (i.e., in both the visual-spatial and oral-aural modalities) to encode locative relations. A general linguistic theory of spatial relations, and specifically of locative expressions, must take all structures that might arise in both modalities into account before it can generalize over the human language faculty.
  • Petrovic, P., Kalso, E., Petersson, K. M., Andersson, J., Fransson, P., & Ingvar, M. (2010). A prefrontal non-opioid mechanism in placebo analgesia. Pain, 150, 59-65. doi:10.1016/j.pain.2010.03.011.

    Abstract

    Behavioral studies have suggested that placebo analgesia is partly mediated by the endogenous opioid system. Expanding on these results we have shown that the opioid-receptor-rich rostral anterior cingulate cortex (rACC) is activated in both placebo and opioid analgesia. However, there are also differences between the two treatments. While opioids have direct pharmacological effects, acting on the descending pain inhibitory system, placebo analgesia depends on neocortical top-down mechanisms. An important difference may be that expectations are met to a lesser extent in placebo treatment as compared with a specific treatment, yielding a larger error signal. As these processes previously have been shown to influence other types of perceptual experiences, we hypothesized that they also may drive placebo analgesia. Imaging studies suggest that lateral orbitofrontal cortex (lObfc) and ventrolateral prefrontal cortex (vlPFC) are involved in processing expectation and error signals. We re-analyzed two independent functional imaging experiments related to placebo analgesia and emotional placebo to probe for a differential processing in these regions during placebo treatment vs. opioid treatment and to test if this activity is associated with the placebo response. In the first dataset lObfc and vlPFC showed an enhanced activation in placebo analgesia vs. opioid analgesia. Furthermore, the rACC activity co-varied with the prefrontal regions in the placebo condition specifically. A similar correlation between rACC and vlPFC was reproduced in another dataset involving emotional placebo and correlated with the degree of the placebo effect. Our results thus support that placebo is different from specific treatment with a prefrontal top-down influence on rACC.
  • Pijnacker, J., Geurts, B., Van Lambalgen, M., Buitelaar, J., & Hagoort, P. (2010). Exceptions and anomalies: An ERP study on context sensitivity in autism. Neuropsychologia, 48, 2940-2951. doi:10.1016/j.neuropsychologia.2010.06.003.

    Abstract

    Several studies have demonstrated that people with ASD and intact language skills still have problems processing linguistic information in context. Given this evidence for reduced sensitivity to linguistic context, the question arises how contextual information is actually processed by people with ASD. In this study, we used event-related brain potentials (ERPs) to examine context sensitivity in high-functioning adults with autistic disorder (HFA) and Asperger syndrome at two levels: at the level of sentence processing and at the level of solving reasoning problems. We found that sentence context as well as reasoning context had an immediate ERP effect in adults with Asperger syndrome, as in matched controls. Both groups showed a typical N400 effect and a late positive component for the sentence conditions, and a sustained negativity for the reasoning conditions. In contrast, the HFA group demonstrated neither an N400 effect nor a sustained negativity. However, the HFA group showed a late positive component which was larger for semantically anomalous sentences than congruent sentences. Because sentence context had a modulating effect in a later phase, semantic integration is perhaps less automatic in HFA, and presumably more elaborate processes are needed to arrive at a sentence interpretation.
  • Ringersma, J., Kastens, K., Tschida, U., & Van Berkum, J. J. A. (2010). A principled approach to online publication listings and scientific resource sharing. The Code4Lib Journal, 2010(9), 2520.

    Abstract

    The Max Planck Institute (MPI) for Psycholinguistics has developed a service to manage and present the scholarly output of its researchers. The PubMan database manages publication metadata and full texts of publications published by its scholars. All relevant information regarding a researcher’s work is brought together in this database, including supplementary materials and links to the MPI database for primary research data. The PubMan metadata is harvested into the MPI website CMS (Plone). The system developed for the creation of the publication lists allows the researcher to create a selection of the harvested data in a variety of formats.
  • De Ruiter, J. P., Noordzij, M. L., Newman-Norlund, S., Hagoort, P., Levinson, S. C., & Toni, I. (2010). Exploring the cognitive infrastructure of communication. Interaction Studies, 11, 51-77. doi:10.1075/is.11.1.05rui.

    Abstract

    Human communication is often thought about in terms of transmitted messages in a conventional code like a language. But communication requires a specialized interactive intelligence. Senders have to be able to perform recipient design, while receivers need to be able to do intention recognition, knowing that recipient design has taken place. To study this interactive intelligence in the lab, we developed a new task that taps directly into the underlying abilities to communicate in the absence of a conventional code. We show that subjects are remarkably successful communicators under these conditions, especially when senders get feedback from receivers. Signaling is accomplished by the manner in which an instrumental action is performed, such that instrumentally dysfunctional components of an action are used to convey communicative intentions. The findings have important implications for the nature of the human communicative infrastructure, and the task opens up a line of experimentation on human communication.
  • Simanova, I., Van Gerven, M., Oostenveld, R., & Hagoort, P. (2010). Identifying object categories from event-related EEG: Toward decoding of conceptual representations. PLoS ONE, 5(12), E14465. doi:10.1371/journal.pone.0014465.

    Abstract

    Multivariate pattern analysis is a technique that allows the decoding of conceptual information such as the semantic category of a perceived object from neuroimaging data. Impressive single-trial classification results have been reported in studies that used fMRI. Here, we investigate the possibility to identify conceptual representations from event-related EEG based on the presentation of an object in different modalities: its spoken name, its visual representation and its written name. We used Bayesian logistic regression with a multivariate Laplace prior for classification. Marked differences in classification performance were observed for the tested modalities. Highest accuracies (89% correctly classified trials) were attained when classifying object drawings. In auditory and orthographical modalities, results were lower though still significant for some subjects. The employed classification method allowed for a precise temporal localization of the features that contributed to the performance of the classifier for three modalities. These findings could help to further understand the mechanisms underlying conceptual representations. The study also provides a first step towards the use of concept decoding in the context of real-time brain-computer interface applications.
  • Snijders, T. M., Petersson, K. M., & Hagoort, P. (2010). Effective connectivity of cortical and subcortical regions during unification of sentence structure. NeuroImage, 52, 1633-1644. doi:10.1016/j.neuroimage.2010.05.035.

    Abstract

    In a recent fMRI study we showed that left posterior middle temporal gyrus (LpMTG) subserves the retrieval of a word's lexical-syntactic properties from the mental lexicon (long-term memory), while left posterior inferior frontal gyrus (LpIFG) is involved in unifying (on-line integration of) this information into a sentence structure (Snijders et al., 2009). In addition, the right IFG, right MTG, and the right striatum were involved in the unification process. Here we report results from a psychophysiological interaction (PPI) analysis in which we investigated the effective connectivity between LpIFG and LpMTG during unification, and how the right hemisphere areas and the striatum are functionally connected to the unification network. LpIFG and LpMTG both showed enhanced connectivity during the unification process with a region slightly superior to our previously reported LpMTG. Right IFG better predicted right temporal activity when unification processes were more strongly engaged, just as LpIFG better predicted left temporal activity. Furthermore, the striatum showed enhanced coupling to LpIFG and LpMTG during unification. We conclude that bilateral inferior frontal and posterior temporal regions are functionally connected during sentence-level unification. Cortico-subcortical connectivity patterns suggest cooperation between inferior frontal and striatal regions in performing unification operations on lexical-syntactic representations retrieved from LpMTG.
  • Uddén, J., Folia, V., & Petersson, K. M. (2010). The neuropharmacology of implicit learning. Current Neuropharmacology, 8, 367-381. doi:10.2174/157015910793358178.

    Abstract

    Two decades of pharmacologic research on the human capacity to implicitly acquire knowledge as well as cognitive skills and procedures have yielded surprisingly few conclusive insights. We review the empirical literature on the neuropharmacology of implicit learning. We evaluate the findings in the context of relevant computational models related to neurotransmitters such as dopamine, serotonin, acetylcholine and noradrenalin. These include models for reinforcement learning, sequence production, and categorization. We conclude, based on the reviewed literature, that one can predict improved implicit acquisition by moderately elevated dopamine levels and impaired implicit acquisition by moderately decreased dopamine levels. These effects are most prominent in the dorsal striatum. This is supported by a range of behavioral tasks in the empirical literature. Similar predictions can be made for serotonin, although there is as yet a lack of support in the literature for serotonin involvement in classical implicit learning tasks. There is currently a lack of evidence for a role of the noradrenergic and cholinergic systems in implicit and related forms of learning. GABA modulators, including benzodiazepines, seem to affect implicit learning in a complex manner, and further research is needed. Finally, we identify allosteric AMPA receptor modulators as a potentially interesting target for future investigation of the neuropharmacology of procedural and implicit learning.
  • Van Alphen, P. M., & Van Berkum, J. J. A. (2010). Is there pain in champagne? Semantic involvement of words within words during sense-making. Journal of Cognitive Neuroscience, 22, 2618-2626. doi:10.1162/jocn.2009.21336.

    Abstract

    In an ERP experiment, we examined whether listeners, when making sense of spoken utterances, take into account the meaning of spurious words that are embedded in longer words, either at their onsets (e.g., pie in pirate) or at their offsets (e.g., pain in champagne). In the experiment, Dutch listeners heard Dutch words with initial or final embeddings presented in a sentence context that did or did not support the meaning of the embedded word, while equally supporting the longer carrier word. The N400 at the carrier words was modulated by the semantic fit of the embedded words, indicating that listeners briefly relate the meaning of initial- and final-embedded words to the sentential context, even though these words were not intended by the speaker. These findings help us understand the dynamics of initial sense-making and its link to lexical activation. In addition, they shed new light on the role of lexical competition and the debate concerning the lexical activation of final-embedded words.
  • Van Berkum, J. J. A. (2010). The brain is a prediction machine that cares about good and bad - Any implications for neuropragmatics? Italian Journal of Linguistics, 22, 181-208.

    Abstract

    Experimental pragmatics asks how people construct contextualized meaning in communication. So what does it mean for this field to add neuro- as a prefix to its name? After analyzing the options for any subfield of cognitive science, I argue that neuropragmatics can and occasionally should go beyond the instrumental use of EEG or fMRI and beyond mapping classic theoretical distinctions onto Brodmann areas. In particular, if experimental pragmatics ‘goes neuro’, it should take into account that the brain evolved as a control system that helps its bearer negotiate a highly complex, rapidly changing and often not so friendly environment. In this context, the ability to predict current unknowns, and to rapidly tell good from bad, are essential ingredients of processing. Using insights from non-linguistic areas of cognitive neuroscience as well as from EEG research on utterance comprehension, I argue that for a balanced development of experimental pragmatics, these two characteristics of the brain cannot be ignored.
  • Van Leeuwen, T. M., Petersson, K. M., & Hagoort, P. (2010). Synaesthetic colour in the brain: Beyond colour areas. A functional magnetic resonance imaging study of synaesthetes and matched controls. PLoS ONE, 5(8), E12074. doi:10.1371/journal.pone.0012074.

    Abstract

    Background: In synaesthesia, sensations in a particular modality cause additional experiences in a second, unstimulated modality (e.g., letters elicit colour). Understanding how synaesthesia is mediated in the brain can help to understand normal processes of perceptual awareness and multisensory integration. In several neuroimaging studies, enhanced brain activity for grapheme-colour synaesthesia has been found in ventral-occipital areas that are also involved in real colour processing. Our question was whether the neural correlates of synaesthetically induced colour and real colour experience are truly shared. Methodology/Principal Findings: First, in a free viewing functional magnetic resonance imaging (fMRI) experiment, we located main effects of synaesthesia in left superior parietal lobule and in colour related areas. In the left superior parietal lobe, individual differences between synaesthetes (projector-associator distinction) also influenced brain activity, confirming the importance of the left superior parietal lobe for synaesthesia. Next, we applied a repetition suppression paradigm in fMRI, in which a decrease in the BOLD (blood-oxygenation-level-dependent) response is generally observed for repeated stimuli. We hypothesized that synaesthetically induced colours would lead to a reduction in BOLD response for subsequently presented real colours, if the neural correlates were overlapping. We did find BOLD suppression effects induced by synaesthesia, but not within the colour areas. Conclusions/Significance: Because synaesthetically induced colours were not able to suppress BOLD effects for real colour, we conclude that the neural correlates of synaesthetic colour experience and real colour experience are not fully shared. We propose that synaesthetic colour experiences are mediated by higher-order visual pathways that lie beyond the scope of classical, ventral-occipital visual areas. Feedback from these areas, in which the left parietal cortex is likely to play an important role, may induce V4 activation and the percept of synaesthetic colour.
  • De Vries, M., Barth, A. C. R., Maiworm, S., Knecht, S., Zwitserlood, P., & Flöel, A. (2010). Electrical stimulation of Broca’s area enhances implicit learning of an artificial grammar. Journal of Cognitive Neuroscience, 22, 2427-2436. doi:10.1162/jocn.2009.21385.

    Abstract

    Artificial grammar learning constitutes a well-established model for the acquisition of grammatical knowledge in a natural setting. Previous neuroimaging studies demonstrated that Broca's area (left BA 44/45) is similarly activated by natural syntactic processing and artificial grammar learning. The current study was conducted to investigate the causal relationship between Broca's area and learning of an artificial grammar by means of transcranial direct current stimulation (tDCS). Thirty-eight healthy subjects participated in a between-subject design, with either anodal tDCS (20 min, 1 mA) or sham stimulation, over Broca's area during the acquisition of an artificial grammar. Performance during the acquisition phase, presented as a working memory task, was comparable between groups. In the subsequent classification task, detection of syntactic violations, and specifically of those for which no cues to superficial similarity were available, improved significantly after anodal tDCS, resulting in overall better performance. A control experiment in which 10 subjects received anodal tDCS over an area unrelated to artificial grammar learning further supported the specificity of these effects to Broca's area. We conclude that Broca's area is specifically involved in rule-based knowledge and, here, in an improved ability to detect syntactic violations. The results cannot be explained by better tDCS-induced working memory performance during the acquisition phase. This is the first study to demonstrate that tDCS may facilitate acquisition of grammatical knowledge, a finding of potential interest for rehabilitation of aphasia.
  • De Vries, M., Ulte, C., Zwitserlood, P., Szymanski, B., & Knecht, S. (2010). Increasing dopamine levels in the brain improves feedback-based procedural learning in healthy participants: An artificial-grammar-learning experiment. Neuropsychologia, 48, 3193-3197. doi:10.1016/j.neuropsychologia.2010.06.024.

    Abstract

    Recently, an increasing number of studies have suggested a role for the basal ganglia and related dopamine inputs in procedural learning, specifically when learning occurs through trial-by-trial feedback (Shohamy, Myers, Kalanithi, & Gluck. (2008). Basal ganglia and dopamine contributions to probabilistic category learning. Neuroscience and Biobehavioral Reviews, 32, 219–236). A necessary relationship has however only been demonstrated in patient studies. In the present study, we show for the first time that increasing dopamine levels in the brain improves the gradual acquisition of complex information in healthy participants. We implemented two artificial-grammar-learning tasks, one with and one without performance feedback. Learning was improved after levodopa intake for the feedback-based learning task only, suggesting that dopamine plays a specific role in trial-by-trial feedback-based learning. This provides promising directions for future studies on dopaminergic modulation of cognitive functioning.
  • Willems, R. M., Hagoort, P., & Casasanto, D. (2010). Body-specific representations of action verbs: Neural evidence from right- and left-handers. Psychological Science, 21, 67-74. doi:10.1177/0956797609354072.

    Abstract

    According to theories of embodied cognition, understanding a verb like throw involves unconsciously simulating the action of throwing, using areas of the brain that support motor planning. If understanding action words involves mentally simulating one’s own actions, then the neurocognitive representation of word meanings should differ for people with different kinds of bodies, who perform actions in systematically different ways. In a test of the body-specificity hypothesis, we used functional magnetic resonance imaging to compare premotor activity correlated with action verb understanding in right- and left-handers. Right-handers preferentially activated the left premotor cortex during lexical decisions on manual-action verbs (compared with nonmanual-action verbs), whereas left-handers preferentially activated right premotor areas. This finding helps refine theories of embodied semantics, suggesting that implicit mental simulation during language processing is body specific: Right- and left-handers, who perform actions differently, use correspondingly different areas of the brain for representing action verb meanings.
  • Willems, R. M., Peelen, M. V., & Hagoort, P. (2010). Cerebral lateralization of face-selective and body-selective visual areas depends on handedness. Cerebral Cortex, 20, 1719-1725. doi:10.1093/cercor/bhp234.

    Abstract

    The left-hemisphere dominance for language is a core example of the functional specialization of the cerebral hemispheres. The degree of left-hemisphere dominance for language depends on hand preference: Whereas the majority of right-handers show left-hemispheric language lateralization, this number is reduced in left-handers. Here, we assessed whether handedness analogously has an influence upon lateralization in the visual system. Using functional magnetic resonance imaging, we localized 4 more or less specialized extrastriate areas in left- and right-handers, namely fusiform face area (FFA), extrastriate body area (EBA), fusiform body area (FBA), and human motion area (human middle temporal [hMT]). We found that lateralization of FFA and EBA depends on handedness: These areas were right lateralized in right-handers but not in left-handers. A similar tendency was observed in FBA but not in hMT. We conclude that the relationship between handedness and hemispheric lateralization extends to functionally lateralized parts of visual cortex, indicating a general coupling between cerebral lateralization and handedness. Our findings indicate that hemispheric specialization is not fixed but can vary considerably across individuals even in areas engaged relatively early in the visual system.
  • Willems, R. M., De Boer, M., De Ruiter, J. P., Noordzij, M. L., Hagoort, P., & Toni, I. (2010). A dissociation between linguistic and communicative abilities in the human brain. Psychological Science, 21, 8-14. doi:10.1177/0956797609355563.

    Abstract

    Although language is an effective vehicle for communication, it is unclear how linguistic and communicative abilities relate to each other. Some researchers have argued that communicative message generation involves perspective taking (mentalizing), and—crucially—that mentalizing depends on language. We employed a verbal communication paradigm to directly test whether the generation of a communicative action relies on mentalizing and whether the cerebral bases of communicative message generation are distinct from parts of cortex sensitive to linguistic variables. We found that dorsomedial prefrontal cortex, a brain area consistently associated with mentalizing, was sensitive to the communicative intent of utterances, irrespective of linguistic difficulty. In contrast, left inferior frontal cortex, an area known to be involved in language, was sensitive to the linguistic demands of utterances, but not to communicative intent. These findings show that communicative and linguistic abilities rely on cerebrally (and computationally) distinct mechanisms.
  • Willems, R. M., Toni, I., Hagoort, P., & Casasanto, D. (2010). Neural dissociations between action verb understanding and motor imagery. Journal of Cognitive Neuroscience, 22(10), 2387-2400. doi:10.1162/jocn.2009.21386.

    Abstract

    According to embodied theories of language, people understand a verb like throw, at least in part, by mentally simulating throwing. This implicit simulation is often assumed to be similar or identical to motor imagery. Here we used fMRI to test whether implicit simulations of actions during language understanding involve the same cortical motor regions as explicit motor imagery. Healthy participants were presented with verbs related to hand actions (e.g., to throw) and nonmanual actions (e.g., to kneel). They either read these verbs (lexical decision task) or actively imagined performing the actions named by the verbs (imagery task). Primary motor cortex showed effector-specific activation during imagery, but not during lexical decision. Parts of premotor cortex distinguished manual from nonmanual actions during both lexical decision and imagery, but there was no overlap or correlation between regions activated during the two tasks. These dissociations suggest that implicit simulation and explicit imagery cued by action verbs may involve different types of motor representations and that the construct of “mental simulation” should be distinguished from “mental imagery” in embodied theories of language.
  • Xiang, H.-D., Fonteijn, H. M., Norris, D. G., & Hagoort, P. (2010). Topographical functional connectivity pattern in the perisylvian language networks. Cerebral Cortex, 20, 549-560. doi:10.1093/cercor/bhp119.

    Abstract

    We performed a resting-state functional connectivity study to investigate directly the functional correlations within the perisylvian language networks by seeding from 3 subregions of Broca's complex (pars opercularis, pars triangularis, and pars orbitalis) and their right hemisphere homologues. A clear topographical functional connectivity pattern in the left middle frontal, parietal, and temporal areas was revealed for the 3 left seeds. This is the first demonstration that a functional connectivity topology can be observed in the perisylvian language networks. The results support the assumption of the functional division for phonology, syntax, and semantics of Broca's complex as proposed by the memory, unification, and control (MUC) model, and indicate a topographical functional organization in the perisylvian language networks, which suggests a possible division of labor for phonological, syntactic, and semantic function in the left frontal, parietal, and temporal areas.
  • Baggio, G., Van Lambalgen, M., & Hagoort, P. (2008). Computing and recomputing discourse models: An ERP study. Journal of Memory and Language, 59, 36-53. doi:10.1016/j.jml.2008.02.005.

    Abstract

    While syntactic reanalysis has been extensively investigated in psycholinguistics, comparatively little is known about reanalysis in the semantic domain. We used event-related brain potentials (ERPs) to keep track of semantic processes involved in understanding short narratives such as ‘The girl was writing a letter when her friend spilled coffee on the paper’. We hypothesize that these sentences are interpreted in two steps: (1) when the progressive clause is processed, a discourse model is computed in which the goal state (a complete letter) is predicted to hold; (2) when the subordinate clause is processed, the initial representation is recomputed to the effect that, in the final discourse structure, the goal state is not satisfied. Critical sentences evoked larger sustained anterior negativities (SANs) compared to controls, starting around 400 ms following the onset of the sentence-final word, and lasting for about 400 ms. The amplitude of the SAN was correlated with the frequency with which participants, in an offline probe-selection task, responded that the goal state was not attained. Our results raise the possibility that the brain supports some form of non-monotonic recomputation to integrate information which invalidates previously held assumptions.
  • Bastiaansen, M. C. M., Oostenveld, R., Jensen, O., & Hagoort, P. (2008). I see what you mean: Theta power increases are involved in the retrieval of lexical semantic information. Brain and Language, 106(1), 15-28. doi:10.1016/j.bandl.2007.10.006.

    Abstract

    An influential hypothesis regarding the neural basis of the mental lexicon is that semantic representations are neurally implemented as distributed networks carrying sensory, motor and/or more abstract functional information. This work investigates whether the semantic properties of words partly determine the topography of such networks. Subjects performed a visual lexical decision task while their EEG was recorded. We compared the EEG responses to nouns with either visual semantic properties (VIS, referring to colors and shapes) or with auditory semantic properties (AUD, referring to sounds). A time–frequency analysis of the EEG revealed power increases in the theta (4–7 Hz) and lower-beta (13–18 Hz) frequency bands, and an early power increase and subsequent decrease for the alpha (8–12 Hz) band. In the theta band we observed a double dissociation: temporal electrodes showed larger theta power increases in the AUD condition, while occipital leads showed larger theta responses in the VIS condition. The results support the notion that semantic representations are stored in functional networks with a topography that reflects the semantic properties of the stored items, and provide further evidence that oscillatory brain dynamics in the theta frequency range are functionally related to the retrieval of lexical semantic information.
  • Casasanto, D. (2008). Similarity and proximity: When does close in space mean close in mind? Memory & Cognition, 36(6), 1047-1056. doi:10.3758/MC.36.6.1047.

    Abstract

    People often describe things that are similar as close and things that are dissimilar as far apart. Does the way people talk about similarity reveal something fundamental about the way they conceptualize it? Three experiments tested the relationship between similarity and spatial proximity that is encoded in metaphors in language. Similarity ratings for pairs of words or pictures varied as a function of how far apart the stimuli appeared on the computer screen, but the influence of distance on similarity differed depending on the type of judgments the participants made. Stimuli presented closer together were rated more similar during conceptual judgments of abstract entities or unseen object properties but were rated less similar during perceptual judgments of visual appearance. These contrasting results underscore the importance of testing predictions based on linguistic metaphors experimentally and suggest that our sense of similarity arises from our ability to combine available perceptual information with stored knowledge of experiential regularities.
  • Casasanto, D. (2008). Who's afraid of the big bad Whorf? Crosslinguistic differences in temporal language and thought. Language Learning, 58(suppl. 1), 63-79. doi:10.1111/j.1467-9922.2008.00462.x.

    Abstract

    The idea that language shapes the way we think, often associated with Benjamin Whorf, has long been decried as not only wrong but also fundamentally wrong-headed. Yet, experimental evidence has reopened debate about the extent to which language influences nonlinguistic cognition, particularly in the domain of time. In this article, I will first analyze an influential argument against the Whorfian hypothesis and show that its anti-Whorfian conclusion is in part an artifact of conflating two distinct questions: Do we think in language? and Does language shape thought? Next, I will discuss crosslinguistic differences in spatial metaphors for time and describe experiments that demonstrate corresponding differences in nonlinguistic mental representations. Finally, I will sketch a simple learning mechanism by which some linguistic relativity effects appear to arise. Although people may not think in language, speakers of different languages develop distinctive conceptual repertoires as a consequence of ordinary and presumably universal neural and cognitive processes.
  • Casasanto, D., & Boroditsky, L. (2008). Time in the mind: Using space to think about time. Cognition, 106, 579-593. doi:10.1016/j.cognition.2007.03.004.

    Abstract

    How do we construct abstract ideas like justice, mathematics, or time-travel? In this paper we investigate whether mental representations that result from physical experience underlie people’s more abstract mental representations, using the domains of space and time as a testbed. People often talk about time using spatial language (e.g., a long vacation, a short concert). Do people also think about time using spatial representations, even when they are not using language? Results of six psychophysical experiments revealed that people are unable to ignore irrelevant spatial information when making judgments about duration, but not the converse. This pattern, which is predicted by the asymmetry between space and time in linguistic metaphors, was demonstrated here in tasks that do not involve any linguistic stimuli or responses. These findings provide evidence that the metaphorical relationship between space and time observed in language also exists in our more basic representations of distance and duration. Results suggest that our mental representations of things we can never see or touch may be built, in part, out of representations of physical experiences in perception and motor action.
  • Folia, V., Uddén, J., Forkstam, C., Ingvar, M., Hagoort, P., & Petersson, K. M. (2008). Implicit learning and dyslexia. Annals of the New York Academy of Sciences, 1145, 132-150. doi:10.1196/annals.1416.012.

    Abstract

    Several studies have reported an association between dyslexia and implicit learning deficits. It has been suggested that the weakness in implicit learning observed in dyslexic individuals may be related to sequential processing and implicit sequence learning. In the present article, we review the current literature on implicit learning and dyslexia. We describe a novel, forced-choice structural "mere exposure" artificial grammar learning paradigm and characterize this paradigm in normal readers in relation to the standard grammaticality classification paradigm. We argue that preference classification is a better measure of the outcome of implicit acquisition, since in the preference version participants are kept completely unaware of the underlying generative mechanism, while in the grammaticality version the subjects have, at least in principle, been informed about the existence of an underlying complex set of rules at the point of classification (but not during acquisition). On the basis of the "mere exposure effect," we tested the prediction that the development of preference will correlate with the grammaticality status of the classification items. In addition, we examined the effects of grammaticality (grammatical/nongrammatical) and associative chunk strength (ACS; high/low) on the classification tasks (preference/grammaticality). Using a balanced ACS design in which the factors of grammaticality (grammatical/nongrammatical) and ACS (high/low) were independently controlled in a 2 × 2 factorial design, we confirmed our predictions. We discuss the suitability of this task for further investigation of the implicit learning characteristics in dyslexia.
  • Forkstam, C., Elwér, A., Ingvar, M., & Petersson, K. M. (2008). Instruction effects in implicit artificial grammar learning: A preference for grammaticality. Brain Research, 1221, 80-92. doi:10.1016/j.brainres.2008.05.005.

    Abstract

    Human implicit learning can be investigated with implicit artificial grammar learning, a paradigm that has been proposed as a simple model for aspects of natural language acquisition. In the present study we compared the typical yes–no grammaticality classification with yes–no preference classification. In the case of preference instruction no reference to the underlying generative mechanism (i.e., grammar) is needed and the subjects are therefore completely uninformed about an underlying structure in the acquisition material. In experiment 1, subjects engaged in a short-term memory task using only grammatical strings without performance feedback for 5 days. As a result of the 5 acquisition days, classification performance was independent of instruction type and both the preference and the grammaticality group acquired relevant knowledge of the underlying generative mechanism to a similar degree. Changing the grammatical strings to random strings in the acquisition material (experiment 2) resulted in classification being driven by local substring familiarity. Contrasting repeated vs. non-repeated preference classification (experiment 3) showed that the effect of local substring familiarity decreases with repeated classification. This was not the case for repeated grammaticality classifications. We conclude that classification performance is largely independent of instruction type and that forced-choice preference classification is equivalent to the typical grammaticality classification.
  • Goldin-Meadow, S., Chee So, W., Ozyurek, A., & Mylander, C. (2008). The natural order of events: How speakers of different languages represent events nonverbally. Proceedings of the National Academy of Sciences of the USA, 105(27), 9163-9168. doi:10.1073/pnas.0710060105.

    Abstract

    To test whether the language we speak influences our behavior even when we are not speaking, we asked speakers of four languages differing in their predominant word orders (English, Turkish, Spanish, and Chinese) to perform two nonverbal tasks: a communicative task (describing an event by using gesture without speech) and a noncommunicative task (reconstructing an event with pictures). We found that the word orders speakers used in their everyday speech did not influence their nonverbal behavior. Surprisingly, speakers of all four languages used the same order on both nonverbal tasks. This order, actor–patient–act, is analogous to the subject–object–verb pattern found in many languages of the world and, importantly, in newly developing gestural languages. The findings provide evidence for a natural order that we impose on events when describing and reconstructing them nonverbally and exploit when constructing language anew.

  • Hagoort, P. (2008). Should psychology ignore the language of the brain? Current Directions in Psychological Science, 17(2), 96-101. doi:10.1111/j.1467-8721.2008.00556.x.

    Abstract

    Claims that neuroscientific data do not contribute to our understanding of psychological functions have been made recently. Here I argue that these criticisms are solely based on an analysis of functional magnetic resonance imaging (fMRI) studies. However, fMRI is only one of the methods in the toolkit of cognitive neuroscience. I provide examples from research on event-related brain potentials (ERPs) that have contributed to our understanding of the cognitive architecture of human language functions. In addition, I provide evidence of (possible) contributions from fMRI measurements to our understanding of the functional architecture of language processing. Finally, I argue that a neurobiology of human language that integrates information about the necessary genetic and neural infrastructures will allow us to answer certain questions that are not answerable if all we have is evidence from behavior.
  • Hagoort, P. (2008). The fractionation of spoken language understanding by measuring electrical and magnetic brain signals. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 363, 1055-1069. doi:10.1098/rstb.2007.2159.

    Abstract

    This paper focuses on what electrical and magnetic recordings of human brain activity reveal about spoken language understanding. Based on the high temporal resolution of these recordings, a fine-grained temporal profile of different aspects of spoken language comprehension can be obtained. Crucial aspects of speech comprehension are lexical access, selection and semantic integration. Results show that for words spoken in context, there is no ‘magic moment’ when lexical selection ends and semantic integration begins. Irrespective of whether words have early or late recognition points, semantic integration processing is initiated before words can be identified on the basis of the acoustic information alone. Moreover, for one particular event-related brain potential (ERP) component (the N400), equivalent impact of sentence- and discourse-semantic contexts is observed. This indicates that in comprehension, a spoken word is immediately evaluated relative to the widest interpretive domain available. In addition, this happens very quickly. Findings are discussed that show that often an unfolding word can be mapped onto discourse-level representations well before the end of the word. Overall, the time course of the ERP effects is compatible with the view that the different information types (lexical, syntactic, phonological, pragmatic) are processed in parallel and influence the interpretation process incrementally, that is as soon as the relevant pieces of information are available. This is referred to as the immediacy principle.
  • Li, X., Hagoort, P., & Yang, Y. (2008). Event-related potential evidence on the influence of accentuation in spoken discourse comprehension in Chinese. Journal of Cognitive Neuroscience, 20(5), 906-915. doi:10.1162/jocn.2008.20512.

    Abstract

    In an event-related potential experiment with Chinese discourses as material, we investigated how and when accentuation influences spoken discourse comprehension in relation to the different information states of the critical words. These words could provide either new or old information. It was shown that variation of accentuation influenced the amplitude of the N400, with a larger amplitude for accented than deaccented words. In addition, there was an interaction between accentuation and information state. The N400 amplitude difference between accented and deaccented new information was smaller than that between accented and deaccented old information. The results demonstrate that, during spoken discourse comprehension, listeners rapidly extract the semantic consequences of accentuation in relation to the previous discourse context. Moreover, our results show that the N400 amplitude can be larger for correct (new, accented words) than incorrect (new, deaccented words) information. This, we argue, proves that the N400 does not react to semantic anomaly per se, but rather to semantic integration load, which is higher for new information.
  • Hagoort, P. (2008). Mijn omweg naar de filosofie [My detour to philosophy]. Algemeen Nederlands Tijdschrift voor Wijsbegeerte, 100(4), 303-310.
  • Janzen, G., Jansen, C., & Van Turennout, M. (2008). Memory consolidation of landmarks in good navigators. Hippocampus, 18, 40-47.

    Abstract

    Landmarks play an important role in successful navigation. To successfully find your way around an environment, navigationally relevant information needs to be stored and become available at later moments in time. Evidence from functional magnetic resonance imaging (fMRI) studies shows that the human parahippocampal gyrus encodes the navigational relevance of landmarks. In the present event-related fMRI experiment, we investigated memory consolidation of navigationally relevant landmarks in the medial temporal lobe after route learning. Sixteen right-handed volunteers viewed two film sequences through a virtual museum with objects placed at locations relevant (decision points) or irrelevant (nondecision points) for navigation. To investigate consolidation effects, one film sequence was seen in the evening before scanning, the other one was seen the following morning, directly before scanning. Event-related fMRI data were acquired during an object recognition task. Participants decided whether they had seen the objects in the previously shown films. After scanning, participants answered standardized questions about their navigational skills, and were divided into groups of good and bad navigators, based on their scores. An effect of memory consolidation was obtained in the hippocampus: Objects that were seen the evening before scanning (remote objects) elicited more activity than objects seen directly before scanning (recent objects). This increase in activity in bilateral hippocampus for remote objects was observed in good navigators only. In addition, a spatial-specific effect of memory consolidation for navigationally relevant objects was observed in the parahippocampal gyrus. Remote decision point objects induced increased activity as compared with recent decision point objects, again in good navigators only. The results provide initial evidence for a connection between memory consolidation and navigational ability that can provide a basis for successful navigation.
  • Kho, K. H., Indefrey, P., Hagoort, P., Van Veelen, C. W. M., Van Rijen, P. C., & Ramsey, N. F. (2008). Unimpaired sentence comprehension after anterior temporal cortex resection. Neuropsychologia, 46(4), 1170-1178. doi:10.1016/j.neuropsychologia.2007.10.014.

    Abstract

    Functional imaging studies have demonstrated involvement of the anterior temporal cortex in sentence comprehension. It is unclear, however, whether the anterior temporal cortex is essential for this function. We studied two aspects of sentence comprehension, namely syntactic and prosodic comprehension, in temporal lobe epilepsy patients who were candidates for resection of the anterior temporal lobe. Methods: Temporal lobe epilepsy patients (n = 32) with normal (left) language dominance were tested on syntactic and prosodic comprehension before and after removal of the anterior temporal cortex. The prosodic comprehension test was also compared with performance of healthy control subjects (n = 47) before surgery. Results: Overall, temporal lobe epilepsy patients did not differ from healthy controls in syntactic and prosodic comprehension before surgery. They did perform less well on an affective prosody task. Post-operative testing revealed that syntactic and prosodic comprehension did not change after removal of the anterior temporal cortex. Discussion: The unchanged performance on syntactic and prosodic comprehension after removal of the anterior temporal cortex suggests that this area is not indispensable for sentence comprehension functions in temporal epilepsy patients. Potential implications for the postulated role of the anterior temporal lobe in the healthy brain are discussed.
  • Ladd, D. R., Dediu, D., & Kinsella, A. R. (2008). Reply to Bowles (2008). Biolinguistics, 2(2), 256-259.
  • De Lange, F. P., Koers, A., Kalkman, J. S., Bleijenberg, G., Hagoort, P., Van der Meer, J. W. M., & Toni, I. (2008). Increase in prefrontal cortical volume following cognitive behavioural therapy in patients with chronic fatigue syndrome. Brain, 131, 2172-2180. doi:10.1093/brain/awn140.

    Abstract

    Chronic fatigue syndrome (CFS) is a disabling disorder, characterized by persistent or relapsing fatigue. Recent studies have detected a decrease in cortical grey matter volume in patients with CFS, but it is unclear whether this cerebral atrophy constitutes a cause or a consequence of the disease. Cognitive behavioural therapy (CBT) is an effective behavioural intervention for CFS, which combines a rehabilitative approach of a graded increase in physical activity with a psychological approach that addresses thoughts and beliefs about CFS which may impair recovery. Here, we test the hypothesis that cerebral atrophy may be a reversible state that can ameliorate with successful CBT. We have quantified cerebral structural changes in 22 CFS patients that underwent CBT and 22 healthy control participants. At baseline, CFS patients had significantly lower grey matter volume than healthy control participants. CBT intervention led to a significant improvement in health status, physical activity and cognitive performance. Crucially, CFS patients showed a significant increase in grey matter volume, localized in the lateral prefrontal cortex. This change in cerebral volume was related to improvements in cognitive speed in the CFS patients. Our findings indicate that the cerebral atrophy associated with CFS is partially reversed after effective CBT. This result provides an example of macroscopic cortical plasticity in the adult human brain, demonstrating a surprisingly dynamic relation between behavioural state and cerebral anatomy. Furthermore, our results reveal a possible neurobiological substrate of psychotherapeutic treatment.
  • Nieuwland, M. S., & Van Berkum, J. J. A. (2008). The neurocognition of referential ambiguity in language comprehension. Language and Linguistics Compass, 2(4), 603-630. doi:10.1111/j.1749-818x.2008.00070.x.

    Abstract

    Referential ambiguity arises whenever readers or listeners are unable to select a unique referent for a linguistic expression out of multiple candidates. In the current article, we review a series of neurocognitive experiments from our laboratory that examine the neural correlates of referential ambiguity, and that employ the brain signature of referential ambiguity to derive functional properties of the language comprehension system. The results of our experiments converge to show that referential ambiguity resolution involves making an inference to evaluate the referential candidates. These inferences only take place when both referential candidates are, at least initially, equally plausible antecedents. Whether comprehenders make these anaphoric inferences is strongly context dependent and co-determined by characteristics of the reader. In addition, readers appear to disregard referential ambiguity when the competing candidates are each semantically incoherent, suggesting that, under certain circumstances, semantic analysis can proceed even when referential analysis has not yielded a unique antecedent. Finally, results from a functional neuroimaging study suggest that whereas the neural systems that deal with referential ambiguity partially overlap with those that deal with referential failure, they show an inverse coupling with the neural systems associated with semantic processing, possibly reflecting the relative contributions of semantic and episodic processing to re-establish semantic and referential coherence, respectively.
  • Nieuwland, M. S., & Van Berkum, J. J. A. (2008). The interplay between semantic and referential aspects of anaphoric noun phrase resolution: Evidence from ERPs. Brain & Language, 106, 119-131. doi:10.1016/j.bandl.2008.05.001.

    Abstract

    In this event-related brain potential (ERP) study, we examined how semantic and referential aspects of anaphoric noun phrase resolution interact during discourse comprehension. We used a full factorial design that crossed referential ambiguity with semantic incoherence. Ambiguous anaphors elicited a sustained negative shift (Nref effect), and incoherent anaphors elicited an N400 effect. Simultaneously ambiguous and incoherent anaphors elicited an ERP pattern resembling that of the incoherent anaphors. These results suggest that semantic incoherence can preclude readers from engaging in anaphoric inferencing. Furthermore, approximately half of our participants unexpectedly showed common late positive effects to the three types of problematic anaphors. We relate the latter finding to recent accounts of what the P600 might reflect, and to the role of individual differences therein.
  • Otten, M., & Van Berkum, J. J. A. (2008). Discourse-based word anticipation during language processing: Prediction or priming? Discourse Processes, 45, 464-496. doi:10.1080/01638530802356463.

    Abstract

    Language is an intrinsically open-ended system. This fact has led to the widely shared assumption that readers and listeners do not predict upcoming words, at least not in a way that goes beyond simple priming between words. Recent evidence, however, suggests that readers and listeners do anticipate upcoming words “on the fly” as a text unfolds. In 2 event-related potentials experiments, this study examined whether these predictions are based on the exact message conveyed by the prior discourse or on simpler word-based priming mechanisms. Participants read texts that strongly supported the prediction of a specific word, mixed with non-predictive control texts that contained the same prime words. In Experiment 1A, anomalous words that replaced a highly predictable (as opposed to a non-predictable but coherent) word elicited a long-lasting positive shift, suggesting that the prior discourse had indeed led people to predict specific words. In Experiment 1B, adjectives whose suffix mismatched the predictable noun's syntactic gender elicited a short-lived late negativity in predictive stories but not in prime control stories. Taken together, these findings reveal that the conceptual basis for predicting specific upcoming words during reading is the exact message conveyed by the discourse and not the mere presence of prime words.
  • Ozyurek, A., Kita, S., Allen, S., Brown, A., Furman, R., & Ishizuka, T. (2008). Development of cross-linguistic variation in speech and gesture: motion events in English and Turkish. Developmental Psychology, 44(4), 1040-1054. doi:10.1037/0012-1649.44.4.1040.

    Abstract

    The way adults express manner and path components of a motion event varies across typologically different languages both in speech and cospeech gestures, showing that language specificity in event encoding influences gesture. The authors tracked when and how this multimodal cross-linguistic variation develops in children learning Turkish and English, 2 typologically distinct languages. They found that children learn to speak in language-specific ways from age 3 onward (i.e., English speakers used 1 clause and Turkish speakers used 2 clauses to express manner and path). In contrast, English- and Turkish-speaking children’s gestures looked similar at ages 3 and 5 (i.e., separate gestures for manner and path), differing from each other only at age 9 and in adulthood (i.e., English speakers used 1 gesture, but Turkish speakers used separate gestures for manner and path). The authors argue that this pattern of the development of cospeech gestures reflects a gradual shift to language-specific representations during speaking and shows that looking at speech alone may not be sufficient to understand the full process of language acquisition.
  • Patel, A. D., Iversen, J. R., Wassenaar, M., & Hagoort, P. (2008). Musical syntactic processing in agrammatic Broca's aphasia. Aphasiology, 22(7/8), 776-789. doi:10.1080/02687030701803804.

    Abstract

    Background: Growing evidence for overlap in the syntactic processing of language and music in non-brain-damaged individuals leads to the question of whether aphasic individuals with grammatical comprehension problems in language also have problems processing structural relations in music.

    Aims: The current study sought to test musical syntactic processing in individuals with Broca's aphasia and grammatical comprehension deficits, using both explicit and implicit tasks.

    Methods & Procedures: Two experiments were conducted. In the first experiment 12 individuals with Broca's aphasia (and 14 matched controls) were tested for their sensitivity to grammatical and semantic relations in sentences, and for their sensitivity to musical syntactic (harmonic) relations in chord sequences. An explicit task (acceptability judgement of novel sequences) was used. The second experiment, with 9 individuals with Broca's aphasia (and 12 matched controls), probed musical syntactic processing using an implicit task (harmonic priming).

    Outcomes & Results: In both experiments the aphasic group showed impaired processing of musical syntactic relations. Control experiments indicated that this could not be attributed to low-level problems with the perception of pitch patterns or with auditory short-term memory for tones.

    Conclusions: The results suggest that musical syntactic processing in agrammatic aphasia deserves systematic investigation, and that such studies could help probe the nature of the processing deficits underlying linguistic agrammatism. Methodological suggestions are offered for future work in this little-explored area.
  • Scheeringa, R., Bastiaansen, M. C. M., Petersson, K. M., Oostenveld, R., Norris, D. G., & Hagoort, P. (2008). Frontal theta EEG activity correlates negatively with the default mode network in resting state. International Journal of Psychophysiology, 67, 242-251. doi:10.1016/j.ijpsycho.2007.05.017.

    Abstract

    We used simultaneously recorded EEG and fMRI to investigate in which areas the BOLD signal correlates with frontal theta power changes, while subjects were lying quietly at rest in the scanner with their eyes open. To obtain a reliable estimate of frontal theta power we applied ICA on band-pass filtered (2–9 Hz) EEG data. For each subject we selected the component that best matched the mid-frontal scalp topography associated with the frontal theta rhythm. We applied a time-frequency analysis on this component and used the time course of the frequency bin with the highest overall power to form a regressor that modeled spontaneous fluctuations in frontal theta power. No significant positive BOLD correlations with this regressor were observed. Extensive negative correlations were observed in the areas that together form the default mode network. We conclude that frontal theta activity can be seen as an EEG index of default mode network activity.
  • Toni, I., De Lange, F. P., Noordzij, M. L., & Hagoort, P. (2008). Language beyond action. Journal of Physiology-Paris, 102, 71-79. doi:10.1016/j.jphysparis.2008.03.005.

    Abstract

    The discovery of mirror neurons in macaques and of a similar system in humans has provided a new and fertile neurobiological ground for rooting a variety of cognitive faculties. Automatic sensorimotor resonance has been invoked as the key elementary process accounting for disparate (dys)functions, like imitation, ideomotor apraxia, autism, and schizophrenia. In this paper, we provide a critical appraisal of three of these claims that deal with the relationship between language and the motor system. Does language comprehension require the motor system? Was there an evolutionary switch from manual gestures to speech as the primary mode of language? Is human communication explained by automatic sensorimotor resonances? A positive answer to these questions would open the tantalizing possibility of bringing language and human communication within the fold of the motor system. We argue that the available empirical evidence does not appear to support these claims, and their theoretical scope fails to account for some crucial features of the phenomena they are supposed to explain. Without denying the enormous importance of the discovery of mirror neurons, we highlight the limits of their explanatory power for understanding language and communication.
  • Uddén, J., Folia, V., Forkstam, C., Ingvar, M., Fernández, G., Overeem, S., Van Elswijk, G., Hagoort, P., & Petersson, K. M. (2008). The inferior frontal cortex in artificial syntax processing: An rTMS study. Brain Research, 1224, 69-78. doi:10.1016/j.brainres.2008.05.070.

    Abstract

    The human capacity to implicitly acquire knowledge of structured sequences has recently been investigated in artificial grammar learning using functional magnetic resonance imaging. It was found that the left inferior frontal cortex (IFC; Brodmann's area (BA) 44/45) was related to classification performance. The objective of this study was to investigate whether the IFC (BA 44/45) is causally related to classification of artificial syntactic structures by means of an off-line repetitive transcranial magnetic stimulation (rTMS) paradigm. We manipulated the stimulus material in a 2 × 2 factorial design with grammaticality status and local substring familiarity as factors. The participants showed a reliable effect of grammaticality on classification of novel items after 5 days of exposure to grammatical exemplars without performance feedback in an implicit acquisition task. The results show that rTMS of BA 44/45 improves syntactic classification performance by increasing the rejection rate of non-grammatical items and by shortening reaction times of correct rejections specifically after left-sided stimulation. A similar pattern of results is observed in fMRI experiments on artificial syntactic classification. These results suggest that activity in the inferior frontal region is causally related to artificial syntax processing.
  • Van Berkum, J. J. A., Van den Brink, D., Tesink, C. M. J. Y., Kos, M., & Hagoort, P. (2008). The neural integration of speaker and message. Journal of Cognitive Neuroscience, 20(4), 580-591. doi:10.1162/jocn.2008.20054.

    Abstract

    When do listeners take into account who the speaker is? We asked people to listen to utterances whose content sometimes did not match inferences based on the identity of the speaker (e.g., “If only I looked like Britney Spears” in a male voice, or “I have a large tattoo on my back” spoken with an upper-class accent). Event-related brain responses revealed that the speaker's identity is taken into account as early as 200–300 msec after the beginning of a spoken word, and is processed by the same early interpretation mechanism that constructs sentence meaning based on just the words. This finding is difficult to reconcile with standard “Gricean” models of sentence interpretation in which comprehenders initially compute a local, context-independent meaning for the sentence (“semantics”) before working out what it really means given the wider communicative context and the particular speaker (“pragmatics”). Because the observed brain response hinges on voice-based and usually stereotype-dependent inferences about the speaker, it also shows that listeners rapidly classify speakers on the basis of their voices and bring the associated social stereotypes to bear on what is being said. According to our event-related potential results, language comprehension takes very rapid account of the social context, and the construction of meaning based on language alone cannot be separated from the social aspects of language use. The linguistic brain relates the message to the speaker immediately.
  • Van Berkum, J. J. A. (2008). Understanding sentences in context: What brain waves can tell us. Current Directions in Psychological Science, 17(6), 376-380. doi:10.1111/j.1467-8721.2008.00609.x.

    Abstract

    Language comprehension looks pretty easy. You pick up a novel and simply enjoy the plot, or ponder the human condition. You strike up a conversation and listen to whatever the other person has to say. Although what you're taking in is a bunch of letters and sounds, what you really perceive—if all goes well—is meaning. But how do you get from one to the other so easily? The experiments with brain waves (event-related brain potentials or ERPs) reviewed here show that the linguistic brain rapidly draws upon a wide variety of information sources, including prior text and inferences about the speaker. Furthermore, people anticipate what might be said about whom, they use heuristics to arrive at the earliest possible interpretation, and if it makes sense, they sometimes even ignore the grammar. Language comprehension is opportunistic, proactive, and, above all, immediately context-dependent.
  • Van Heuven, W. J. B., Schriefers, H., Dijkstra, T., & Hagoort, P. (2008). Language conflict in the bilingual brain. Cerebral Cortex, 18(11), 2706-2716. doi:10.1093/cercor/bhn030.

    Abstract

    The large majority of humankind is more or less fluent in 2 or even more languages. This raises the fundamental question of how the language network in the brain is organized such that the correct target language is selected on a particular occasion. Here we present behavioral and functional magnetic resonance imaging data showing that bilingual processing leads to language conflict in the bilingual brain even when the bilinguals' task required only target language knowledge. This finding demonstrates that the bilingual brain cannot avoid language conflict, because words from the target and nontarget languages become automatically activated during reading. Importantly, stimulus-based language conflict was found in left inferior prefrontal cortex (LIPC) regions associated with phonological and semantic processing, whereas response-based language conflict was only found in the pre-supplementary motor area/anterior cingulate cortex when language conflict led to response conflicts.
  • Willems, R. M., Ozyurek, A., & Hagoort, P. (2008). Seeing and hearing meaning: ERP and fMRI evidence of word versus picture integration into a sentence context. Journal of Cognitive Neuroscience, 20, 1235-1249. doi:10.1162/jocn.2008.20085.

    Abstract

    Understanding language always occurs within a situational context and, therefore, often implies combining streams of information from different domains and modalities. One such combination is that of spoken language and visual information, which are perceived together in a variety of ways during everyday communication. Here we investigate whether and how words and pictures differ in terms of their neural correlates when they are integrated into a previously built-up sentence context. This is assessed in two experiments looking at the time course (measuring event-related potentials, ERPs) and the locus (using functional magnetic resonance imaging, fMRI) of this integration process. We manipulated the ease of semantic integration of word and/or picture to a previous sentence context to increase the semantic load of processing. In the ERP study, an increased semantic load led to an N400 effect which was similar for pictures and words in terms of latency and amplitude. In the fMRI study, we found overlapping activations to both picture and word integration in the left inferior frontal cortex. Specific activations for the integration of a word were observed in the left superior temporal cortex. We conclude that despite obvious differences in representational format, semantic information coming from pictures and words is integrated into a sentence context in similar ways in the brain. This study adds to the growing insight that the language system incorporates (semantic) information coming from linguistic and extralinguistic domains with the same neural time course and by recruitment of overlapping brain areas.
  • Willems, R. M., Oostenveld, R., & Hagoort, P. (2008). Early decreases in alpha and gamma band power distinguish linguistic from visual information during spoken sentence comprehension. Brain Research, 1219, 78-90. doi:10.1016/j.brainres.2008.04.065.

    Abstract

    Language is often perceived together with visual information. This raises the question of how the brain integrates information conveyed in visual and/or linguistic format during spoken language comprehension. In this study we investigated the dynamics of semantic integration of visual and linguistic information by means of time-frequency analysis of the EEG signal. A modified version of the N400 paradigm with either a word or a picture of an object being semantically incongruous with respect to the preceding sentence context was employed. Event-Related Potential (ERP) analysis showed qualitatively similar N400 effects for integration of either word or picture. Time-frequency analysis revealed early specific decreases in alpha and gamma band power for linguistic and visual information respectively. We argue that these reflect a rapid context-based analysis of acoustic (word) or visual (picture) form information. We conclude that although full semantic integration of linguistic and visual information occurs through a common mechanism, early differences in oscillations in specific frequency bands reflect the format of the incoming information and, importantly, an early context-based detection of its congruity with respect to the preceding language context.
  • Li, X., Yang, Y., & Hagoort, P. (2008). Pitch accent and lexical tone processing in Chinese discourse comprehension: An ERP study. Brain Research, 1222, 192-200. doi:10.1016/j.brainres.2008.05.031.

    Abstract

    In the present study, event-related brain potentials (ERP) were recorded to investigate the role of pitch accent and lexical tone in spoken discourse comprehension. Chinese was used as material to explore the potential difference in the nature and time course of brain responses to sentence meaning as indicated by pitch accent and to lexical meaning as indicated by tone. In both cases, the pitch contour of critical words was varied. The results showed that both inconsistent pitch accent and inconsistent lexical tone yielded N400 effects, and there was no interaction between them. The negativity evoked by inconsistent pitch accent had the same topography as that evoked by inconsistent lexical tone violation, with a maximum over central–parietal electrodes. Furthermore, the effect for the combined violations was the sum of effects for pure pitch accent and pure lexical tone violation. However, the effect for the lexical tone violation appeared approximately 90 ms earlier than the effect of the pitch accent violation. It is suggested that there might be a correspondence between the neural mechanisms underlying pitch accent and lexical meaning processing in context. They both reflect the integration of the current information into a discourse context, independent of whether the current information was sentence meaning indicated by accentuation, or lexical meaning indicated by tone. In addition, lexical meaning was processed earlier than sentence meaning conveyed by pitch accent during spoken language processing.
  • Allen, S., Ozyurek, A., Kita, S., Brown, A., Furman, R., Ishizuka, T., & Fujii, M. (2007). Language-specific and universal influences in children's syntactic packaging of manner and path: A comparison of English, Japanese, and Turkish. Cognition, 102, 16-48. doi:10.1016/j.cognition.2005.12.006.

    Abstract

    Different languages map semantic elements of spatial relations onto different lexical and syntactic units. These crosslinguistic differences raise important questions for language development in terms of how this variation is learned by children. We investigated how Turkish-, English-, and Japanese-speaking children (mean age 3;8) package the semantic elements of Manner and Path onto syntactic units when both the Manner and the Path of the moving Figure occur simultaneously and are salient in the event depicted. Both universal and language-specific patterns were evident in our data. Children used the semantic-syntactic mappings preferred by adult speakers of their own languages, and even expressed subtle syntactic differences that encode different relations between Manner and Path in the same way as their adult counterparts (i.e., Manner causing vs. incidental to Path). However, not all types of semantics-syntax mappings were easy for children to learn (e.g., expressing Manner and Path elements in two verbal clauses). In such cases, Turkish- and Japanese-speaking children frequently used syntactic patterns that were not typical in the target language but were similar to patterns used by English-speaking children, suggesting some universal influence. Thus, both language-specific and universal tendencies guide the development of complex spatial expressions.
  • Bramão, I., Mendonça, A., Faísca, L., Ingvar, M., Petersson, K. M., & Reis, A. (2007). The impact of reading and writing skills on a visuo-motor integration task: A comparison between illiterate and literate subjects. Journal of the International Neuropsychological Society, 13(2), 359-364. doi:10.1017/S1355617707070440.

    Abstract

    Previous studies have shown a significant association between reading skills and the performance on visuo-motor tasks. In order to clarify whether reading and writing skills modulate non-linguistic domains, we investigated the performance of two literacy groups on a visuo-motor integration task with non-linguistic stimuli. Twenty-one illiterate participants and twenty matched literate controls were included in the experiment. Subjects were instructed to use the right or the left index finger to point to and touch a randomly presented target on the right or left side of a touch screen. The results showed that the literate subjects were significantly faster in detecting and touching targets on the left compared to the right side of the screen. In contrast, the presentation side did not affect the performance of the illiterate group. These results lend support to the idea that having acquired reading and writing skills, and thus a preferred left-to-right reading direction, influences visual scanning.
  • Furman, R., & Ozyurek, A. (2007). Development of interactional discourse markers: Insights from Turkish children's and adults' narratives. Journal of Pragmatics, 39(10), 1742-1757. doi:10.1016/j.pragma.2007.01.008.

    Abstract

    Discourse markers (DMs) are linguistic elements that index different relations and coherence between units of talk (Schiffrin, Deborah, 1987. Discourse Markers. Cambridge University Press, Cambridge). Most research on the development of these forms has focused on conversations rather than narratives and furthermore has not directly compared children's use of DMs to adult usage. This study examines the development of three DMs (şey ‘uuhh’, yani ‘I mean’, işte ‘y’know’) that mark interactional levels of discourse in oral Turkish narratives in 60 Turkish children (3-, 5- and 9-year-olds) and 20 Turkish-speaking adults. The results show that the frequency and functions of DMs change with age. Children learn şey, which mainly marks exchange level structures, earliest. However, yani and işte have multi-functions such as marking both information states and participation frameworks and are consequently learned later. Children also use DMs with different functions than adults. Overall, the results show that learning to use interactional DMs in narratives is complex and goes beyond age 9, especially for multi-functional DMs that index an interplay of discourse coherence at different levels.
  • Gisselgard, J., Uddén, J., Ingvar, M., & Petersson, K. M. (2007). Disruption of order information by irrelevant items: A serial recognition paradigm. Acta Psychologica, 124(3), 356-369. doi:10.1016/j.actpsy.2006.04.002.

    Abstract

    The irrelevant speech effect (ISE) is defined as a decrement in visually presented digit-list short-term memory performance due to exposure to irrelevant auditory material. Perhaps the most successful theoretical explanation of the effect is the changing state hypothesis. This hypothesis explains the effect in terms of confusion between amodal serial order cues, and represents a view based on the interference caused by the processing of similar order information of the visual and auditory materials. An alternative view suggests that the interference occurs as a consequence of the similarity between the visual and auditory contents of the stimuli. An important argument for the former view is the observation that the ISE is almost exclusively observed in tasks that require memory for serial order. However, most short-term memory tasks require that both item and order information be retained in memory. An ideal task to investigate the sensitivity of maintenance of serial order to irrelevant speech would be one that calls upon order information but not item information. One task that is particularly suited to address this issue is serial recognition. In a typical serial recognition task, a list of items is presented and then probed by the same list in which the order of two adjacent items has been transposed. Due to the re-presentation of the encoding string, serial recognition primarily requires the serial order to be maintained, while the content of the presented items is deemphasized. In demonstrating a highly significant ISE of changing versus steady-state auditory items in a serial recognition task, the present finding lends support to and extends previous empirical findings suggesting that irrelevant speech has the potential to interfere with the coding of the order of the items to be memorized.
  • Hagoort, P., & Van Berkum, J. J. A. (2007). Beyond the sentence given. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 362, 801-811.

    Abstract

    A central and influential idea among researchers of language is that our language faculty is organized according to Fregean compositionality, which states that the meaning of an utterance is a function of the meaning of its parts and of the syntactic rules by which these parts are combined. Since the domain of syntactic rules is the sentence, the implication of this idea is that language interpretation takes place in a two-step fashion. First, the meaning of a sentence is computed. In a second step, the sentence meaning is integrated with information from prior discourse, world knowledge, information about the speaker and semantic information from extra-linguistic domains such as co-speech gestures or the visual world. Here, we present results from recordings of event-related brain potentials that are inconsistent with this classical two-step model of language interpretation. Our data support a one-step model in which knowledge about the context and the world, concomitant information from other modalities, and the speaker are brought to bear immediately, by the same fast-acting brain system that combines the meanings of individual words into a message-level representation. Underlying the one-step model is the immediacy assumption, according to which all available information will immediately be used to co-determine the interpretation of the speaker's message. Functional magnetic resonance imaging data that we collected indicate that Broca's area plays an important role in semantic unification. Language comprehension involves the rapid incorporation of information in a 'single unification space', coming from a broader range of cognitive domains than presupposed in the standard two-step model of interpretation.
  • Hald, L. A., Steenbeek-Planting, E. G., & Hagoort, P. (2007). The interaction of discourse context and world knowledge in online sentence comprehension: Evidence from the N400. Brain Research, 1146, 210-218. doi:10.1016/j.brainres.2007.02.054.

    Abstract

    In an ERP experiment we investigated how the recruitment and integration of world knowledge information relate to the integration of information within a current discourse context. Participants were presented with short discourse contexts which were followed by a sentence that contained a critical word that was correct or incorrect based on general world knowledge and the supporting discourse context, or was more or less acceptable based on the combination of general world knowledge and the specific local discourse context. Relative to the critical word in the correct world knowledge sentences following a neutral discourse, all other critical words elicited an N400 effect that began at about 300 ms after word onset. However, the magnitude of the N400 effect varied in a way that suggests an interaction between world knowledge and discourse context. The results indicate that both world knowledge and discourse context have an effect on sentence interpretation, but neither overrides the other.
  • Janzen, G., Wagensveld, B., & Van Turennout, M. (2007). Neural representation of navigational relevance is rapidly induced and long lasting. Cerebral Cortex, 17(4), 975-981. doi:10.1093/cercor/bhl008.

    Abstract

    Successful navigation is facilitated by the presence of landmarks. Previous functional magnetic resonance imaging (fMRI) evidence indicated that the human parahippocampal gyrus automatically distinguishes between landmarks placed at navigationally relevant (decision points) and irrelevant locations (nondecision points). This storage of navigational relevance can provide a neural mechanism underlying successful navigation. However, an efficient wayfinding mechanism requires that important spatial information is learned quickly and maintained over time. The present study investigates whether the representation of navigational relevance is modulated by time and practice. Participants learned 2 film sequences through virtual mazes containing objects at decision and at nondecision points. One maze was shown one time, and the other maze was shown 3 times. Twenty-four hours after study, event-related fMRI data were acquired during recognition of the objects. The results showed that activity in the parahippocampal gyrus was increased for objects previously placed at decision points as compared with objects placed at nondecision points. The decision point effect was not modulated by the number of exposures to the mazes and independent of explicit memory functions. These findings suggest a persistent representation of navigationally relevant information, which is stable after only one exposure to an environment. These rapidly induced and long-lasting changes in object representation provide a basis for successful wayfinding.
  • Janzen, G., & Weststeijn, C. G. (2007). Neural representation of object location and route direction: An event-related fMRI study. Brain Research, 1165, 116-125. doi:10.1016/j.brainres.2007.05.074.

    Abstract

    The human brain distinguishes between landmarks placed at navigationally relevant and irrelevant locations. However, to provide a successful wayfinding mechanism, not only landmarks but also the routes between them need to be stored. We examined the neural representation of a memory for route direction and a memory for relevant landmarks. Healthy human adults viewed objects along a route through a virtual maze. Event-related functional magnetic resonance imaging (fMRI) data were acquired during a subsequent subliminal priming recognition task. Prime-objects either preceded or succeeded a target-object on a previously learned route. Our results provide evidence that the parahippocampal gyri distinguish between relevant and irrelevant landmarks, whereas the inferior parietal gyrus, the anterior cingulate gyrus, and the right caudate nucleus are involved in the coding of route direction. These data show that separate memory systems store different spatial information: a memory for navigationally relevant object information and a memory for route direction.
  • Kelly, S. D., & Ozyurek, A. (Eds.). (2007). Gesture, language, and brain [Special Issue]. Brain and Language, 101(3).
  • Kita, S., Ozyurek, A., Allen, S., Brown, A., Furman, R., & Ishizuka, T. (2007). Relations between syntactic encoding and co-speech gestures: Implications for a model of speech and gesture production. Language and Cognitive Processes, 22(8), 1212-1236. doi:10.1080/01690960701461426.

    Abstract

    Gestures that accompany speech are known to be tightly coupled with speech production. However, little is known about the cognitive processes that underlie this link. Previous cross-linguistic research has provided preliminary evidence for online interaction between the two systems based on the systematic co-variation found between how different languages syntactically package Manner and Path information of a motion event and how gestures represent Manner and Path. Here we elaborate on this finding by testing whether speakers within the same language gesturally express Manner and Path differently according to their online choice of syntactic packaging of Manner and Path, or whether gestural expression is pre-determined by a habitual conceptual schema congruent with the linguistic typology. Typologically congruent and incongruent syntactic structures for expressing Manner and Path (i.e., in a single clause or multiple clauses) were elicited from English speakers. We found that gestural expressions were determined by the online choice of syntactic packaging rather than by a habitual conceptual schema. It is therefore concluded that speech and gesture production processes interface online at the conceptual planning phase. Implications of the findings for models of speech and gesture production are discussed.
  • Marklund, P., Fransson, P., Cabeza, R., Petersson, K. M., Ingvar, M., & Nyberg, L. (2007). Sustained and transient neural modulations in prefrontal cortex related to declarative long-term memory, working memory, and attention. Cortex, 43(1), 22-37. doi:10.1016/S0010-9452(08)70443-X.

    Abstract

    Common activations in prefrontal cortex (PFC) during episodic and semantic long-term memory (LTM) tasks have been hypothesized to reflect functional overlap in terms of working memory (WM) and cognitive control. To evaluate a WM account of LTM-general activations, the present study took into consideration that cognitive task performance depends on the dynamic operation of multiple component processes, some of which are stimulus-synchronous and transient in nature; and some that are engaged throughout a task in a sustained fashion. PFC and WM may be implicated in both of these temporally independent components. To elucidate these possibilities we employed mixed blocked/event-related functional magnetic resonance imaging (fMRI) procedures to assess the extent to which sustained or transient activation patterns overlapped across tasks indexing episodic and semantic LTM, attention (ATT), and WM. Within PFC, ventrolateral and medial areas exhibited sustained activity across all tasks, whereas more anterior regions including right frontopolar cortex were commonly engaged in sustained processing during the three memory tasks. These findings do not support a WM account of sustained frontal responses during LTM tasks, but instead suggest that the pattern that was common to all tasks reflects general attentional set/vigilance, and that the shared WM-LTM pattern mediates control processes related to upholding task set. Transient responses during the three memory tasks were assessed relative to ATT to isolate item-specific mnemonic processes and were found to be largely distinct from sustained effects. Task-specific effects were observed for each memory task. In addition, a common item response for all memory tasks involved left dorsolateral PFC (DLPFC). The latter response might be seen as reflecting WM processes during LTM retrieval. Thus, our findings suggest that a WM account of shared PFC recruitment in LTM tasks holds for common transient item-related responses rather than sustained state-related responses that are better seen as reflecting more general attentional/control processes.
  • Menenti, L., & Burani, C. (2007). What causes the effect of age of acquisition in lexical processing? Quarterly Journal of Experimental Psychology, 60(5), 652-660. doi:10.1080/17470210601100126.

    Abstract

    Three hypotheses for effects of age of acquisition (AoA) in lexical processing are compared: the cumulative frequency hypothesis (frequency and AoA both influence the number of encounters with a word, which influences processing speed), the semantic hypothesis (early-acquired words are processed faster because they are more central in the semantic network), and the neural network model (early-acquired words are processed faster because they are acquired when a network has maximum plasticity). In a regression study of lexical decision (LD) and semantic categorization (SC) in Italian and Dutch, contrary to the cumulative frequency hypothesis, AoA coefficients were larger than frequency coefficients, and, contrary to the semantic hypothesis, the effect of AoA was not larger in SC than in LD. The neural network model was supported.
  • Nieuwland, M. S., Petersson, K. M., & Van Berkum, J. J. A. (2007). On sense and reference: Examining the functional neuroanatomy of referential processing. NeuroImage, 37(3), 993-1004. doi:10.1016/j.neuroimage.2007.05.048.

    Abstract

    In an event-related fMRI study, we examined the cortical networks involved in establishing reference during language comprehension. We compared BOLD responses to sentences containing referentially ambiguous pronouns (e.g., “Ronald told Frank that he…”), referentially failing pronouns (e.g., “Rose told Emily that he…”) or coherent pronouns. Referential ambiguity selectively recruited medial prefrontal regions, suggesting that readers engaged in problem-solving to select a unique referent from the discourse model. Referential failure elicited activation increases in brain regions associated with morpho-syntactic processing, and, for those readers who took failing pronouns to refer to unmentioned entities, additional regions associated with elaborative inferencing were observed. The networks activated by these two referential problems did not overlap with the network activated by a standard semantic anomaly. Instead, we observed a double dissociation, in that the systems activated by semantic anomaly are deactivated by referential ambiguity, and vice versa. This inverse coupling may reflect the dynamic recruitment of semantic and episodic processing to resolve semantically or referentially problematic situations. More generally, our findings suggest that neurocognitive accounts of language comprehension need to address not just how we parse a sentence and combine individual word meanings, but also how we determine who's who and what's what during language comprehension.
  • Nieuwland, M. S., Otten, M., & Van Berkum, J. J. A. (2007). Who are you talking about? Tracking discourse-level referential processing with event-related brain potentials. Journal of Cognitive Neuroscience, 19(2), 228-236. doi:10.1162/jocn.2007.19.2.228.

    Abstract

    In this event-related brain potentials (ERPs) study, we explored whether referential ambiguity can be selectively tracked during spoken discourse comprehension. Earlier ERP research has shown that referentially ambiguous nouns (e.g., “the girl” in a two-girl context) elicit a frontal, sustained negative shift relative to unambiguous control words. In the current study, we examined whether this ERP effect reflects “deep” situation model ambiguity or “superficial” textbase ambiguity. We contrasted these different interpretations by investigating whether a discourse-level semantic manipulation that prevents referential ambiguity also averts the elicitation of a referentially induced ERP effect. We compared ERPs elicited by nouns that were referentially nonambiguous but were associated with two discourse entities (e.g., “the girl” with two girls introduced in the context, one of whom has died or left the scene), with referentially ambiguous and nonambiguous control words. Although the referentially ambiguous nouns elicited a frontal negative shift relative to control words, the “double bound” but referentially nonambiguous nouns did not. These results suggest that referential ambiguity can be selectively tracked with ERPs at the level that is most relevant to discourse comprehension, the situation model.
  • Otten, M., & Van Berkum, J. J. A. (2007). What makes a discourse constraining? Comparing the effects of discourse message and scenario fit on the discourse-dependent N400 effect. Brain Research, 1153, 166-177. doi:10.1016/j.brainres.2007.03.058.

    Abstract

    A discourse context provides a reader with a great deal of information that can provide constraints for further language processing, at several different levels. In this experiment we used event-related potentials (ERPs) to explore whether discourse-generated contextual constraints are based on the precise message of the discourse or, more 'loosely', on the scenario suggested by one or more content words in the text. Participants read constraining stories whose precise message rendered a particular word highly predictable ("The manager thought that the board of directors should assemble to discuss the issue. He planned a...[meeting]") as well as non-constraining control stories that were only biasing in virtue of the scenario suggested by some of the words ("The manager thought that the board of directors need not assemble to discuss the issue. He planned a..."). Coherent words that were inconsistent with the message-level expectation raised in a constraining discourse (e.g., "session" instead of "meeting") elicited a classic centroparietal N400 effect. However, when the same words were only inconsistent with the scenario loosely suggested by earlier words in the text, they elicited a different negativity around 400 ms, with a more anterior, left-lateralized maximum. The fact that the discourse-dependent N400 effect cannot be reduced to scenario-mediated priming reveals that it reflects the rapid use of precise message-level constraints in comprehension. At the same time, the left-lateralized negativity in non-constraining stories suggests that, at least in the absence of strong message-level constraints, scenario-mediated priming does also rapidly affect comprehension.
  • Otten, M., Nieuwland, M. S., & Van Berkum, J. J. A. (2007). Great expectations: Specific lexical anticipation influences the processing of spoken language. BMC Neuroscience, 8: 89. doi:10.1186/1471-2202-8-89.

    Abstract

    Background: Several studies have recently shown that people use contextual information to make predictions about the rest of a sentence or story as the text unfolds. Using event-related potentials (ERPs), we tested whether these on-line predictions are based on a message-based representation of the discourse or on simple automatic activation by individual words. Subjects heard short stories that were highly constraining for one specific noun, or stories that were not specifically predictive but contained the same prime words as the predictive stories. To test whether listeners make specific predictions, critical nouns were preceded by an adjective that was inflected according to, or in contrast with, the gender of the expected noun. Results: When the message of the preceding discourse was predictive, adjectives with an unexpected gender inflection evoked a negative deflection over right-frontal electrodes between 300 and 600 ms. This effect was not present in the prime control context, indicating that the prediction mismatch does not hinge on word-based priming but is based on the actual message of the discourse. Conclusions: When listening to a constraining discourse, people rapidly make very specific predictions about the remainder of the story as it unfolds. These predictions are not simply based on word-based automatic activation, but take into account the actual message of the discourse.
  • Ozyurek, A., Willems, R. M., Kita, S., & Hagoort, P. (2007). On-line integration of semantic information from speech and gesture: Insights from event-related brain potentials. Journal of Cognitive Neuroscience, 19(4), 605-616. doi:10.1162/jocn.2007.19.4.605.

    Abstract

    During language comprehension, listeners use the global semantic representation from the previous sentence or discourse context to immediately integrate the meaning of each upcoming word into the unfolding message-level representation. Here we investigate whether communicative gestures that often spontaneously co-occur with speech are processed in a similar fashion and integrated with previous sentence context in the same way as lexical meaning. Event-related potentials were measured while subjects listened to spoken sentences with a critical verb (e.g., knock), which was accompanied by an iconic co-speech gesture (i.e., KNOCK). Verbal and/or gestural semantic content matched or mismatched the content of the preceding part of the sentence. Despite the difference in the modality and in the specificity of meaning conveyed by spoken words and gestures, the latency, amplitude, and topographical distribution of both word and gesture mismatches are found to be similar, indicating that the brain integrates both types of information simultaneously. This provides evidence for the claim that neural processing in language comprehension involves the simultaneous incorporation of information coming from a broader domain of cognition than only verbal semantics. The neural evidence for similar integration of information from speech and gesture emphasizes the tight interconnection between speech and co-speech gestures.
  • Ozyurek, A., & Kelly, S. D. (2007). Gesture, language, and brain. Brain and Language, 101(3), 181-185. doi:10.1016/j.bandl.2007.03.006.
  • Petersson, K. M., Silva, C., Castro-Caldas, A., Ingvar, M., & Reis, A. (2007). Literacy: A cultural influence on functional left-right differences in the inferior parietal cortex. European Journal of Neuroscience, 26(3), 791-799. doi:10.1111/j.1460-9568.2007.05701.x.

    Abstract

    The current understanding of hemispheric interaction is limited. Functional hemispheric specialization is likely to depend on both genetic and environmental factors. In the present study we investigated the importance of one factor, literacy, for the functional lateralization in the inferior parietal cortex in two independent samples of literate and illiterate subjects. The results show that the illiterate group is consistently more right-lateralized than their literate controls. In contrast, the two groups showed a similar degree of left-right differences in early speech-related regions of the superior temporal cortex. These results provide evidence suggesting that a cultural factor, literacy, influences the functional hemispheric balance in reading and verbal working memory-related regions. In a third sample, we investigated grey and white matter with voxel-based morphometry. The results showed differences between literacy groups in white matter intensities related to the mid-body region of the corpus callosum and the inferior parietal and parietotemporal regions (literate > illiterate). There were no corresponding differences in the grey matter. This suggests that the influence of literacy on brain structure related to reading and verbal working memory affects large-scale brain connectivity more than grey matter per se.
  • Qin, S., Piekema, C., Petersson, K. M., Han, B., Luo, J., & Fernández, G. (2007). Probing the transformation of discontinuous associations into episodic memory: An event-related fMRI study. NeuroImage, 38(1), 212-222. doi:10.1016/j.neuroimage.2007.07.020.

    Abstract

    Using event-related functional magnetic resonance imaging, we identified brain regions involved in storing associations of events discontinuous in time into long-term memory. Participants were scanned while memorizing item-triplets including simultaneous and discontinuous associations. Subsequent memory tests showed that participants remembered both types of associations equally well. First, by constructing the contrast between the subsequent memory effects for discontinuous associations and simultaneous associations, we identified the left posterior parahippocampal region, dorsolateral prefrontal cortex, the basal ganglia, posterior midline structures, and the middle temporal gyrus as being specifically involved in transforming discontinuous associations into episodic memory. Second, we replicated the finding that the prefrontal cortex and the medial temporal lobe (MTL), especially the hippocampus, are involved in associative memory formation in general. Our findings provide evidence for distinct neural operations that support the binding and storage of discontinuous associations in memory. We suggest that top-down signals from the prefrontal cortex and MTL may trigger reactivation of the internal representation of the first event in posterior midline structures, thus allowing it to be associated with the second event. The dorsolateral prefrontal cortex together with the basal ganglia may support this encoding operation by executive and binding processes within working memory, and the posterior parahippocampal region may play a role in binding and memory formation.
  • Reis, A., Faísca, L., Mendonça, S., Ingvar, M., & Petersson, K. M. (2007). Semantic interference on a phonological task in illiterate subjects. Scandinavian Journal of Psychology, 48(1), 69-74. doi:10.1111/j.1467-9450.2006.00544.x.

    Abstract

    Previous research suggests that learning an alphabetic written language influences aspects of the auditory-verbal language system. In this study, we examined whether literacy influences the notion of words as phonological units independent of lexical semantics in literate and illiterate subjects. Subjects had to decide which item in a word- or pseudoword pair was phonologically longest. By manipulating the relationship between referent size and phonological length in three word conditions (congruent, neutral, and incongruent), we could examine to what extent subjects focused on form rather than meaning of the stimulus material. Moreover, the pseudoword condition allowed us to examine global phonological awareness independent of lexical semantics. The results showed that literate subjects performed significantly better than illiterate subjects in the neutral and incongruent word conditions as well as in the pseudoword condition. The illiterate group performed least well in the incongruent condition and significantly better in the pseudoword condition than in the neutral and incongruent word conditions. These results suggest that performance on phonological word-length comparisons is dependent on literacy. In addition, the results show that the illiterate participants are able to perceive and process phonological length, albeit less well than the literate subjects, when no semantic interference is present. In conclusion, the present results confirm and extend the finding that illiterate subjects are biased towards semantic-conceptual-pragmatic types of cognitive processing.
  • Snijders, T. M., Kooijman, V., Cutler, A., & Hagoort, P. (2007). Neurophysiological evidence of delayed segmentation in a foreign language. Brain Research, 1178, 106-113. doi:10.1016/j.brainres.2007.07.080.

    Abstract

    Previous studies have shown that segmentation skills are language-specific, making it difficult to segment continuous speech in an unfamiliar language into its component words. Here we present the first study capturing the delay in segmentation and recognition in the foreign listener using ERPs. We compared the ability of Dutch adults and of English adults without knowledge of Dutch (‘foreign listeners’) to segment familiarized words from continuous Dutch speech. We used the known effect of repetition on the event-related potential (ERP) as an index of recognition of words in continuous speech. Our results show that word repetitions in isolation are recognized with equivalent facility by native and foreign listeners, but word repetitions in continuous speech are not. First, words familiarized in isolation are recognized faster by native than by foreign listeners when they are repeated in continuous speech. Second, when words that have previously been heard only in a continuous-speech context re-occur in continuous speech, the repetition is detected by native listeners, but is not detected by foreign listeners. A preceding speech context facilitates word recognition for native listeners, but delays or even inhibits word recognition for foreign listeners. We propose that the apparent difference in segmentation rate between native and foreign listeners is grounded in the difference in language-specific skills available to the listeners.
  • Takashima, A., Nieuwenhuis, I. L. C., Rijpkema, M., Petersson, K. M., Jensen, O., & Fernández, G. (2007). Memory trace stabilization leads to large-scale changes in the retrieval network: A functional MRI study on associative memory. Learning & Memory, 14, 472-479. doi:10.1101/lm.605607.

    Abstract

    Spaced learning with time to consolidate leads to more stable memory traces. However, little is known about the neural correlates of trace stabilization, especially in humans. The present fMRI study contrasted retrieval activity of two well-learned sets of face-location associations, one learned in a massed style and tested on the day of learning (i.e., labile condition) and another learned in a spaced scheme over the course of one week (i.e., stabilized condition). Both sets of associations were retrieved equally well, but the retrieval of stabilized associations was faster and accompanied by large-scale changes in the network supporting retrieval. Cued recall of stabilized as compared with labile associations was accompanied by increased activity in the precuneus, the ventromedial prefrontal cortex, the bilateral temporal pole, and the left temporo–parietal junction. Conversely, memory representational areas such as the fusiform gyrus for faces and the posterior parietal cortex for locations did not change their activity with stabilization. The changes in activation in the precuneus, which also showed increased connectivity with the fusiform area, are likely to be related to the spatial nature of our task. The activation increase in the ventromedial prefrontal cortex, on the other hand, might reflect a general function in stabilized memory retrieval. This area might succeed the hippocampus in linking distributed neocortical representations.
  • Tendolkar, I., Arnold, J., Petersson, K. M., Weis, S., Brockhaus-Dumke, A., Van Eijndhoven, P., Buitelaar, J., & Fernández, G. (2007). Probing the neural correlates of associative memory formation: A parametrically analyzed event-related functional MRI study. Brain Research, 1142, 159-168. doi:10.1016/j.brainres.2007.01.040.

    Abstract

    The medial temporal lobe (MTL) is crucial for declarative memory formation, but the function of its subcomponents in associative memory formation remains controversial. Most functional imaging studies on this topic are based on a stepwise approach comparing a condition with associative encoding to one without. Extending this approach, we additionally applied a parametric analysis by varying the amount of associative memory formation. We found a hippocampal subsequent memory effect of almost similar magnitude regardless of the amount of associations formed. By contrast, subsequent memory effects in rhinal and parahippocampal cortices were parametrically and positively modulated by the amount of associations formed. Our results indicate that the parahippocampal region supports associative memory formation as tested here and the hippocampus adds a general mnemonic operation. This pattern of results might suggest a new interpretation. Instead of having either a fixed division of labor between the hippocampus (associative memory formation) and the rhinal cortex (non-associative memory formation) or a functionally unitary MTL system, in which all substructures contribute to memory formation in a similar way, we propose that the location where associations are formed within the MTL depends on the kind of associations bound: If visual single-dimension associations, as used here, can already be integrated within the parahippocampal region, the hippocampus might add a general-purpose mnemonic operation only. In contrast, if associations have to be formed across widely distributed neocortical representations, the hippocampus may provide a binding operation in order to establish a coherent memory.
  • Van Berkum, J. J. A., Koornneef, A. W., Otten, M., & Nieuwland, M. S. (2007). Establishing reference in language comprehension: An electrophysiological perspective. Brain Research, 1146, 158-171. doi:10.1016/j.brainres.2006.06.091.

    Abstract

    The electrophysiology of language comprehension has long been dominated by research on syntactic and semantic integration. However, to understand expressions like "he did it" or "the little girl", combining word meanings in accordance with semantic and syntactic constraints is not enough--readers and listeners also need to work out what or who is being referred to. We review our event-related brain potential research on the processes involved in establishing reference, and present a new experiment in which we examine when and how the implicit causality associated with specific interpersonal verbs affects the interpretation of a referentially ambiguous pronoun. The evidence suggests that upon encountering a singular noun or pronoun, readers and listeners immediately inspect their situation model for a suitable discourse entity, such that they can discriminate between having too many, too few, or exactly the right number of referents within at most half a second. Furthermore, our implicit causality findings indicate that a fragment like "David praised Linda because..." can immediately foreground a particular referent, to the extent that a subsequent "he" is at least initially construed as a syntactic error. In all, our brain potential findings suggest that referential processing is highly incremental, and not necessarily contingent upon the syntax. In addition, they demonstrate that we can use ERPs to relatively selectively keep track of how readers and listeners establish reference.
  • Wassenaar, M., & Hagoort, P. (2007). Thematic role assignment in patients with Broca's aphasia: Sentence-picture matching electrified. Neuropsychologia, 45(4), 716-740. doi:10.1016/j.neuropsychologia.2006.08.016.

    Abstract

    An event-related brain potential experiment was carried out to investigate on-line thematic role assignment during sentence–picture matching in patients with Broca's aphasia. Subjects were presented with a picture that was followed by an auditory sentence. The sentence either matched the picture or mismatched the visual information depicted. Sentences differed in complexity, and ranged from simple active semantically irreversible sentences to passive semantically reversible sentences. ERPs were recorded while subjects were engaged in sentence–picture matching. In addition, reaction time and accuracy were measured. Three groups of subjects were tested: Broca patients (N = 10), non-aphasic patients with a right hemisphere (RH) lesion (N = 8), and healthy age-matched controls (N = 15). The results of this study showed that, in neurologically unimpaired individuals, thematic role assignment in the context of visual information was an immediate process. This is in contrast to patients with Broca's aphasia, who demonstrated no signs of on-line sensitivity to the picture–sentence mismatches. The syntactic contribution to the thematic role assignment process seemed to be diminished, given the reduction and even absence of P600 effects. Nevertheless, Broca patients showed some off-line behavioral sensitivity to the sentence–picture mismatches. The long response latencies of Broca's aphasics make it likely that off-line response strategies were used.
  • Willems, R. M., Ozyurek, A., & Hagoort, P. (2007). When language meets action: The neural integration of gesture and speech. Cerebral Cortex, 17(10), 2322-2333. doi:10.1093/cercor/bhl141.

    Abstract

    Although generally studied in isolation, language and action often co-occur in everyday life. Here we investigated one particular form of simultaneous language and action, namely speech and gestures that speakers use in everyday communication. In a functional magnetic resonance imaging study, we identified the neural networks involved in the integration of semantic information from speech and gestures. Verbal and/or gestural content could be integrated easily or less easily with the content of the preceding part of speech. Premotor areas involved in action observation (Brodmann area [BA] 6) were found to be specifically modulated by action information "mismatching" to a language context. Importantly, an increase in integration load of both verbal and gestural information into prior speech context activated Broca's area and adjacent cortex (BA 45/47). A classical language area, Broca's area, is not only recruited for language-internal processing but also when action observation is integrated with speech. These findings provide direct evidence that action and language processing share a high-level neural integration system.
  • Willems, R. M., & Hagoort, P. (2007). Neural evidence for the interplay between language, gesture, and action: A review. Brain and Language, 101(3), 278-289. doi:10.1016/j.bandl.2007.03.004.

    Abstract

    Co-speech gestures embody a form of manual action that is tightly coupled to the language system. As such, the co-occurrence of speech and co-speech gestures is an excellent example of the interplay between language and action. There are, however, other ways in which language and action can be thought of as closely related. In this paper we give an overview of studies in cognitive neuroscience that examine the neural underpinnings of links between language and action. Topics include neurocognitive studies of motor representations of speech sounds, action-related language, sign language, and co-speech gestures. It is concluded that there is strong evidence for interaction between speech and gestures in the brain. This interaction, however, shares general properties with other domains in which there is interplay between language and action.
