Peter Hagoort

Publications

  • Dai, B., McQueen, J. M., Hagoort, P., & Kösem, A. (2017). Pure linguistic interference during comprehension of competing speech signals. The Journal of the Acoustical Society of America, 141, EL249-EL254. doi:10.1121/1.4977590.

    Abstract

    Speech-in-speech perception can be challenging because the processing of competing acoustic and linguistic information leads to informational masking. Here, a method is proposed to isolate the linguistic component of informational masking while keeping the distractor's acoustic information unchanged. Participants performed a dichotic listening cocktail-party task before and after training on 4-band noise-vocoded sentences that became intelligible through the training. Distracting noise-vocoded speech interfered more with target speech comprehension after training (i.e., when intelligible) than before training (i.e., when unintelligible) at −3 dB SNR. These findings confirm that linguistic and acoustic information have distinct masking effects during speech-in-speech comprehension.
  • Franken, M. K., Eisner, F., Schoffelen, J.-M., Acheson, D. J., Hagoort, P., & McQueen, J. M. (2017). Audiovisual recalibration of vowel categories. In Proceedings of Interspeech 2017 (pp. 655-658). doi:10.21437/Interspeech.2017-122.

    Abstract

    One of the most daunting tasks of a listener is to map a continuous auditory stream onto known speech sound categories and lexical items. A major issue with this mapping problem is the variability in the acoustic realizations of sound categories, both within and across speakers. Past research has suggested that listeners may use visual information (e.g., lipreading) to calibrate these speech categories to the current speaker. Previous studies have focused on audiovisual recalibration of consonant categories. The present study explores whether vowel categorization, which is known to show less sharply defined category boundaries, also benefits from visual cues.
    Participants were exposed to videos of a speaker pronouncing one out of two vowels, paired with audio that was ambiguous between the two vowels. After exposure, it was found that participants had recalibrated their vowel categories. In addition, individual variability in audiovisual recalibration is discussed. It is suggested that listeners’ category sharpness may be related to the weight they assign to visual information in audiovisual speech perception. Specifically, listeners with less sharp categories assign more weight to visual information during audiovisual speech recognition.
  • Franken, M. K., Acheson, D. J., McQueen, J. M., Eisner, F., & Hagoort, P. (2017). Individual variability as a window on production-perception interactions in speech motor control. The Journal of the Acoustical Society of America, 142(4), 2007-2018. doi:10.1121/1.5006899.

    Abstract

    An important part of understanding speech motor control consists of capturing the interaction between speech production and speech perception. This study tests a prediction of theoretical frameworks that have tried to account for these interactions: if speech production targets are specified in auditory terms, individuals with better auditory acuity should have more precise speech targets, evidenced by decreased within-phoneme variability and increased between-phoneme distance. A study was carried out consisting of perception and production tasks in counterbalanced order. Auditory acuity was assessed using an adaptive speech discrimination task, while production variability was determined using a pseudo-word reading task. Analyses of the production data were carried out to quantify average within-phoneme variability as well as average between-phoneme contrasts. Results show that individuals not only vary in their production and perceptual abilities, but that better discriminators have more distinctive vowel production targets (that is, targets with less within-phoneme variability and greater between-phoneme distances), confirming the initial hypothesis. This association between speech production and perception did not depend on local phoneme density in vowel space. This study suggests that better auditory acuity leads to more precise speech production targets, which may be a consequence of auditory feedback affecting speech production over time.
  • Hagoort, P. (2017). It is the facts, stupid. In J. Brockman, F. Van der Wa, & H. Corver (Eds.), Wetenschappelijke parels: het belangrijkste wetenschappelijke nieuws volgens 193 'briljante geesten' [Scientific pearls: The most important scientific news according to 193 'brilliant minds']. Amsterdam: Maven Press.
  • Hagoort, P. (2017). Don't forget neurobiology: An experimental approach to linguistic representation. Commentary on Branigan and Pickering "An experimental approach to linguistic representation". Behavioral and Brain Sciences, 40: e292. doi:10.1017/S0140525X17000401.

    Abstract

    Acceptability judgments are no longer acceptable as the holy grail for testing the nature of linguistic representations. Experimental and quantitative methods should be used to test theoretical claims in psycholinguistics. These methods should include not only behavior, but also the more recent possibilities to probe the neural codes for language-relevant representations.
  • Hagoort, P. (2017). The core and beyond in the language-ready brain. Neuroscience and Biobehavioral Reviews, 81, 194-204. doi:10.1016/j.neubiorev.2017.01.048.

    Abstract

    In this paper a general cognitive architecture of spoken language processing is specified. This is followed by an account of how this cognitive architecture is instantiated in the human brain. Both the spatial aspects of the networks for language are discussed, as well as the temporal dynamics and the underlying neurophysiology. A distinction is proposed between networks for coding/decoding linguistic information and additional networks for getting from coded meaning to speaker meaning, i.e. for making the inferences that enable the listener to understand the intentions of the speaker.
  • Hagoort, P. (2017). The neural basis for primary and acquired language skills. In E. Segers, & P. Van den Broek (Eds.), Developmental Perspectives in Written Language and Literacy: In honor of Ludo Verhoeven (pp. 17-28). Amsterdam: Benjamins. doi:10.1075/z.206.02hag.

    Abstract

    Reading is a cultural invention that needs to recruit cortical infrastructure that was not designed for it (cultural recycling of cortical maps). In the case of reading both visual cortex and networks for speech processing are recruited. Here I discuss current views on the neurobiological underpinnings of spoken language that deviate in a number of ways from the classical Wernicke-Lichtheim-Geschwind model. More areas than Broca’s and Wernicke’s region are involved in language. Moreover, a division along the axis of language production and language comprehension does not seem to be warranted. Instead, for central aspects of language processing neural infrastructure is shared between production and comprehension. Arguments are presented in favor of a dynamic network view, in which the functionality of a region is co-determined by the network of regions in which it is embedded at particular moments in time. Finally, core regions of language processing need to interact with other networks (e.g. the attentional networks and the ToM network) to establish full functionality of language and communication. The consequences of this architecture for reading are discussed.
  • Hartung, F., Hagoort, P., & Willems, R. M. (2017). Readers select a comprehension mode independent of pronoun: Evidence from fMRI during narrative comprehension. Brain and Language, 170, 29-38. doi:10.1016/j.bandl.2017.03.007.

    Abstract

    Perspective is a crucial feature for communicating about events. Yet it is unclear how linguistically encoded perspective relates to cognitive perspective taking. Here, we tested the effect of perspective taking with short literary stories. Participants listened to stories with 1st or 3rd person pronouns referring to the protagonist, while undergoing fMRI. When comparing action events with 1st and 3rd person pronouns, we found no evidence for a neural dissociation depending on the pronoun. A split sample approach based on the self-reported experience of perspective taking revealed 3 comprehension preferences. One group showed a strong 1st person preference, another a strong 3rd person preference, while a third group engaged in 1st and 3rd person perspective taking simultaneously. Comparing brain activations of the groups revealed different neural networks. Our results suggest that comprehension is perspective dependent, but on the reader’s (situational) preference rather than on the perspective suggested by the text.
  • Hartung, F., Withers, P., Hagoort, P., & Willems, R. M. (2017). When fiction is just as real as fact: No differences in reading behavior between stories believed to be based on true or fictional events. Frontiers in Psychology, 8: 1618. doi:10.3389/fpsyg.2017.01618.

    Abstract

    Experiments have shown that compared to fictional texts, readers read factual texts faster and have better memory for described situations. Reading fictional texts on the other hand seems to improve memory for exact wordings and expressions. Most of these studies used a ‘newspaper’ versus ‘literature’ comparison. In the present study, we investigated the effect of readers’ expectations about whether information is true or fictional with a subtler manipulation, by labelling short stories as either based on true or on fictional events. In addition, we tested whether narrative perspective or individual preference in perspective taking affects reading true or fictional stories differently. In an online experiment, participants (final N=1742) read one story which was introduced as based on true events or as fictional (factor fictionality). The story could be narrated in either 1st or 3rd person perspective (factor perspective). We measured immersion in and appreciation of the story, perspective taking, as well as memory for events. We found no evidence that knowing a story is fictional or based on true events influences reading behavior or experiential aspects of reading. We suggest that it is not whether a story is true or fictional, but rather expectations towards certain reading situations (e.g., reading a newspaper or literature) that affect behavior by activating appropriate reading goals. Results further confirm that narrative perspective partially influences perspective taking and experiential aspects of reading.
  • Heyselaar, E., Hagoort, P., & Segaert, K. (2017). How social opinion influences syntactic processing – An investigation using virtual reality. PLoS One, 12(4): e0174405. doi:10.1371/journal.pone.0174405.
  • Heyselaar, E., Hagoort, P., & Segaert, K. (2017). In dialogue with an avatar, language behavior is identical to dialogue with a human partner. Behavior Research Methods, 49(1), 46-60. doi:10.3758/s13428-015-0688-7.

    Abstract

    The use of virtual reality (VR) as a methodological tool is becoming increasingly popular in behavioral research as its flexibility allows for a wide range of applications. This new method has not been as widely accepted in the field of psycholinguistics, however, possibly due to the assumption that language processing during human-computer interactions does not accurately reflect human-human interactions. Yet at the same time there is a growing need to study human-human language interactions in a tightly controlled context, which has not been possible using existing methods. VR, however, offers experimental control over parameters that cannot be (as finely) controlled in the real world. As such, in this study we aim to show that human-computer language interaction is comparable to human-human language interaction in virtual reality. In the current study we compare participants’ language behavior in a syntactic priming task with human versus computer partners: we used a human partner, a human-like avatar with human-like facial expressions and verbal behavior, and a computer-like avatar which had this humanness removed. As predicted, our study shows comparable priming effects between the human and human-like avatar suggesting that participants attributed human-like agency to the human-like avatar. Indeed, when interacting with the computer-like avatar, the priming effect was significantly decreased. This suggests that when interacting with a human-like avatar, sentence processing is comparable to interacting with a human partner. Our study therefore shows that VR is a valid platform for conducting language research and studying dialogue interactions in an ecologically valid manner.
  • Heyselaar, E., Segaert, K., Walvoort, S. J., Kessels, R. P., & Hagoort, P. (2017). The role of nondeclarative memory in the skill for language: Evidence from syntactic priming in patients with amnesia. Neuropsychologia, 101, 97-105. doi:10.1016/j.neuropsychologia.2017.04.033.

    Abstract

    Syntactic priming, the phenomenon in which participants adopt the linguistic behaviour of their partner, is widely used in psycholinguistics to investigate syntactic operations. Although the phenomenon of syntactic priming is well documented, the memory system that supports the retention of this syntactic information long enough to influence future utterances, is not as widely investigated. We aim to shed light on this issue by assessing patients with Korsakoff's amnesia on an active-passive syntactic priming task and compare their performance to controls matched in age, education, and premorbid intelligence. Patients with Korsakoff's syndrome display deficits in all subdomains of declarative memory, yet their nondeclarative memory remains intact, making them an ideal patient group to determine which memory system supports syntactic priming. In line with the hypothesis that syntactic priming relies on nondeclarative memory, the patient group shows strong priming tendencies (12.6% passive structure repetition). Our healthy control group did not show a priming tendency, presumably due to cognitive interference between declarative and nondeclarative memory. We discuss the results in relation to amnesia, aging, and compensatory mechanisms.
  • Peeters, D., Snijders, T. M., Hagoort, P., & Ozyurek, A. (2017). Linking language to the visual world: Neural correlates of comprehending verbal reference to objects through pointing and visual cues. Neuropsychologia, 95, 21-29. doi:10.1016/j.neuropsychologia.2016.12.004.

    Abstract

    In everyday communication speakers often refer in speech and/or gesture to objects in their immediate environment, thereby shifting their addressee's attention to an intended referent. The neurobiological infrastructure involved in the comprehension of such basic multimodal communicative acts remains unclear. In an event-related fMRI study, we presented participants with pictures of a speaker and two objects while they concurrently listened to her speech. In each picture, one of the objects was singled out, either through the speaker's index-finger pointing gesture or through a visual cue that made the object perceptually more salient in the absence of gesture. A mismatch (compared to a match) between speech and the object singled out by the speaker's pointing gesture led to enhanced activation in left IFG and bilateral pMTG, showing the importance of these areas in conceptual matching between speech and referent. Moreover, a match (compared to a mismatch) between speech and the object made salient through a visual cue led to enhanced activation in the mentalizing system, arguably reflecting an attempt to converge on a jointly attended referent in the absence of pointing. These findings shed new light on the neurobiological underpinnings of the core communicative process of comprehending a speaker's multimodal referential act and stress the power of pointing as an important natural device to link speech to objects.
  • Schoffelen, J.-M., Hulten, A., Lam, N. H. L., Marquand, A. F., Udden, J., & Hagoort, P. (2017). Frequency-specific directed interactions in the human brain network for language. Proceedings of the National Academy of Sciences of the United States of America, 114(30), 8083-8088. doi:10.1073/pnas.1703155114.

    Abstract

    The brain’s remarkable capacity for language requires bidirectional interactions between functionally specialized brain regions. We used magnetoencephalography to investigate interregional interactions in the brain network for language while 102 participants were reading sentences. Using Granger causality analysis, we identified inferior frontal cortex and anterior temporal regions to receive widespread input and middle temporal regions to send widespread output. This fits well with the notion that these regions play a central role in language processing. Characterization of the functional topology of this network, using data-driven matrix factorization, which allowed for partitioning into a set of subnetworks, revealed directed connections at distinct frequencies of interaction. Connections originating from temporal regions peaked at alpha frequency, whereas connections originating from frontal and parietal regions peaked at beta frequency. These findings indicate that the information flow between language-relevant brain areas, which is required for linguistic processing, may depend on the contributions of distinct brain rhythms.

  • Silva, S., Folia, V., Hagoort, P., & Petersson, K. M. (2017). The P600 in Implicit Artificial Grammar Learning. Cognitive Science, 41(1), 137-157. doi:10.1111/cogs.12343.

    Abstract

    The suitability of the Artificial Grammar Learning (AGL) paradigm to capture relevant aspects of the acquisition of linguistic structures has been empirically tested in a number of EEG studies. Some have shown a syntax-related P600 component, but it has not been ruled out that the AGL P600 effect is a response to surface features (e.g., subsequence familiarity) rather than the underlying syntax structure. Therefore, in this study, we controlled for the surface characteristics of the test sequences (associative chunk strength) and recorded the EEG before (baseline preference classification) and after (preference and grammaticality classification) exposure to a grammar. A typical, centroparietal P600 effect was elicited by grammatical violations after exposure, suggesting that the AGL P600 effect signals a response to structural irregularities. Moreover, preference and grammaticality classification showed a qualitatively similar ERP profile, strengthening the idea that the implicit structural mere exposure paradigm in combination with preference classification is a suitable alternative to the traditional grammaticality classification test.
  • Ye, Z., Stolk, A., Toni, I., & Hagoort, P. (2017). Oxytocin modulates semantic integration in speech comprehension. Journal of Cognitive Neuroscience, 29, 267-276. doi:10.1162/jocn_a_01044.

    Abstract

    Listeners interpret utterances by integrating information from multiple sources including word level semantics and world knowledge. When the semantics of an expression is inconsistent with his or her knowledge about the world, the listener may have to search through the conceptual space for alternative possible world scenarios that can make the expression more acceptable. Such cognitive exploration requires considerable computational resources and might depend on motivational factors. This study explores whether and how oxytocin, a neuropeptide known to influence social motivation by reducing social anxiety and enhancing affiliative tendencies, can modulate the integration of world knowledge and sentence meanings. The study used a between-participant double-blind randomized placebo-controlled design. Semantic integration, indexed with magnetoencephalography through the N400m marker, was quantified while 45 healthy male participants listened to sentences that were either congruent or incongruent with facts of the world, after receiving intranasally delivered oxytocin or placebo. Compared with congruent sentences, world knowledge incongruent sentences elicited a stronger N400m signal from the left inferior frontal and anterior temporal regions and medial pFC (the N400m effect) in the placebo group. Oxytocin administration significantly attenuated the N400m effect at both sensor and cortical source levels throughout the experiment, in a state-like manner. Additional electrophysiological markers suggest that the absence of the N400m effect in the oxytocin group is unlikely due to the lack of early sensory or semantic processing or a general downregulation of attention. These findings suggest that oxytocin drives listeners to resolve challenges of semantic integration, possibly by promoting the cognitive exploration of alternative possible world scenarios.
  • Tsuji, S., Fikkert, P., Minagawa, Y., Dupoux, E., Filippin, L., Versteegh, M., Hagoort, P., & Cristia, A. (2017). The more, the better? Behavioral and neural correlates of frequent and infrequent vowel exposure. Developmental Psychobiology, 59, 603-612. doi:10.1002/dev.21534.

    Abstract

    A central assumption in the perceptual attunement literature holds that exposure to a speech sound contrast leads to improvement in native speech sound processing. However, whether the amount of exposure matters for this process has not been put to a direct test. We elucidated indicators of frequency-dependent perceptual attunement by comparing 5–8-month-old Dutch infants’ discrimination of tokens containing a highly frequent [hɪt-he:t] and a highly infrequent [hʏt-hø:t] native vowel contrast as well as a non-native [hɛt-hæt] vowel contrast in a behavioral visual habituation paradigm (Experiment 1). Infants discriminated both native contrasts similarly well, but did not discriminate the non-native contrast. We sought further evidence for subtle differences in the processing of the two native contrasts using near-infrared spectroscopy and a within-participant design (Experiment 2). The neuroimaging data did not provide additional evidence that responses to native contrasts are modulated by frequency of exposure. These results suggest that even large differences in exposure to a native contrast may not directly translate to behavioral and neural indicators of perceptual attunement, raising the possibility that frequency of exposure does not influence improvements in discriminating native contrasts.

  • Udden, J., Ingvar, M., Hagoort, P., & Petersson, K. M. (2017). Broca’s region: A causal role in implicit processing of grammars with crossed non-adjacent dependencies. Cognition, 164, 188-198. doi:10.1016/j.cognition.2017.03.010.

    Abstract

    Non-adjacent dependencies are challenging for the language learning machinery and are acquired later than adjacent dependencies. In this transcranial magnetic stimulation (TMS) study, we show that participants successfully discriminated between grammatical and non-grammatical sequences after having implicitly acquired an artificial language with crossed non-adjacent dependencies. Subsequent to transcranial magnetic stimulation of Broca’s region, discrimination was impaired compared to when a language-irrelevant control region (vertex) was stimulated. These results support the view that Broca’s region is engaged in structured sequence processing and extend previous functional neuroimaging results on artificial grammar learning (AGL) in two directions: first, the results establish that Broca’s region is a causal component in the processing of non-adjacent dependencies, and second, they show that implicit processing of non-adjacent dependencies engages Broca’s region. Since patients with lesions in Broca’s region do not always show grammatical processing difficulties, the result that Broca’s region is causally linked to processing of non-adjacent dependencies is a step towards clarification of the exact nature of syntactic deficits caused by lesions or perturbation to Broca’s region. Our findings are consistent with previous results and support a role for Broca’s region in general structured sequence processing, rather than a specific role for the processing of hierarchically organized sentence structure.
  • Udden, J., Snijders, T. M., Fisher, S. E., & Hagoort, P. (2017). A common variant of the CNTNAP2 gene is associated with structural variation in the left superior occipital gyrus. Brain and Language, 172, 16-21. doi:10.1016/j.bandl.2016.02.003.

    Abstract

    The CNTNAP2 gene encodes a cell-adhesion molecule that influences the properties of neural networks and the morphology and density of neurons and glial cells. Previous studies have shown association of CNTNAP2 variants with language-related phenotypes in health and disease. Here, we report associations of a common CNTNAP2 polymorphism (rs7794745) with variation in grey matter in a region in the dorsal visual stream. We tried to replicate an earlier study on 314 subjects by Tan and colleagues (2010), but now in a substantially larger group of more than 1700 subjects. Carriers of the T allele showed reduced grey matter volume in left superior occipital gyrus, while we did not replicate associations with grey matter volume in other regions identified by Tan et al (2010). Our work illustrates the importance of independent replication in neuroimaging genetic studies of language-related candidate genes.
  • Brown, C. M., Hagoort, P., & Ter Keurs, M. (1999). Electrophysiological signatures of visual lexical processing: Open- and closed-class words. Journal of Cognitive Neuroscience, 11(3), 261-281.

    Abstract

    This paper presents evidence on the disputed existence of an electrophysiological marker for the lexical-categorical distinction between open- and closed-class words. Event-related brain potentials were recorded from the scalp while subjects read a story. Separate waveforms were computed for open- and closed-class words. Two aspects of the waveforms could be reliably related to vocabulary class. The first was an early negativity in the 230- to 350-msec epoch, with a bilateral anterior predominance. This negativity was elicited by open- and closed-class words alike, was not affected by word frequency or word length, and had an earlier peak latency for closed-class words. The second was a frontal slow negative shift in the 350- to 500-msec epoch, largest over the left side of the scalp. This late negativity was only elicited by closed-class words. Although the early negativity cannot serve as a qualitative marker of the open- and closed-class distinction, it does reflect the earliest electrophysiological manifestation of the availability of categorical information from the mental lexicon. These results suggest that the brain honors the distinction between open- and closed-class words, in relation to the different roles that they play in on-line sentence processing.
  • Brown, C. M., & Hagoort, P. (1999). The cognitive neuroscience of language: Challenges and future directions. In C. M. Brown, & P. Hagoort (Eds.), The neurocognition of language (pp. 3-14). Oxford: Oxford University Press.
  • Hagoort, P. (1999). De toekomstige eeuw zonder psychologie [The coming century without psychology]. Psychologie Magazine, 18, 35-36.
  • Hagoort, P., & Brown, C. M. (1999). Gender electrified: ERP evidence on the syntactic nature of gender processing. Journal of Psycholinguistic Research, 28(6), 715-728. doi:10.1023/A:1023277213129.

    Abstract

    The central issue of this study concerns the claim that the processing of gender agreement in online sentence comprehension is a syntactic rather than a conceptual/semantic process. This claim was tested for the grammatical gender agreement in Dutch between the definite article and the noun. Subjects read sentences in which the definite article and the noun had the same gender and sentences in which the gender agreement was violated. While subjects read these sentences, their electrophysiological activity was recorded via electrodes placed on the scalp. Earlier research has shown that semantic and syntactic processing events manifest themselves in different event-related brain potential (ERP) effects. Semantic integration modulates the amplitude of the so-called N400. The P600/SPS is an ERP effect that is more sensitive to syntactic processes. The violation of grammatical gender agreement was found to result in a P600/SPS. For violations in sentence-final position, an additional increase of the N400 amplitude was observed. This N400 effect is interpreted as resulting from the consequence of a syntactic violation for the sentence-final wrap-up. The overall pattern of results supports the claim that the on-line processing of gender agreement information is not a content-driven but a syntactic-form-driven process.
  • Hagoort, P., & Brown, C. M. (1999). The consequences of the temporal interaction between syntactic and semantic processes for haemodynamic studies of language. NeuroImage, 9, S1024-S1024.
  • Hagoort, P., Brown, C. M., & Osterhout, L. (1999). The neurocognition of syntactic processing. In C. M. Brown, & P. Hagoort (Eds.), The neurocognition of language (pp. 273-317). Oxford: Oxford University Press.
  • Hagoort, P., Ramsey, N., Rutten, G.-J., & Van Rijen, P. (1999). The role of the left anterior temporal cortex in language processing. Brain and Language, 69, 322-325. doi:10.1006/brln.1999.2169.
  • Hagoort, P., Indefrey, P., Brown, C. M., Herzog, H., Steinmetz, H., & Seitz, R. J. (1999). The neural circuitry involved in the reading of German words and pseudowords: A PET study. Journal of Cognitive Neuroscience, 11(4), 383-398. doi:10.1162/089892999563490.

    Abstract

    Silent reading and reading aloud of German words and pseudowords were used in a PET study using (15O)butanol to examine the neural correlates of reading and of the phonological conversion of legal letter strings, with or without meaning. The results of 11 healthy, right-handed volunteers in the age range of 25 to 30 years showed activation of the lingual gyri during silent reading in comparison with viewing a fixation cross. Comparisons between the reading of words and pseudowords suggest the involvement of the middle temporal gyri in retrieving both the phonological and semantic code for words. The reading of pseudowords activates the left inferior frontal gyrus, including the ventral part of Broca’s area, to a larger extent than the reading of words. This suggests that this area might be involved in the sublexical conversion of orthographic input strings into phonological output codes. (Pre)motor areas were found to be activated during both silent reading and reading aloud. On the basis of the obtained activation patterns, it is hypothesized that the articulation of high-frequency syllables requires the retrieval of their concomitant articulatory gestures from the SMA and that the articulation of low-frequency syllables recruits the left medial premotor cortex.
  • Hagoort, P. (1999). The uniquely human capacity for language communication: from 'pope' to [po:p] in half a second. In J. Russell, M. Murphy, T. Meyering, & M. Arbib (Eds.), Neuroscience and the person: Scientific perspectives on divine action (pp. 45-56). California: Berkeley.
  • Osterhout, L., & Hagoort, P. (1999). A superficial resemblance does not necessarily mean you are part of the family: Counterarguments to Coulson, King and Kutas (1998) in the P600/SPS-P300 debate. Language and Cognitive Processes, 14, 1-14. doi:10.1080/016909699386356.

    Abstract

    Two recent studies (Coulson et al., 1998; Osterhout et al., 1996) examined the relationship between the event-related brain potential (ERP) responses to linguistic syntactic anomalies (P600/SPS) and domain-general unexpected events (P300). Coulson et al. concluded that these responses are highly similar, whereas Osterhout et al. concluded that they are distinct. In this comment, we evaluate the relative merits of these claims. We conclude that the available evidence indicates that the ERP response to syntactic anomalies is at least partially distinct from the ERP response to unexpected anomalies that do not involve a grammatical violation.
  • Ter Keurs, M., Brown, C. M., Hagoort, P., & Stegeman, D. F. (1999). Electrophysiological manifestations of open- and closed-class words in patients with Broca's aphasia with agrammatic comprehension: An event-related brain potential study. Brain, 122, 839-854. doi:10.1093/brain/122.5.839.

    Abstract

    This paper presents electrophysiological data on the on-line processing of open- and closed-class words in patients with Broca’s aphasia with agrammatic comprehension. Event-related brain potentials were recorded from the scalp when Broca patients and non-aphasic control subjects were visually presented with a story in which the words appeared one at a time on the screen. Separate waveforms were computed for open- and closed-class words. The non-aphasic control subjects showed clear differences between the processing of open- and closed-class words in an early (210-375 ms) and a late (400-700 ms) time-window. The early electrophysiological differences reflect the first manifestation of the availability of word-category information from the mental lexicon. The late differences presumably relate to post-lexical semantic and syntactic processing. In contrast to the control subjects, the Broca patients showed no early vocabulary class effect and only a limited late effect. The results suggest that an important factor in the agrammatic comprehension deficit of Broca’s aphasics is a delayed and/or incomplete availability of word-class information.
  • Van Berkum, J. J. A., Brown, C. M., & Hagoort, P. (1999). Early referential context effects in sentence processing: Evidence from event-related brain potentials. Journal of Memory and Language, 41(2), 147-182. doi:10.1006/jmla.1999.2641.

    Abstract

    An event-related brain potentials experiment was carried out to examine the interplay of referential and structural factors during sentence processing in discourse. Subjects read (Dutch) sentences beginning like “David told the girl that … ” in short story contexts that had introduced either one or two referents for a critical singular noun phrase (“the girl”). The waveforms showed that within 280 ms after onset of the critical noun the reader had already determined whether the noun phrase had a unique referent in earlier discourse. Furthermore, this referential information was immediately used in parsing the rest of the sentence, which was briefly ambiguous between a complement clause (“ … that there would be some visitors”) and a relative clause (“ … that had been on the phone to hang up”). A consistent pattern of P600/SPS effects elicited by various subsequent disambiguations revealed that a two-referent discourse context had led the parser to initially pursue the relative-clause alternative to a larger extent than a one-referent context. Together, the results suggest that during the processing of sentences in discourse, structural and referential sources of information interact on a word-by-word basis.
  • Van Berkum, J. J. A., Hagoort, P., & Brown, C. M. (1999). Semantic integration in sentences and discourse: Evidence from the N400. Journal of Cognitive Neuroscience, 11(6), 657-671. doi:10.1162/089892999563724.

    Abstract

    In two ERP experiments we investigated how and when the language comprehension system relates an incoming word to semantic representations of an unfolding local sentence and a wider discourse. In experiment 1, subjects were presented with short stories. The last sentence of these stories occasionally contained a critical word that, although acceptable in the local sentence context, was semantically anomalous with respect to the wider discourse (e.g., "Jane told the brother that he was exceptionally slow" in a discourse context where he had in fact been very quick). Relative to coherent control words (e.g., "quick"), these discourse-dependent semantic anomalies elicited a large N400 effect that began at about 200-250 ms after word onset. In experiment 2, the same sentences were presented without their original story context. Although the words that had previously been anomalous in discourse still elicited a slightly larger average N400 than the coherent words, the resulting N400 effect was much reduced, showing that the large effect observed in stories was related to the wider discourse. In the same experiment, single sentences that contained a clear local semantic anomaly elicited a standard sentence-dependent N400 effect (e.g., Kutas & Hillyard, 1980). The N400 effects elicited in discourse and in single sentences had the same time course, overall morphology, and scalp distribution. We argue that these findings are most compatible with models of language processing in which there is no fundamental distinction between the integration of a word in its local (sentence-level) and its global (discourse-level) semantic context.
  • Van Berkum, J. J. A., Brown, C. M., & Hagoort, P. (1999). When does gender constrain parsing? Evidence from ERPs. Journal of Psycholinguistic Research, 28(5), 555-566. doi:10.1023/A:1023224628266.

    Abstract

    We review the implications of recent ERP evidence for when and how grammatical gender agreement constrains sentence parsing. In some theories of parsing, gender is assumed to immediately and categorically block gender-incongruent phrase structure alternatives from being pursued. In other theories, the parser initially ignores gender altogether. The ERP evidence we discuss suggests an intermediate position, in which grammatical gender does not immediately block gender-incongruent phrase structures from being considered, but is used to dispose of them shortly thereafter.
  • Van Turennout, M., Hagoort, P., & Brown, C. M. (1999). The time course of grammatical and phonological processing during speaking: evidence from event-related brain potentials. Journal of Psycholinguistic Research, 28(6), 649-676. doi:10.1023/A:1023221028150.

    Abstract

    Motor-related brain potentials were used to examine the time course of grammatical and phonological processes during noun phrase production in Dutch. In the experiments, participants named colored pictures using a no-determiner noun phrase. On half of the trials a syntactic-phonological classification task had to be performed before naming. Depending on the outcome of the classifications, a left or a right push-button response was given (go trials), or no push-button response was given (no-go trials). Lateralized readiness potentials (LRPs) were derived to test whether syntactic and phonological information affected the motor system at separate moments in time. The results showed that when syntactic information determined the response-hand decision, an LRP developed on no-go trials. However, no such effect was observed when phonological information determined response hand. On the basis of the data, it can be estimated that an additional period of at least 40 ms is needed to retrieve a word's initial phoneme once its lemma has been retrieved. These results provide evidence for the view that during speaking, grammatical processing precedes phonological processing in time.
