Publications

  • Armeni, K., Willems, R. M., Van den Bosch, A., & Schoffelen, J.-M. (2019). Frequency-specific brain dynamics related to prediction during language comprehension. NeuroImage, 198, 283-295. doi:10.1016/j.neuroimage.2019.04.083.

    Abstract

    The brain's remarkable capacity to process spoken language virtually in real time requires fast and efficient information processing machinery. In this study, we investigated how frequency-specific brain dynamics relate to models of probabilistic language prediction during auditory narrative comprehension. We recorded MEG activity while participants were listening to auditory stories in Dutch. Using trigram statistical language models, we estimated for every word in a story its conditional probability of occurrence. On the basis of word probabilities, we computed how unexpected the current word is given its context (word perplexity) and how (un)predictable the current linguistic context is (word entropy). We then evaluated whether source-reconstructed MEG oscillations at different frequency bands are modulated as a function of these language processing metrics. We show that theta-band source dynamics are increased in high relative to low entropy states, likely reflecting lexical computations. Beta-band dynamics are increased in situations of low word entropy and perplexity possibly reflecting maintenance of ongoing cognitive context. These findings lend support to the idea that the brain engages in the active generation and evaluation of predicted language based on the statistical properties of the input signal.
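    The perplexity and entropy metrics described above can be illustrated with a minimal trigram sketch. This is a toy (unsmoothed counts, invented corpus, function names mine), not the authors' actual language models, which were trained on large corpora:

```python
import math
from collections import defaultdict

def train_trigram(tokens):
    """Count trigram continuations: (w1, w2) -> {w3: count}."""
    counts = defaultdict(lambda: defaultdict(int))
    for a, b, c in zip(tokens, tokens[1:], tokens[2:]):
        counts[(a, b)][c] += 1
    return counts

def next_word_dist(counts, context):
    """Conditional distribution P(w | two-word context)."""
    continuations = counts[context]
    total = sum(continuations.values())
    return {w: n / total for w, n in continuations.items()}

def surprisal(dist, word):
    """Word unexpectedness given its context: -log2 P(word | context)."""
    return -math.log2(dist[word])

def entropy(dist):
    """Word entropy: uncertainty about the upcoming word."""
    return -sum(p * math.log2(p) for p in dist.values())

corpus = "the cat sat on the mat and the cat sat on the rug".split()
model = train_trigram(corpus)

# "sat on" is always followed by "the": 0 bits of surprisal (fully expected)
s = surprisal(next_word_dist(model, ("sat", "on")), "the")

# "on the" is followed by "mat" or "rug" equally often: a 1-bit coin flip
print(entropy(next_word_dist(model, ("on", "the"))))  # 1.0
```

    In the study, such per-word estimates were then regressed against source-reconstructed MEG power in each frequency band.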

  • Basnakova, J. (2019). Beyond the language given: The neurobiological infrastructure for pragmatic inferencing. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Bocanegra, B. R., Poletiek, F. H., Ftitache, B., & Clark, A. (2019). Intelligent problem-solvers externalize cognitive operations. Nature Human Behaviour, 3, 136-142. doi:10.1038/s41562-018-0509-y.

    Abstract

    Humans are nature’s most intelligent and prolific users of external props and aids (such as written texts, slide-rules and software packages). Here we introduce a method for investigating how people make active use of their task environment during problem-solving and apply this approach to the non-verbal Raven Advanced Progressive Matrices test for fluid intelligence. We designed a click-and-drag version of the Raven test in which participants could create different external spatial configurations while solving the puzzles. In our first study, we observed that the click-and-drag test was better than the conventional static test at predicting academic achievement of university students. This pattern of results was partially replicated in a novel sample. Importantly, environment-altering actions were clustered in between periods of apparent inactivity, suggesting that problem-solvers were delicately balancing the execution of internal and external cognitive operations. We observed a systematic relationship between this critical phasic temporal signature and improved test performance. Our approach is widely applicable and offers an opportunity to quantitatively assess a powerful, although understudied, feature of human intelligence: our ability to use external objects, props and aids to solve complex problems.
  • Bosker, H. R., Van Os, M., Does, R., & Van Bergen, G. (2019). Counting 'uhm's: how tracking the distribution of native and non-native disfluencies influences online language comprehension. Journal of Memory and Language, 106, 189-202. doi:10.1016/j.jml.2019.02.006.

    Abstract

    Disfluencies, like 'uh', have been shown to help listeners anticipate reference to low-frequency words. The associative account of this 'disfluency bias' proposes that listeners learn to associate disfluency with low-frequency referents based on prior exposure to non-arbitrary disfluency distributions (i.e., greater probability of low-frequency words after disfluencies). However, there is limited evidence for listeners actually tracking disfluency distributions online. The present experiments are the first to show that adult listeners, exposed to a typical or more atypical disfluency distribution (i.e., hearing a talker unexpectedly say uh before high-frequency words), flexibly adjust their predictive strategies to the disfluency distribution at hand (e.g., learn to predict high-frequency referents after disfluency). However, when listeners were presented with the same atypical disfluency distribution but produced by a non-native speaker, no adjustment was observed. This suggests pragmatic inferences can modulate distributional learning, revealing the flexibility of, and constraints on, distributional learning in incremental language comprehension.
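    The distributional-tracking idea (a listener updating the conditional probability of a low-frequency referent given 'uh') can be sketched as a running estimate. The trial counts below are invented for illustration and do not reflect the experiment's actual design:

```python
def track_disfluency(trials):
    """Estimate P(low-frequency referent | disfluent 'uh') from
    (disfluent, low_frequency) trial pairs, as a listener might."""
    uh_count, uh_low_count = 0, 0
    for disfluent, low_freq in trials:
        if disfluent:
            uh_count += 1
            uh_low_count += int(low_freq)
    return uh_low_count / uh_count if uh_count else None

# Typical talker: 'uh' mostly precedes low-frequency words
typical = [(True, True)] * 8 + [(True, False)] * 2 + [(False, False)] * 10
# Atypical talker: 'uh' mostly precedes high-frequency words
atypical = [(True, False)] * 8 + [(True, True)] * 2 + [(False, True)] * 10

print(track_disfluency(typical))   # 0.8
print(track_disfluency(atypical))  # 0.2
```

    The finding is that listeners adjust predictions toward whichever distribution they hear, unless pragmatic inference (a non-native talker) blocks the adjustment.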
  • Fields, E. C., Weber, K., Stillerman, B., Delaney-Busch, N., & Kuperberg, G. (2019). Functional MRI reveals evidence of a self-positivity bias in the medial prefrontal cortex during the comprehension of social vignettes. Social Cognitive and Affective Neuroscience, 14(6), 613-621. doi:10.1093/scan/nsz035.

    Abstract

    A large literature in social neuroscience has associated the medial prefrontal cortex (mPFC) with the processing of self-related information. However, only recently have social neuroscience studies begun to consider the large behavioral literature showing a strong self-positivity bias, and these studies have mostly focused on its correlates during self-related judgments and decision making. We carried out a functional MRI (fMRI) study to ask whether the mPFC would show effects of the self-positivity bias in a paradigm that probed participants’ self-concept without any requirement of explicit self-judgment. We presented social vignettes that were either self-relevant or non-self-relevant with a neutral, positive, or negative outcome described in the second sentence. In previous work using event-related potentials, this paradigm has shown evidence of a self-positivity bias that influences early stages of semantically processing incoming stimuli. In the present fMRI study, we found evidence for this bias within the mPFC: an interaction between self-relevance and valence, with only positive scenarios showing a self vs other effect within the mPFC. We suggest that the mPFC may play a role in maintaining a positively-biased self-concept and discuss the implications of these findings for the social neuroscience of the self and the role of the mPFC.

    Additional information

    Supplementary data
  • Fitz, H., & Chang, F. (2019). Language ERPs reflect learning through prediction error propagation. Cognitive Psychology, 111, 15-52. doi:10.1016/j.cogpsych.2019.03.002.

    Abstract

    Event-related potentials (ERPs) provide a window into how the brain is processing language. Here, we propose a theory that argues that ERPs such as the N400 and P600 arise as side effects of an error-based learning mechanism that explains linguistic adaptation and language learning. We instantiated this theory in a connectionist model that can simulate data from three studies on the N400 (amplitude modulation by expectancy, contextual constraint, and sentence position), five studies on the P600 (agreement, tense, word category, subcategorization and garden-path sentences), and a study on the semantic P600 in role reversal anomalies. Since ERPs are learning signals, this account explains adaptation of ERP amplitude to within-experiment frequency manipulations and the way ERP effects are shaped by word predictability in earlier sentences. Moreover, it predicts that ERPs can change over language development. The model provides an account of the sensitivity of ERPs to expectation mismatch, the relative timing of the N400 and P600, the semantic nature of the N400, the syntactic nature of the P600, and the fact that ERPs can change with experience. This approach suggests that comprehension ERPs are related to sentence production and language acquisition mechanisms.
  • Franken, M. K., Acheson, D. J., McQueen, J. M., Hagoort, P., & Eisner, F. (2019). Consistency influences altered auditory feedback processing. Quarterly Journal of Experimental Psychology, 72(10), 2371-2379. doi:10.1177/1747021819838939.

    Abstract

    Previous research on the effect of perturbed auditory feedback in speech production has focused on two types of responses. In the short term, speakers generate compensatory motor commands in response to unexpected perturbations. In the longer term, speakers adapt feedforward motor programmes in response to feedback perturbations, to avoid future errors. The current study investigated the relation between these two types of responses to altered auditory feedback. Specifically, it was hypothesised that consistency in previous feedback perturbations would influence whether speakers adapt their feedforward motor programmes. In an altered auditory feedback paradigm, formant perturbations were applied either across all trials (the consistent condition) or only to some trials, whereas the others remained unperturbed (the inconsistent condition). The results showed that speakers’ responses were affected by feedback consistency, with stronger speech changes in the consistent condition compared with the inconsistent condition. Current models of speech-motor control can explain this consistency effect. However, the data also suggest that compensation and adaptation are distinct processes, which are not in line with all current models.
  • Gao, Y., Zheng, L., Liu, X., Nichols, E. S., Zhang, M., Shang, L., Ding, G., Meng, Z., & Liu, L. (2019). First and second language reading difficulty among Chinese–English bilingual children: The prevalence and influences from demographic characteristics. Frontiers in Psychology, 10: 2544. doi:10.3389/fpsyg.2019.02544.

    Abstract

    Learning to read a second language (L2) can pose a great challenge for children who have already been struggling to read in their first language (L1). Moreover, it is not clear whether, to what extent, and under what circumstances L1 reading difficulty increases the risk of L2 reading difficulty. This study investigated Chinese (L1) and English (L2) reading skills in a large representative sample of 1,824 Chinese–English bilingual children in Grades 4 and 5 from both urban and rural schools in Beijing. We examined the prevalence of reading difficulty in Chinese only (poor Chinese readers, PC), English only (poor English readers, PE), and both Chinese and English (poor bilingual readers, PB) and calculated the co-occurrence, that is, the chances of becoming a poor reader in English given that the child was already a poor reader in Chinese. We then conducted a multinomial logistic regression analysis and compared the prevalence of PC, PE, and PB between children in Grade 4 versus Grade 5, in urban versus rural areas, and in boys versus girls. Results showed that compared to girls, boys demonstrated significantly higher risk of PC, PE, and PB. Meanwhile, compared to the 5th graders, the 4th graders demonstrated significantly higher risk of PC and PB. In addition, children enrolled in the urban schools were more likely to become better second language readers, thus leading to a concerning rural–urban gap in the prevalence of L2 reading difficulty. Finally, among these Chinese–English bilingual children, regardless of sex and school location, poor reading skill in Chinese significantly increased the risk of also being a poor English reader, with a considerable and stable co-occurrence of approximately 36%. In sum, this study suggests that despite striking differences between alphabetic and logographic writing systems, L1 reading difficulty still significantly increases the risk of L2 reading difficulty. This indicates the shared meta-linguistic skills in reading different writing systems and the importance of understanding the universality and the interdependent relationship of reading between different writing systems. Furthermore, the male disadvantage (in both L1 and L2) and the urban–rural gap (in L2) found in the prevalence of reading difficulty call for special attention to disadvantaged populations in educational practice.
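    The reported co-occurrence is a simple conditional proportion, P(poor L2 | poor L1). A minimal sketch, with invented child-level flags sized to reproduce the approximate 36% rate (the field names are mine, not the study's coding scheme):

```python
def co_occurrence(children):
    """P(poor English reader | poor Chinese reader) from child-level flags."""
    poor_chinese = [c for c in children if c["poor_L1"]]
    if not poor_chinese:
        return None
    both = sum(1 for c in poor_chinese if c["poor_L2"])
    return both / len(poor_chinese)

# Hypothetical sample: 100 poor L1 readers, 36 of whom are also poor in L2
sample = ([{"poor_L1": True, "poor_L2": True}] * 36
          + [{"poor_L1": True, "poor_L2": False}] * 64
          + [{"poor_L1": False, "poor_L2": False}] * 400)
print(co_occurrence(sample))  # 0.36
```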
  • Gao, X., Dera, J., Nijhoff, A. D., & Willems, R. M. (2019). Is less readable liked better? The case of font readability in poetry appreciation. PLoS One, 14(12): e0225757. doi:10.1371/journal.pone.0225757.

    Abstract

    Previous research shows conflicting findings for the effect of font readability on comprehension and memory for language. It has been found that, perhaps counterintuitively, a hard to read font can be beneficial for language comprehension, especially for difficult language. Here we test how font readability influences the subjective experience of poetry reading. In three experiments we tested the influence of poem difficulty and font readability on the subjective experience of poems. We specifically predicted that font readability would have opposite effects on the subjective experience of easy versus difficult poems. Participants read poems which could be more or less difficult in terms of conceptual or structural aspects, and which were presented in a font that was either easy or more difficult to read. Participants read existing poems and subsequently rated their subjective experience (measured through four dependent variables: overall liking, perceived flow of the poem, perceived topic clarity, and perceived structure). In line with previous literature we observed a Poem Difficulty x Font Readability interaction effect for subjective measures of poetry reading. We found that participants rated easy poems as nicer when presented in an easy to read font, as compared to when presented in a hard to read font. Despite the presence of the interaction effect, we did not observe the predicted opposite effect for more difficult poems. We conclude that font readability can influence reading of easy and more difficult poems differentially, with strongest effects for easy poems.

    Additional information

    https://osf.io/jwcqt/
  • Gehrig, J., Michalareas, G., Forster, M.-T., Lei, J., Hok, P., Laufs, H., Senft, C., Seifert, V., Schoffelen, J.-M., Hanslmayr, H., & Kell, C. A. (2019). Low-frequency oscillations code speech during verbal working memory. The Journal of Neuroscience, 39(33), 6498-6512. doi:10.1523/JNEUROSCI.0018-19.2019.

    Abstract

    The way the human brain represents speech in memory is still unknown. An obvious characteristic of speech is its evolvement over time. During speech processing, neural oscillations are modulated by the temporal properties of the acoustic speech signal, but also acquired knowledge on the temporal structure of language influences speech perception-related brain activity. This suggests that speech could be represented in the temporal domain, a form of representation that the brain also uses to encode autobiographic memories. Empirical evidence for such a memory code is lacking. We investigated the nature of speech memory representations using direct cortical recordings in the left perisylvian cortex during delayed sentence reproduction in female and male patients undergoing awake tumor surgery. Our results reveal that the brain endogenously represents speech in the temporal domain. Temporal pattern similarity analyses revealed that the phase of frontotemporal low-frequency oscillations, primarily in the beta range, represents sentence identity in working memory. The positive relationship between beta power during working memory and task performance suggests that working memory representations benefit from increased phase separation.
  • Hagoort, P. (Ed.). (2019). Human language: From genes and brains to behavior. Cambridge, MA: MIT Press.
  • Hagoort, P., & Beckmann, C. F. (2019). Key issues and future directions: The neural architecture for language. In P. Hagoort (Ed.), Human language: From genes and brains to behavior (pp. 527-532). Cambridge, MA: MIT Press.
  • Hagoort, P. (2019). Introduction. In P. Hagoort (Ed.), Human language: From genes and brains to behavior (pp. 1-6). Cambridge, MA: MIT Press.
  • Hagoort, P. (2019). The meaning making mechanism(s) behind the eyes and between the ears. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 375: 20190301. doi:10.1098/rstb.2019.0301.

    Abstract

    In this contribution, the following four questions are discussed: (i) where is meaning?; (ii) what is meaning?; (iii) what is the meaning of mechanism?; (iv) what are the mechanisms of meaning? I will argue that meanings are in the head. Meanings have multiple facets, but minimally one needs to make a distinction between single word meanings (lexical meaning) and the meanings of multi-word utterances. The latter ones cannot be retrieved from memory, but need to be constructed on the fly. A mechanistic account of the meaning-making mind requires an analysis at both a functional and a neural level, the reason being that these levels are causally interdependent. I will show that an analysis exclusively focusing on patterns of brain activation lacks explanatory power. Finally, I shall present an initial sketch of how the dynamic interaction between temporo-parietal areas and inferior frontal cortex might instantiate the interpretation of linguistic utterances in the context of a multimodal setting and ongoing discourse information.
  • Hagoort, P. (2019). The neurobiology of language beyond single word processing. Science, 366(6461), 55-58. doi:10.1126/science.aax0289.

    Abstract

    In this Review, I propose a multiple-network view for the neurobiological basis of distinctly human language skills. A much more complex picture of interacting brain areas emerges than in the classical neurobiological model of language. This is because using language is more than single-word processing, and much goes on beyond the information given in the acoustic or orthographic tokens that enter primary sensory cortices. This requires the involvement of multiple networks with functionally nonoverlapping contributions.

  • Heilbron, M., Ehinger, B., Hagoort, P., & De Lange, F. P. (2019). Tracking naturalistic linguistic predictions with deep neural language models. In Proceedings of the 2019 Conference on Cognitive Computational Neuroscience (pp. 424-427). doi:10.32470/CCN.2019.1096-0.

    Abstract

    Prediction in language has traditionally been studied using simple designs in which neural responses to expected and unexpected words are compared in a categorical fashion. However, these designs have been contested as being ‘prediction encouraging’, potentially exaggerating the importance of prediction in language understanding. A few recent studies have begun to address these worries by using model-based approaches to probe the effects of linguistic predictability in naturalistic stimuli (e.g. continuous narrative). However, these studies so far only looked at very local forms of prediction, using models that take no more than the prior two words into account when computing a word’s predictability. Here, we extend this approach using a state-of-the-art neural language model that can take roughly 500 times longer linguistic contexts into account. Predictability estimates from the neural network offer a much better fit to EEG data from subjects listening to naturalistic narrative than simpler models, and reveal strong surprise responses akin to the P200 and N400. These results show that predictability effects in language are not a side-effect of simple designs, and demonstrate the practical use of recent advances in AI for the cognitive neuroscience of language.
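    Why a longer context can sharpen predictability estimates can be illustrated without a neural network: in the invented toy corpus below, a two-word context leaves the continuation uncertain, while a three-word context fully determines it. This is a sketch of the general idea only; the study itself used a GPT-style model with far longer contexts:

```python
import math
from collections import defaultdict

def surprisal_with_context(tokens, n, context, word):
    """-log2 P(word | last-n-word context), estimated by counting
    continuations of that context elsewhere in the corpus."""
    counts = defaultdict(int)
    total = 0
    for j in range(len(tokens) - n):
        if tuple(tokens[j:j + n]) == context:
            counts[tokens[j + n]] += 1
            total += 1
    return -math.log2(counts[word] / total)

tokens = "john drank hot tea mary drank hot cocoa john drank hot tea".split()

# Two-word context: "drank hot" is followed by tea (2x) or cocoa (1x),
# so "tea" carries -log2(2/3), about 0.585 bits of surprisal
s2 = surprisal_with_context(tokens, 2, ("drank", "hot"), "tea")

# Three-word context: "john drank hot" is always followed by "tea",
# so surprisal drops to 0 bits
s3 = surprisal_with_context(tokens, 3, ("john", "drank", "hot"), "tea")
print(round(s2, 3))  # 0.585
```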
  • Hervais-Adelman, A., Kumar, U., Mishra, R. K., Tripathi, V. N., Guleria, A., Singh, J. P., Eisner, F., & Huettig, F. (2019). Learning to read recycles visual cortical networks without destruction. Science Advances, 5(9): eaax0262. doi:10.1126/sciadv.aax0262.

    Abstract

    Learning to read is associated with the appearance of an orthographically sensitive brain region known as the visual word form area. It has been claimed that development of this area proceeds by impinging upon territory otherwise available for the processing of culturally relevant stimuli such as faces and houses. In a large-scale functional magnetic resonance imaging study of a group of individuals of varying degrees of literacy (from completely illiterate to highly literate), we examined cortical responses to orthographic and nonorthographic visual stimuli. We found that literacy enhances responses to other visual input in early visual areas and enhances representational similarity between text and faces, without reducing the extent of response to nonorthographic input. Thus, acquisition of literacy in childhood recycles existing object representation mechanisms but without destructive competition.

    Additional information

    aax0262_SM.pdf
  • Heyselaar, E., & Segaert, K. (2019). Memory encoding of syntactic information involves domain-general attentional resources. Evidence from dual-task studies. Quarterly Journal of Experimental Psychology, 72(6), 1285-1296. doi:10.1177/1747021818801249.

    Abstract

    We investigate the type of attention (domain-general or language-specific) used during syntactic processing. We focus on syntactic priming: In this task, participants listen to a sentence that describes a picture (prime sentence), followed by a picture the participants need to describe (target sentence). We measure the proportion of times participants use the syntactic structure they heard in the prime sentence to describe the current target sentence as a measure of syntactic processing. Participants simultaneously conducted a motion-object tracking (MOT) task, a task commonly used to tax domain-general attentional resources. We manipulated the number of objects the participant had to track; we thus measured participants’ ability to process syntax while their attention is not-, slightly-, or overly-taxed. Performance in the MOT task was significantly worse when conducted as a dual-task compared to as a single task. We observed an inverted U-shaped curve on priming magnitude when conducting the MOT task concurrently with prime sentences (i.e., memory encoding), but no effect when conducted with target sentences (i.e., memory retrieval). Our results illustrate how, during the encoding of syntactic information, domain-general attention differentially affects syntactic processing, whereas during the retrieval of syntactic information domain-general attention does not influence syntactic processing.
  • Hubbard, R. J., Rommers, J., Jacobs, C. L., & Federmeier, K. D. (2019). Downstream behavioral and electrophysiological consequences of word prediction on recognition memory. Frontiers in Human Neuroscience, 13: 291. doi:10.3389/fnhum.2019.00291.

    Abstract

    When people process language, they can use context to predict upcoming information, influencing processing and comprehension as seen in both behavioral and neural measures. Although numerous studies have shown immediate facilitative effects of confirmed predictions, the downstream consequences of prediction have been less explored. In the current study, we examined those consequences by probing participants’ recognition memory for words after they read sets of sentences. Participants read strongly and weakly constraining sentences with expected or unexpected endings (“I added my name to the list/basket”), and later were tested on their memory for the sentence endings while EEG was recorded. Critically, the memory test contained words that were predictable (“list”) but were never read (participants saw “basket”). Behaviorally, participants showed successful discrimination between old and new items, but false alarmed to the expected-item lures more often than to new items, showing that predicted words or concepts can linger, even when predictions are disconfirmed. Although false alarm rates did not differ by constraint, event-related potentials (ERPs) differed between false alarms to strongly and weakly predictable words. Additionally, previously unexpected (compared to previously expected) endings that appeared on the memory test elicited larger N1 and LPC amplitudes, suggesting greater attention and episodic recollection. In contrast, highly predictable sentence endings that had been read elicited reduced LPC amplitudes during the memory test. Thus, prediction can facilitate processing in the moment, but can also lead to false memory and reduced recollection for predictable information.
  • Hulten, A., Schoffelen, J.-M., Udden, J., Lam, N. H. L., & Hagoort, P. (2019). How the brain makes sense beyond the processing of single words – An MEG study. NeuroImage, 186, 586-594. doi:10.1016/j.neuroimage.2018.11.035.

    Abstract

    Human language processing involves combinatorial operations that make human communication stand out in the animal kingdom. These operations rely on a dynamic interplay between the inferior frontal and the posterior temporal cortices. Using source reconstructed magnetoencephalography, we tracked language processing in the brain, in order to investigate how individual words are interpreted when part of sentence context. The large sample size in this study (n = 68) allowed us to assess how event-related activity is associated across distinct cortical areas, by means of inter-areal co-modulation within an individual. We showed that, within 500 ms of seeing a word, the word's lexical information has been retrieved and unified with the sentence context. This does not happen in a strictly feed-forward manner, but by means of co-modulation between the left posterior temporal cortex (LPTC) and left inferior frontal cortex (LIFC), for each individual word. The co-modulation of LIFC and LPTC occurs around 400 ms after the onset of each word, across the progression of a sentence. Moreover, these core language areas are supported early on by the attentional network. The results provide a detailed description of the temporal orchestration related to single word processing in the context of ongoing language.

    Additional information

    1-s2.0-S1053811918321165-mmc1.pdf
  • De Kleijn, R., Wijnen, M., & Poletiek, F. H. (2019). The effect of context-dependent information and sentence constructions on perceived humanness of an agent in a Turing test. Knowledge-Based Systems, 163, 794-799. doi:10.1016/j.knosys.2018.10.006.

    Abstract

    In a Turing test, a judge decides whether their conversation partner is either a machine or human. What cues does the judge use to determine this? In particular, are presumably unique features of human language actually perceived as humanlike? Participants rated the humanness of a set of sentences that were manipulated for grammatical construction (linear right-branching or hierarchical center-embedded) and for their plausibility with regard to world knowledge.

    We found that center-embedded sentences are perceived as less humanlike than right-branching sentences and more plausible sentences are regarded as more humanlike. However, the effect of plausibility of the sentence on perceived humanness is smaller for center-embedded sentences than for right-branching sentences.

    Participants also rated a conversation with either correct or incorrect use of the context by the agent. No effect of context use was found. Also, participants rated a full transcript of either a real human or a real chatbot, and we found that chatbots were reliably perceived as less humanlike than real humans, in line with our expectation. We did, however, find individual differences between chatbots and humans.
  • Kochari, A., & Flecken, M. (2019). Lexical prediction in language comprehension: A replication study of grammatical gender effects in Dutch. Language, Cognition and Neuroscience, 34(2), 239-253. doi:10.1080/23273798.2018.1524500.

    Abstract

    An important question in predictive language processing is the extent to which prediction effects can reliably be measured on pre-nominal material (e.g. articles before nouns). Here, we present a large sample (N = 58) close replication of a study by Otten and van Berkum (2009). They report ERP modulations in relation to the predictability of nouns in sentences, measured on gender-marked Dutch articles. We used nearly identical materials, procedures, and data analysis steps. We fail to replicate the original effect, but do observe a pattern consistent with the original data. Methodological differences between our replication and the original study that could potentially have contributed to the diverging results are discussed. In addition, we discuss the suitability of Dutch gender-marked determiners as a test-case for future studies of pre-activation of lexical items.
  • Kuperberg, G., Weber, K., Delaney-Busch, N., Ustine, C., Stillerman, B., Hämäläinen, M., & Lau, E. (2019). Multimodal neuroimaging evidence for looser lexico-semantic connections in schizophrenia. Neuropsychologia, 124, 337-349. doi:10.1016/j.neuropsychologia.2018.10.024.

    Abstract

    It has been hypothesized that schizophrenia is characterized by overly broad automatic activity within lexico-semantic networks. We used two complementary neuroimaging techniques, Magnetoencephalography (MEG) and functional Magnetic Resonance Imaging (fMRI), in combination with a highly automatic indirect semantic priming paradigm, to spatiotemporally localize this abnormality in the brain.

    Eighteen people with schizophrenia and 20 demographically-matched control participants viewed target words (“bell”) preceded by directly related (“church”), indirectly related (“priest”), or unrelated (“pants”) prime words in MEG and fMRI sessions. To minimize top-down processing, the prime was masked, the target appeared only 140ms after prime onset, and participants simply monitored for words within a particular semantic category that appeared in filler trials.

    Both techniques revealed a significantly larger automatic indirect priming effect in people with schizophrenia than in control participants. MEG temporally localized this enhanced effect to the N400 time window (300-500ms) — the critical stage of accessing meaning from words. fMRI spatially localized the effect to the left temporal fusiform cortex, which plays a role in mapping of orthographic word-form on to meaning. There was no evidence of an enhanced automatic direct semantic priming effect in the schizophrenia group.

    These findings provide converging neural evidence for abnormally broad highly automatic lexico-semantic activity in schizophrenia. We argue that, rather than arising from an unconstrained spread of automatic activation across semantic memory, this broader automatic lexico-semantic activity stems from looser connections between the form and meaning of words.

    Additional information

    1-s2.0-S0028393218307310-mmc1.docx
  • Mak, M., & Willems, R. M. (2019). Mental simulation during literary reading: Individual differences revealed with eye-tracking. Language, Cognition and Neuroscience, 34(4), 511-535. doi:10.1080/23273798.2018.1552007.

    Abstract

    People engage in simulation when reading literary narratives. In this study, we tried to pinpoint how different kinds of simulation (perceptual and motor simulation, mentalising) affect reading behaviour. Eye-tracking (gaze durations, regression probability) and questionnaire data were collected from 102 participants, who read three literary short stories. In a pre-test, 90 additional participants indicated which parts of the stories were high in one of the three kinds of simulation-eliciting content. The results show that motor simulation reduces gaze duration (faster reading), whereas perceptual simulation and mentalising increase gaze duration (slower reading). Individual differences in the effect of simulation on gaze duration were found, which were related to individual differences in aspects of story world absorption and story appreciation. These findings suggest fundamental differences between different kinds of simulation and confirm the role of simulation in absorption and appreciation.
  • Mantegna, F., Hintz, F., Ostarek, M., Alday, P. M., & Huettig, F. (2019). Distinguishing integration and prediction accounts of ERP N400 modulations in language processing through experimental design. Neuropsychologia, 134: 107199. doi:10.1016/j.neuropsychologia.2019.107199.

    Abstract

    Prediction of upcoming input is thought to be a main characteristic of language processing (e.g. Altmann & Mirkovic, 2009; Dell & Chang, 2014; Federmeier, 2007; Ferreira & Chantavarin, 2018; Pickering & Gambi, 2018; Hale, 2001; Hickok, 2012; Huettig 2015; Kuperberg & Jaeger, 2016; Levy, 2008; Norris, McQueen, & Cutler, 2016; Pickering & Garrod, 2013; Van Petten & Luka, 2012). One of the main pillars of experimental support for this notion comes from studies that have attempted to measure electrophysiological markers of prediction when participants read or listened to sentences ending in highly predictable words. The N400, a negative-going and centro-parietally distributed component of the ERP occurring approximately 400ms after (target) word onset, has been frequently interpreted as indexing prediction of the word (or the semantic representations and/or the phonological form of the predicted word, see Kutas & Federmeier, 2011; Nieuwland, 2019; Van Petten & Luka, 2012; for review). A major difficulty for interpreting N400 effects in language processing however is that it has been difficult to establish whether N400 target word modulations conclusively reflect prediction rather than (at least partly) ease of integration. In the present exploratory study, we attempted to distinguish lexical prediction (i.e. ‘top-down’ activation) from lexical integration (i.e. ‘bottom-up’ activation) accounts of ERP N400 modulations in language processing.
  • Martinez-Conde, S., Alexander, R. G., Blum, D., Britton, N., Lipska, B. K., Quirk, G. J., Swiss, J. I., Willems, R. M., & Macknik, S. L. (2019). The storytelling brain: How neuroscience stories help bridge the gap between research and society. The Journal of Neuroscience, 39(42), 8285-8290. doi:10.1523/JNEUROSCI.1180-19.2019.

    Abstract

    Active communication between researchers and society is necessary for the scientific community’s involvement in developing science-based policies. This need is recognized by governmental and funding agencies that compel scientists to increase their public engagement and disseminate research findings in an accessible fashion. Storytelling techniques can help convey science by engaging people’s imagination and emotions. Yet, many researchers are uncertain about how to approach scientific storytelling, or feel they lack the tools to undertake it. Here we explore some of the techniques intrinsic to crafting scientific narratives, as well as the reasons why scientific storytelling may be an optimal way of communicating research to nonspecialists. We also point out current communication gaps between science and society, particularly in the context of neurodiverse audiences and those that include neurological and psychiatric patients. Present shortcomings may turn into areas of synergy with the potential to link neuroscience education, research, and advocacy.
  • Misersky, J., Majid, A., & Snijders, T. M. (2019). Grammatical gender in German influences how role-nouns are interpreted: Evidence from ERPs. Discourse Processes, 56(8), 643-654. doi:10.1080/0163853X.2018.1541382.

    Abstract

    Grammatically masculine role-nouns (e.g., Studenten-masc.‘students’) can refer to men and women, but may favor an interpretation where only men are considered the referent. If true, this has implications for a society aiming to achieve equal representation in the workplace since, for example, job adverts use such role descriptions. To investigate the interpretation of role-nouns, the present ERP study assessed grammatical gender processing in German. Twenty participants read sentences where a role-noun (masculine or feminine) introduced a group of people, followed by a congruent (masculine–men, feminine–women) or incongruent (masculine–women, feminine–men) continuation. Both for feminine-men and masculine-women continuations a P600 (500 to 800 ms) was observed; another positivity was already present from 300 to 500 ms for feminine-men continuations, but critically not for masculine-women continuations. The results imply a male-biased rather than gender-neutral interpretation of the masculine—despite widespread usage of the masculine as a gender-neutral form—suggesting masculine forms are inadequate for representing genders equally.
  • Mongelli, V., Meijs, E. L., Van Gaal, S., & Hagoort, P. (2019). No language unification without neural feedback: How awareness affects sentence processing. NeuroImage, 202: 116063. doi:10.1016/j.neuroimage.2019.116063.

    Abstract

    How does the human brain combine a finite number of words to form an infinite variety of sentences? According to the Memory, Unification and Control (MUC) model, sentence processing requires long-range feedback from the left inferior frontal cortex (LIFC) to left posterior temporal cortex (LPTC). Single word processing however may only require feedforward propagation of semantic information from sensory regions to LPTC. Here we tested the claim that long-range feedback is required for sentence processing by reducing visual awareness of words using a masking technique. Masking disrupts feedback processing while leaving feedforward processing relatively intact. Previous studies have shown that masked single words still elicit an N400 ERP effect, a neural signature of semantic incongruency. However, whether multiple words can be combined to form a sentence under reduced levels of awareness is controversial. To investigate this issue, we performed two experiments in which we measured electroencephalography (EEG) while 40 subjects performed a masked priming task. Words were presented either successively or simultaneously, thereby forming a short sentence that could be congruent or incongruent with a target picture. This sentence condition was compared with a typical single word condition. In the masked condition we only found an N400 effect for single words, whereas in the unmasked condition we observed an N400 effect for both unmasked sentences and single words. Our findings suggest that long-range feedback processing is required for sentence processing, but not for single word processing.
  • Nieuwland, M. S., Coopmans, C. W., & Sommers, R. P. (2019). Distinguishing old from new referents during discourse comprehension: Evidence from ERPs and oscillations. Frontiers in Human Neuroscience, 13: 398. doi:10.3389/fnhum.2019.00398.

    Abstract

    In this EEG study, we used pre-registered and exploratory ERP and time-frequency analyses to investigate the resolution of anaphoric and non-anaphoric noun phrases during discourse comprehension. Participants listened to story contexts that described two antecedents, and subsequently read a target sentence with a critical noun phrase that lexically matched one antecedent (‘old’), matched two antecedents (‘ambiguous’), partially matched one antecedent in terms of semantic features (‘partial-match’), or introduced another referent (non-anaphoric, ‘new’). After each target sentence, participants judged whether the noun referred back to an antecedent (i.e., an ‘old/new’ judgment), which was easiest for ambiguous nouns and hardest for partially matching nouns. The noun-elicited N400 ERP component demonstrated initial sensitivity to repetition and semantic overlap, corresponding to repetition and semantic priming effects, respectively. New and partially matching nouns both elicited a subsequent frontal positivity, which suggested that partially matching anaphors may have been processed as new nouns temporarily. ERPs in an even later time window and ERPs time-locked to sentence-final words suggested that new and partially matching nouns had different effects on comprehension, with partially matching nouns incurring additional processing costs up to the end of the sentence. In contrast to the ERP results, the time-frequency results primarily demonstrated sensitivity to noun repetition, and did not differentiate partially matching anaphors from new nouns. In sum, our results show the ERP and time-frequency effects of referent repetition during discourse comprehension, and demonstrate the potentially demanding nature of establishing the anaphoric meaning of a novel noun.
  • Nieuwland, M. S. (2019). Do ‘early’ brain responses reveal word form prediction during language comprehension? A critical review. Neuroscience and Biobehavioral Reviews, 96, 367-400. doi:10.1016/j.neubiorev.2018.11.019.

    Abstract

    Current theories of language comprehension posit that readers and listeners routinely try to predict the meaning but also the visual or sound form of upcoming words. Whereas most neuroimaging studies on word prediction focus on the N400 ERP or its magnetic equivalent, various studies claim that word form prediction manifests itself in ‘early’, pre-N400 brain responses (e.g., ELAN, M100, P130, N1, P2, N200/PMN, N250). Modulations of these components are often taken as evidence that word form prediction impacts early sensory processes (the sensory hypothesis) or, alternatively, the initial stages of word recognition before word meaning is integrated with sentence context (the recognition hypothesis). Here, I comprehensively review studies on sentence- or discourse-level language comprehension that report such effects of prediction on early brain responses. I conclude that the reported evidence for the sensory hypothesis or word recognition hypothesis is weak and inconsistent, and highlight the urgent need for replication of previous findings. I discuss the implications and challenges to current theories of linguistic prediction and suggest avenues for future research.
  • Ostarek, M., Van Paridon, J., & Montero-Melis, G. (2019). Sighted people’s language is not helpful for blind individuals’ acquisition of typical animal colors. Proceedings of the National Academy of Sciences of the United States of America, 116(44), 21972-21973. doi:10.1073/pnas.1912302116.
  • Peeters, D., Vanlangendonck, F., Rüschemeyer, S.-A., & Dijkstra, T. (2019). Activation of the language control network in bilingual visual word recognition. Cortex, 111, 63-73. doi:10.1016/j.cortex.2018.10.012.

    Abstract

    Research into bilingual language production has identified a language control network that subserves control operations when bilinguals produce speech. Here we explore which brain areas are recruited for control purposes in bilingual language comprehension. In two experimental fMRI sessions, Dutch-English unbalanced bilinguals read words that differed in cross-linguistic form and meaning overlap across their two languages. The need for control operations was further manipulated by varying stimulus list composition across the two experimental sessions. We observed activation of the language control network in bilingual language comprehension as a function of both cross-linguistic form and meaning overlap and stimulus list composition. These findings suggest that the language control network is shared across bilingual language production and comprehension. We argue that activation of the language control network in language comprehension allows bilinguals to quickly and efficiently grasp the context-relevant meaning of words.

    Additional information

    1-s2.0-S0010945218303459-mmc1.docx
  • Peeters, D. (2019). Virtual reality: A game-changing method for the language sciences. Psychonomic Bulletin & Review, 26(3), 894-900. doi:10.3758/s13423-019-01571-3.

    Abstract

    This paper introduces virtual reality as an experimental method for the language sciences and provides a review of recent studies using the method to answer fundamental, psycholinguistic research questions. It is argued that virtual reality demonstrates that ecological validity and experimental control should not be conceived of as two extremes on a continuum, but rather as two orthogonal factors. Benefits of using virtual reality as an experimental method include that in a virtual environment, as in the real world, there is no artificial spatial divide between participant and stimulus. Moreover, virtual reality experiments do not necessarily have to include a repetitive trial structure or an unnatural experimental task. Virtual agents outperform experimental confederates in terms of the consistency and replicability of their behaviour, allowing for reproducible science across participants and research labs. The main promise of virtual reality as a tool for the experimental language sciences, however, is that it shifts theoretical focus towards the interplay between different modalities (e.g., speech, gesture, eye gaze, facial expressions) in dynamic and communicative real-world environments, complementing studies that focus on one modality (e.g. speech) in isolation.
  • Preisig, B., Sjerps, M. J., Kösem, A., & Riecke, L. (2019). Dual-site high-density 4Hz transcranial alternating current stimulation applied over auditory and motor cortical speech areas does not influence auditory-motor mapping. Brain Stimulation, 12(3), 775-777. doi:10.1016/j.brs.2019.01.007.
  • Preisig, B., & Sjerps, M. J. (2019). Hemispheric specializations affect interhemispheric speech sound integration during duplex perception. The Journal of the Acoustical Society of America, 145, EL190-EL196. doi:10.1121/1.5092829.

    Abstract

    The present study investigated whether speech-related spectral information benefits from initially predominant right or left hemisphere processing. Normal hearing individuals categorized speech sounds composed of an ambiguous base (perceptually intermediate between /ga/ and /da/), presented to one ear, and a disambiguating low or high F3 chirp presented to the other ear. Shorter response times were found when the chirp was presented to the left ear than to the right ear (inducing initially right-hemisphere chirp processing), but no between-ear differences were found in the strength of overall integration. The results are in line with the assumptions of a right hemispheric dominance for spectral processing.

    Additional information

    Supplementary material
  • Sakarias, M., & Flecken, M. (2019). Keeping the result in sight and mind: General cognitive principles and language-specific influences in the perception and memory of resultative events. Cognitive Science, 43(1), 1-30. doi:10.1111/cogs.12708.

    Abstract

    We study how people attend to and memorize endings of events that differ in the degree to which objects in them are affected by an action: Resultative events show objects that undergo a visually salient change in state during the course of the event (peeling a potato), and non‐resultative events involve objects that undergo no, or only partial state change (stirring in a pan). We investigate general cognitive principles, and potential language‐specific influences, in verbal and nonverbal event encoding and memory, across two experiments with Dutch and Estonian participants. Estonian marks a viewer's perspective on an event's result obligatorily via grammatical case on direct object nouns: Objects undergoing a partial/full change in state in an event are marked with partitive/accusative case, respectively. Therefore, we hypothesized increased saliency of object states and event results in Estonian speakers, as compared to speakers of Dutch. Findings show (a) a general cognitive principle of attending carefully to endings of resultative events, implying cognitive saliency of object states in event processing; (b) a language‐specific boost on attention and memory of event results under verbal task demands in Estonian speakers. Results are discussed in relation to theories of event cognition, linguistic relativity, and thinking for speaking.
  • Schoffelen, J.-M., Oostenveld, R., Lam, N. H. L., Udden, J., Hulten, A., & Hagoort, P. (2019). A 204-subject multimodal neuroimaging dataset to study language processing. Scientific Data, 6(1): 17. doi:10.1038/s41597-019-0020-y.

    Abstract

    This dataset, colloquially known as the Mother Of Unification Studies (MOUS) dataset, contains multimodal neuroimaging data that has been acquired from 204 healthy human subjects. The neuroimaging protocol consisted of magnetic resonance imaging (MRI) to derive information at high spatial resolution about brain anatomy and structural connections, and functional data during task, and at rest. In addition, magnetoencephalography (MEG) was used to obtain high temporal resolution electrophysiological measurements during task, and at rest. All subjects performed a language task, during which they processed linguistic utterances that either consisted of normal or scrambled sentences. Half of the subjects were reading the stimuli, the other half listened to the stimuli. The resting state measurements consisted of 5 minutes eyes-open for the MEG and 7 minutes eyes-closed for fMRI. The neuroimaging data, as well as the information about the experimental events are shared according to the Brain Imaging Data Structure (BIDS) format. This unprecedented neuroimaging language data collection allows for the investigation of various aspects of the neurobiological correlates of language.
  • Schoot, L., Hagoort, P., & Segaert, K. (2019). Stronger syntactic alignment in the presence of an interlocutor. Frontiers in Psychology, 10: 685. doi:10.3389/fpsyg.2019.00685.

    Abstract

    Speakers are influenced by the linguistic context: hearing one syntactic alternative leads to an increased chance that the speaker will repeat this structure in the subsequent utterance (i.e., syntactic priming, or structural persistence). Top-down influences, such as whether a conversation partner (or, interlocutor) is present, may modulate the degree to which syntactic priming occurs. In the current study, we indeed show that the magnitude of syntactic alignment increases when speakers are interacting with an interlocutor as opposed to doing the experiment alone. The structural persistence effect for passive sentences is stronger in the presence of an interlocutor than when no interlocutor is present (i.e., when the participant is primed by a recording). We did not find evidence, however, that a speaker’s syntactic priming magnitude is influenced by the degree of their conversation partner’s priming magnitude. Together, these results support a mediated account of syntactic priming, in which syntactic choices are not only affected by preceding linguistic input, but also by top-down influences, such as the speakers’ communicative intent.
  • Schubotz, L., Ozyurek, A., & Holler, J. (2019). Age-related differences in multimodal recipient design: Younger, but not older adults, adapt speech and co-speech gestures to common ground. Language, Cognition and Neuroscience, 34(2), 254-271. doi:10.1080/23273798.2018.1527377.

    Abstract

    Speakers can adapt their speech and co-speech gestures based on knowledge shared with an addressee (common ground-based recipient design). Here, we investigate whether these adaptations are modulated by the speaker’s age and cognitive abilities. Younger and older participants narrated six short comic stories to a same-aged addressee. Half of each story was known to both participants, the other half only to the speaker. The two age groups did not differ in terms of the number of words and narrative events mentioned per narration, or in terms of gesture frequency, gesture rate, or percentage of events expressed multimodally. However, only the younger participants reduced the amount of verbal and gestural information when narrating mutually known as opposed to novel story content. Age-related differences in cognitive abilities did not predict these differences in common ground-based recipient design. The older participants’ communicative behaviour may therefore also reflect differences in social or pragmatic goals.

    Additional information

    plcp_a_1527377_sm4510.pdf
  • Seymour, R. A., Rippon, G., Gooding-Williams, G., Schoffelen, J.-M., & Kessler, K. (2019). Dysregulated oscillatory connectivity in the visual system in autism spectrum disorder. Brain, 142(10), 3294-3305. doi:10.1093/brain/awz214.

    Abstract

    Autism spectrum disorder is increasingly associated with atypical perceptual and sensory symptoms. Here we explore the hypothesis that aberrant sensory processing in autism spectrum disorder could be linked to atypical intra- (local) and interregional (global) brain connectivity. To elucidate oscillatory dynamics and connectivity in the visual domain we used magnetoencephalography and a simple visual grating paradigm with a group of 18 adolescent autistic participants and 18 typically developing control subjects. Both groups showed similar increases in gamma (40–80 Hz) and decreases in alpha (8–13 Hz) frequency power in occipital cortex. However, systematic group differences emerged when analysing intra- and interregional connectivity in detail. First, directed connectivity was estimated using non-parametric Granger causality between visual areas V1 and V4. Feedforward V1-to-V4 connectivity, mediated by gamma oscillations, was equivalent between autism spectrum disorder and control groups, but importantly, feedback V4-to-V1 connectivity, mediated by alpha (8–13 Hz) oscillations, was significantly reduced in the autism spectrum disorder group. This reduction was positively correlated with autistic quotient scores, consistent with an atypical visual hierarchy in autism, characterized by reduced top-down modulation of visual input via alpha-band oscillations. Second, at the local level in V1, coupling of alpha-phase to gamma amplitude (alpha-gamma phase amplitude coupling) was reduced in the autism spectrum disorder group. This implies dysregulated local visual processing, with gamma oscillations decoupled from patterns of wider alpha-band phase synchrony (i.e. reduced phase amplitude coupling), possibly due to an excitation-inhibition imbalance. More generally, these results are in agreement with predictive coding accounts of neurotypical perception and indicate that visual processes in autism are less modulated by contextual feedback information.
  • Sharoh, D., Van Mourik, T., Bains, L. J., Segaert, K., Weber, K., Hagoort, P., & Norris, D. (2019). Laminar specific fMRI reveals directed interactions in distributed networks during language processing. Proceedings of the National Academy of Sciences of the United States of America, 116(42), 21185-21190. doi:10.1073/pnas.1907858116.

    Abstract

    Interactions between top-down and bottom-up information streams are integral to brain function but challenging to measure noninvasively. Laminar resolution, functional MRI (lfMRI) is sensitive to depth-dependent properties of the blood oxygen level-dependent (BOLD) response, which can be potentially related to top-down and bottom-up signal contributions. In this work, we used lfMRI to dissociate the top-down and bottom-up signal contributions to the left occipitotemporal sulcus (LOTS) during word reading. We further demonstrate that laminar resolution measurements could be used to identify condition-specific distributed networks on the basis of whole-brain connectivity patterns specific to the depth-dependent BOLD signal. The networks corresponded to top-down and bottom-up signal pathways targeting the LOTS during word reading. We show that reading increased the top-down BOLD signal observed in the deep layers of the LOTS and that this signal uniquely related to the BOLD response in other language-critical regions. These results demonstrate that lfMRI can reveal important patterns of activation that are obscured at standard resolution. In addition to differences in activation strength as a function of depth, we also show meaningful differences in the interaction between signals originating from different depths both within a region and with the rest of the brain. We thus show that lfMRI allows the noninvasive measurement of directed interaction between brain regions and is capable of resolving different connectivity patterns at submillimeter resolution, something previously considered to be exclusively in the domain of invasive recordings.
  • Sjerps, M. J., & Chang, E. F. (2019). The cortical processing of speech sounds in the temporal lobe. In P. Hagoort (Ed.), Human language: From genes and brain to behavior (pp. 361-379). Cambridge, MA: MIT Press.
  • Sjerps, M. J., Fox, N. P., Johnson, K., & Chang, E. F. (2019). Speaker-normalized sound representations in the human auditory cortex. Nature Communications, 10: 2465. doi:10.1038/s41467-019-10365-z.

    Abstract

    The acoustic dimensions that distinguish speech sounds (like the vowel differences in “boot” and “boat”) also differentiate speakers’ voices. Therefore, listeners must normalize across speakers without losing linguistic information. Past behavioral work suggests an important role for auditory contrast enhancement in normalization: preceding context affects listeners’ perception of subsequent speech sounds. Here, using intracranial electrocorticography in humans, we investigate whether and how such context effects arise in auditory cortex. Participants identified speech sounds that were preceded by phrases from two different speakers whose voices differed along the same acoustic dimension as target words (the lowest resonance of the vocal tract). In every participant, target vowels evoke a speaker-dependent neural response that is consistent with the listener’s perception, and which follows from a contrast enhancement model. Auditory cortex processing thus displays a critical feature of normalization, allowing listeners to extract meaningful content from the voices of diverse speakers.

    Additional information

    41467_2019_10365_MOESM1_ESM.pdf
  • De Swart, P., & Van Bergen, G. (2019). How animacy and verbal information influence V2 sentence processing: Evidence from eye movements. Open Linguistics, 5(1), 630-649. doi:10.1515/opli-2019-0035.

    Abstract

    There exists a clear association between animacy and the grammatical function of transitive subject. The grammar of some languages requires the transitive subject to be high in animacy, or at least higher than the object. A similar animacy preference has been observed in processing studies in languages without such a categorical animacy effect. This animacy preference has been mainly established in structures in which either one or both arguments are provided before the verb. Our goal was to establish (i) whether this preference can already be observed before any argument is provided, and (ii) whether this preference is mediated by verbal information. To this end we exploited the V2 property of Dutch which allows the verb to precede its arguments. Using a visual-world eye-tracking paradigm we presented participants with V2 structures with either an auxiliary (e.g. Gisteren heeft X … ‘Yesterday, X has …’) or a lexical main verb (e.g. Gisteren motiveerde X … ‘Yesterday, X motivated …’) and we measured looks to the animate referent. The results indicate that the animacy preference can already be observed before arguments are presented and that the selectional restrictions of the verb mediate this bias, but do not override it completely.
  • Takashima, A., Bakker-Marshall, I., Van Hell, J. G., McQueen, J. M., & Janzen, G. (2019). Neural correlates of word learning in children. Developmental Cognitive Neuroscience, 37: 100649. doi:10.1016/j.dcn.2019.100649.

    Abstract

    Memory representations of words are thought to undergo changes with consolidation: Episodic memories of novel words are transformed into lexical representations that interact with other words in the mental dictionary. Behavioral studies have shown that this lexical integration process is enhanced when there is more time for consolidation. Neuroimaging studies have further revealed that novel word representations are initially represented in a hippocampally-centered system, whereas left posterior middle temporal cortex activation increases with lexicalization. In this study, we measured behavioral and brain responses to newly-learned words in children. Two groups of Dutch children, aged 8–10 and 14–16 years, were trained on 30 novel Japanese words depicting novel concepts. Children were tested on word-forms, word-meanings, and the novel words’ influence on existing word processing immediately after training, and again after a week. In line with the adult findings, hippocampal involvement decreased with time. Lexical integration, however, was not observed immediately or after a week, neither behaviorally nor neurally. It appears that time alone is not always sufficient for lexical integration to occur. We suggest that other factors (e.g., the novelty of the concepts and familiarity with the language the words are derived from) might also influence the integration process.

    Additional information

    Supplementary data
  • Takashima, A., & Verhoeven, L. (2019). Radical repetition effects in beginning learners of Chinese as a foreign language reading. Journal of Neurolinguistics, 50, 71-81. doi:10.1016/j.jneuroling.2018.03.001.

    Abstract

    The aim of the present study was to examine whether repetition of radicals during training of Chinese characters leads to better word acquisition performance in beginning learners of Chinese as a foreign language. Thirty Dutch university students were trained on 36 Chinese one-character words for their pronunciations and meanings. They were also exposed to the specifics of the radicals, that is, for phonetic radicals, the associated pronunciation was explained, and for semantic radicals the associated categorical meanings were explained. Results showed that repeated exposure to phonetic and semantic radicals through character pronunciation and meaning trainings indeed induced better understanding of those radicals that were shared among different characters. Furthermore, characters in the training set that shared phonetic radicals were pronounced better than those that did not. Repetition of semantic radicals across different characters, however, hindered the learning of exact meanings. Students generally confused the meanings of other characters that shared the semantic radical. The study shows that in the initial stage of learning, overlapping information from the shared radicals is effectively learned. Acquisition of the specifics of individual characters, however, requires more training.

    Additional information

    Supplementary data
  • Udden, J., Hulten, A., Bendt, K., Mineroff, Z., Kucera, K. S., Vino, A., Fedorenko, E., Hagoort, P., & Fisher, S. E. (2019). Towards robust functional neuroimaging genetics of cognition. Journal of Neuroscience, 39(44), 8778-8787. doi:10.1523/JNEUROSCI.0888-19.2019.

    Abstract

    A commonly held assumption in cognitive neuroscience is that, because measures of human brain function are closer to underlying biology than distal indices of behavior/cognition, they hold more promise for uncovering genetic pathways. Supporting this view is an influential fMRI-based study of sentence reading/listening by Pinel et al. (2012), who reported that common DNA variants in specific candidate genes were associated with altered neural activation in language-related regions of healthy individuals that carried them. In particular, different single-nucleotide polymorphisms (SNPs) of FOXP2 correlated with variation in task-based activation in left inferior frontal and precentral gyri, whereas a SNP at the KIAA0319/TTRAP/THEM2 locus was associated with variable functional asymmetry of the superior temporal sulcus. Here, we directly test each claim using a closely matched neuroimaging genetics approach in independent cohorts comprising 427 participants, four times larger than the original study of 94 participants. Despite demonstrating power to detect associations with substantially smaller effect sizes than those of the original report, we do not replicate any of the reported associations. Moreover, formal Bayesian analyses reveal substantial to strong evidence in support of the null hypothesis (no effect). We highlight key aspects of the original investigation, common to functional neuroimaging genetics studies, which could have yielded elevated false-positive rates. Genetic accounts of individual differences in cognitive functional neuroimaging are likely to be as complex as behavioral/cognitive tests, involving many common genetic variants, each of tiny effect. Reliable identification of true biological signals requires large sample sizes, power calculations, and validation in independent cohorts with equivalent paradigms.

    SIGNIFICANCE STATEMENT A pervasive idea in neuroscience is that neuroimaging-based measures of brain function, being closer to underlying neurobiology, are more amenable to uncovering links to genetics. This is a core assumption of prominent studies that associate common DNA variants with altered activations in task-based fMRI, despite using samples (10–100 people) that lack power for detecting the tiny effect sizes typical of genetically complex traits. Here, we test central findings from one of the most influential prior studies. Using matching paradigms and substantially larger samples, coupled to power calculations and formal Bayesian statistics, our data strongly refute the original findings. We demonstrate that neuroimaging genetics with task-based fMRI should be subject to the same rigorous standards as studies of other complex traits.
  • Van den Broek, G. S. E., Segers, E., Van Rijn, H., Takashima, A., & Verhoeven, L. (2019). Effects of elaborate feedback during practice tests: Costs and benefits of retrieval prompts. Journal of Experimental Psychology: Applied, 25(4), 588-601. doi:10.1037/xap0000212.

    Abstract

    This study explores the effect of feedback with hints on students’ recall of words. In three classroom experiments, high school students individually practiced vocabulary words through computerized retrieval practice with either standard show-answer feedback (display of answer) or hints feedback after incorrect responses. Hints feedback gave students a second chance to find the correct response using orthographic (Experiment 1), mnemonic (Experiment 2), or cross-language hints (Experiment 3). During practice, hints led to a shift of practice time from further repetitions to longer feedback processing but did not reduce (repeated) errors. There was no effect of feedback on later recall except when the hints from practice were also available on the test, indicating limited transfer of practice with hints to later recall without hints (in Experiments 1 and 2). Overall, hints feedback was not preferable over show-answer feedback. The common notion that hints are beneficial may not hold when the total practice time is limited.
  • Van Berkum, J. J. A., & Nieuwland, M. S. (2019). A cognitive neuroscience perspective on language comprehension in context. In P. Hagoort (Ed.), Human language: From genes and brain to behavior (pp. 429-442). Cambridge, MA: MIT Press.
  • Van Bergen, G., Flecken, M., & Wu, R. (2019). Rapid target selection of object categories based on verbs: Implications for language-categorization interactions. Psychophysiology, 56(9): e13395. doi:10.1111/psyp.13395.

    Abstract

    Although much is known about how nouns facilitate object categorization, very little is known about how verbs (e.g., posture verbs such as stand or lie) facilitate object categorization. Native Dutch speakers are a unique population to investigate this issue with because the configurational categories distinguished by staan (to stand) and liggen (to lie) are inherent in everyday Dutch language. Using an ERP component (N2pc), four experiments demonstrate that selection of posture verb categories is rapid (between 220 and 320 ms). The effect was attenuated, though present, when removing the perceptual distinction between categories. A similar attenuated effect was obtained in native English speakers, where the category distinction is less familiar, and when category labels were implicit for native Dutch speakers. Our results are among the first to demonstrate that category search based on verbs can be rapid, although extensive linguistic experience and explicit labels may not be necessary to facilitate categorization in this case.

    Additional information

    psyp13395-sup-0001-appendixs1.pdf
  • Van Es, M. W. J., & Schoffelen, J.-M. (2019). Stimulus-induced gamma power predicts the amplitude of the subsequent visual evoked response. NeuroImage, 186, 703-712. doi:10.1016/j.neuroimage.2018.11.029.

    Abstract

    The efficiency of neuronal information transfer in activated brain networks may affect behavioral performance. Gamma-band synchronization has been proposed to be a mechanism that facilitates neuronal processing of behaviorally relevant stimuli. In line with this, it has been shown that strong gamma-band activity in visual cortical areas leads to faster responses to a visual go cue. We investigated whether there are directly observable consequences of trial-by-trial fluctuations in non-invasively observed gamma-band activity on the neuronal response. Specifically, we hypothesized that the amplitude of the visual evoked response to a go cue can be predicted by gamma power in the visual system, in the window preceding the evoked response. Thirty-three human subjects (22 female) performed a visual speeded response task while their magnetoencephalogram (MEG) was recorded. The participants had to respond to a pattern reversal of a concentric moving grating. We estimated single trial stimulus-induced visual cortical gamma power, and correlated this with the estimated single trial amplitude of the most prominent event-related field (ERF) peak within the first 100 ms after the pattern reversal. In parieto-occipital cortical areas, the amplitude of the ERF correlated positively with gamma power, and correlated negatively with reaction times. No effects were observed for the alpha and beta frequency bands, despite clear stimulus onset induced modulation at those frequencies. These results support a mechanistic model, in which gamma-band synchronization enhances the neuronal gain to relevant visual input, thus leading to more efficient downstream processing and to faster responses.
  • Varma, S., Takashima, A., Fu, L., & Kessels, R. P. C. (2019). Mindwandering propensity modulates episodic memory consolidation. Aging Clinical and Experimental Research, 31(11), 1601-1607. doi:10.1007/s40520-019-01251-1.

    Abstract

    Research into strategies that can combat episodic memory decline in healthy older adults has gained widespread attention over the years. Evidence suggests that a short period of rest immediately after learning can enhance memory consolidation, as compared to engaging in cognitive tasks. However, a recent study in younger adults has shown that post-encoding engagement in a working memory task leads to the same degree of memory consolidation as from post-encoding rest. Here, we tested whether this finding can be extended to older adults. Using a delayed recognition test, we compared the memory consolidation of word–picture pairs learned prior to 9 min of rest or a 2-Back working memory task, and examined its relationship with executive functioning and mindwandering propensity. Our results show that (1) similar to younger adults, memory for the word–picture associations did not differ when encoding was followed by post-encoding rest or 2-Back task and (2) older adults with higher mindwandering propensity retained more word–picture associations encoded prior to rest relative to those encoded prior to the 2-Back task, whereas participants with lower mindwandering propensity had better memory performance for the pairs encoded prior to the 2-Back task. Overall, our results indicate that the degree of episodic memory consolidation during both active and passive post-encoding periods depends on individual mindwandering tendency.

    Additional information

    Supplementary material
  • Warren, C. M., Tona, K. D., Ouwerkerk, L., Van Paridon, J., Poletiek, F. H., Bosch, J. A., & Nieuwenhuis, S. (2019). The neuromodulatory and hormonal effects of transcutaneous vagus nerve stimulation as evidenced by salivary alpha amylase, salivary cortisol, pupil diameter, and the P3 event-related potential. Brain Stimulation, 12(3), 635-642. doi:10.1016/j.brs.2018.12.224.

    Abstract

    Background
    Transcutaneous vagus nerve stimulation (tVNS) is a new, non-invasive technique being investigated as an intervention for a variety of clinical disorders, including epilepsy and depression. It is thought to exert its therapeutic effect by increasing central norepinephrine (NE) activity, but the evidence supporting this notion is limited.

    Objective
    In order to test for an impact of tVNS on psychophysiological and hormonal indices of noradrenergic function, we applied tVNS in concert with assessment of salivary alpha amylase (SAA) and cortisol, pupil size, and electroencephalograph (EEG) recordings.

    Methods
    Across three experiments, we applied real and sham tVNS to 61 healthy participants while they performed a set of simple stimulus-discrimination tasks. Before and after the task, as well as during one break, participants provided saliva samples and had their pupil size recorded. EEG was recorded throughout the task. The target for tVNS was the cymba conchae, which is heavily innervated by the auricular branch of the vagus nerve. Sham stimulation was applied to the ear lobe.

    Results
    P3 amplitude was not affected by tVNS (Experiment 1A: N=24; Experiment 1B: N=20; Bayes factor supporting null model=4.53), nor was pupil size (Experiment 2: N=16; interaction of treatment and time: p=0.79). However, tVNS increased SAA (Experiments 1A and 2: N=25) and attenuated the decline of salivary cortisol compared to sham (Experiment 2: N=17), as indicated by significant interactions involving treatment and time (p=.023 and p=.040, respectively).

    Conclusion
    These findings suggest that tVNS modulates hormonal indices but not psychophysiological indices of noradrenergic function.
  • Weber, K., Christiansen, M., Indefrey, P., & Hagoort, P. (2019). Primed from the start: Syntactic priming during the first days of language learning. Language Learning, 69(1), 198-221. doi:10.1111/lang.12327.

    Abstract

    New linguistic information must be integrated into our existing language system. Using a novel experimental task that incorporates a syntactic priming paradigm into artificial language learning, we investigated how new grammatical regularities and words are learned. This innovation allowed us to control the language input the learner received, while the syntactic priming paradigm provided insight into the nature of the underlying syntactic processing machinery. The results of the present study pointed to facilitatory syntactic processing effects within the first days of learning: Syntactic and lexical priming effects revealed participants’ sensitivity to both novel words and word orders. This suggested that novel syntactic structures and their meaning (form–function mapping) can be acquired rapidly through incidental learning. More generally, our study indicated similar mechanisms for learning and processing in both artificial and natural languages, with implications for the relationship between first and second language learning.
  • Weber, K., Micheli, C., Ruigendijk, E., & Rieger, J. (2019). Sentence processing is modulated by the current linguistic environment and a priori information: An fMRI study. Brain and Behavior, 9(7): e01308. doi:10.1002/brb3.1308.

    Abstract

    Introduction
    Words are not processed in isolation but in rich contexts that are used to modulate and facilitate language comprehension. Here, we investigate distinct neural networks underlying two types of contexts, the current linguistic environment and verb‐based syntactic preferences.

    Methods
    We had two main manipulations. The first was the current linguistic environment, where the relative frequencies of two syntactic structures (prepositional object [PO] and double-object [DO]) would either follow everyday linguistic experience or not. The second concerned the preference toward one or the other structure depending on the verb, learned in everyday language use and stored in memory. German participants read PO and DO sentences in German while brain activity was measured with functional magnetic resonance imaging.

    Results
    First, the anterior cingulate cortex (ACC) showed a pattern of activation that integrated the current linguistic environment with everyday linguistic experience. When the input did not match everyday experience, the unexpected frequent structure showed higher activation in the ACC than the other conditions and more connectivity from the ACC to posterior parts of the language network. Second, verb‐based surprisal of seeing a structure given a verb (PO verb preference but DO structure presentation) resulted, within the language network (left inferior frontal and left middle/superior temporal gyrus) and the precuneus, in increased activation compared to a predictable verb‐structure pairing.

    Conclusion
    In conclusion, (1) beyond the canonical language network, brain areas engaged in prediction and error signaling, such as the ACC, might use the statistics of syntactic structures to modulate language processing, (2) the language network is directly engaged in processing verb preferences. These two networks show distinct influences on sentence processing.

    Additional information

    Supporting information
  • Zhu, Z., Bastiaansen, M. C. M., Hakun, J. G., Petersson, K. M., Wang, S., & Hagoort, P. (2019). Semantic unification modulates N400 and BOLD signal change in the brain: A simultaneous EEG-fMRI study. Journal of Neurolinguistics, 52: 100855. doi:10.1016/j.jneuroling.2019.100855.

    Abstract

    Semantic unification during sentence comprehension has been associated with amplitude change of the N400 in event-related potential (ERP) studies, and activation in the left inferior frontal gyrus (IFG) in functional magnetic resonance imaging (fMRI) studies. However, the specificity of this activation to semantic unification remains unknown. To more closely examine the brain processes involved in semantic unification, we employed simultaneous EEG-fMRI to time-lock the semantic-unification-related N400 change, and integrated trial-by-trial variation in both N400 and BOLD change beyond the condition-level BOLD change difference measured in traditional fMRI analyses. Participants read sentences in which semantic unification load was parametrically manipulated by varying cloze probability. Separately, ERP and fMRI results replicated previous findings, in that semantic unification load parametrically modulated the amplitude of N400 and cortical activation. Integrated EEG-fMRI analyses revealed a different pattern in which functional activity in the left IFG and bilateral supramarginal gyrus (SMG) was associated with N400 amplitude, with the left IFG activation and bilateral SMG activation being selective to the condition-level and trial-level of semantic unification load, respectively. By employing the EEG-fMRI integrated analyses, this study is among the first to shed light on how to integrate trial-level variation in language comprehension.
  • Zuidema, W., & Fitz, H. (2019). Key issues and future directions: Models of human language and speech processing. In P. Hagoort (Ed.), Human language: From genes and brain to behavior (pp. 353-358). Cambridge, MA: MIT Press.
  • Aparicio, X., Heidlmayr, K., & Isel, F. (2017). Inhibition efficiency in highly proficient bilinguals and simultaneous interpreters: Evidence from language switching and Stroop tasks. Journal of Psycholinguistic Research, 46, 1427-1451. doi:10.1007/s10936-017-9501-3.

    Abstract

    The present behavioral study aimed to examine the impact of language control expertise on two domain-general control processes, i.e. active inhibition of competing representations and overcoming of inhibition. We compared how Simultaneous Interpreters (SI) and Highly Proficient Bilinguals—two groups assumed to differ in language control capacity—performed executive tasks involving specific inhibition processes. In Experiment 1 (language decision task), both active and overcoming of inhibition processes are involved, while in Experiment 2 (bilingual Stroop task) only interference suppression is supposed to be required. The results of Experiment 1 showed a language switching effect only for the highly proficient bilinguals, potentially because overcoming of inhibition requires more cognitive resources than in SI. Nevertheless, both groups performed similarly on the Stroop task in Experiment 2, which suggests that active inhibition may work similarly in both groups. These contrasting results suggest that overcoming of inhibition may be harder to master than active inhibition. Taken together, these data indicate that some executive control processes may be less sensitive to the degree of expertise in bilingual language control than others. Our findings lend support to psycholinguistic models of bilingualism postulating a higher-order mechanism regulating language activation.
  • Armeni, K., Willems, R. M., & Frank, S. (2017). Probabilistic language models in cognitive neuroscience: Promises and pitfalls. Neuroscience and Biobehavioral Reviews, 83, 579-588. doi:10.1016/j.neubiorev.2017.09.001.

    Abstract

    Cognitive neuroscientists of language comprehension study how neural computations relate to cognitive computations during comprehension. On the cognitive part of the equation, it is important that the computations and processing complexity are explicitly defined. Probabilistic language models can be used to give a computationally explicit account of language complexity during comprehension. Whereas such models have so far predominantly been evaluated against behavioral data, only recently have the models been used to explain neurobiological signals. Measures obtained from these models emphasize the probabilistic, information-processing view of language understanding and provide a set of tools that can be used for testing neural hypotheses about language comprehension. Here, we provide a cursory review of the theoretical foundations and example neuroimaging studies employing probabilistic language models. We highlight the advantages and potential pitfalls of this approach and indicate avenues for future research.
  • De Boer, M., Kokal, I., Blokpoel, M., Liu, R., Stolk, A., Roelofs, K., Van Rooij, I., & Toni, I. (2017). Oxytocin modulates human communication by enhancing cognitive exploration. Psychoneuroendocrinology, 86, 64-72. doi:10.1016/j.psyneuen.2017.09.010.

    Abstract

    Oxytocin is a neuropeptide known to influence how humans share material resources. Here we explore whether oxytocin influences how we share knowledge. We focus on two distinguishing features of human communication, namely the ability to select communicative signals that disambiguate the many-to-many mappings that exist between a signal’s form and meaning, and adjustments of those signals to the presumed cognitive characteristics of the addressee (“audience design”). Fifty-five males participated in a randomized, double-blind, placebo controlled experiment involving the intranasal administration of oxytocin. The participants produced novel non-verbal communicative signals towards two different addressees, an adult or a child, in an experimentally-controlled live interactive setting. We found that oxytocin administration drives participants to generate signals of higher referential quality, i.e. signals that disambiguate more communicative problems; and to rapidly adjust those communicative signals to what the addressee understands. The combined effects of oxytocin on referential quality and audience design fit with the notion that oxytocin administration leads participants to explore more pervasively behaviors that can convey their intention, and diverse models of the addressees. These findings suggest that, besides affecting prosocial drive and salience of social cues, oxytocin influences how we share knowledge by promoting cognitive exploration.
  • Bosker, H. R., & Kösem, A. (2017). An entrained rhythm's frequency, not phase, influences temporal sampling of speech. In Proceedings of Interspeech 2017 (pp. 2416-2420). doi:10.21437/Interspeech.2017-73.

    Abstract

    Brain oscillations have been shown to track the slow amplitude fluctuations in speech during comprehension. Moreover, there is evidence that these stimulus-induced cortical rhythms may persist even after the driving stimulus has ceased. However, how exactly this neural entrainment shapes speech perception remains debated. This behavioral study investigated whether and how the frequency and phase of an entrained rhythm would influence the temporal sampling of subsequent speech. In two behavioral experiments, participants were presented with slow and fast isochronous tone sequences, followed by Dutch target words ambiguous between as /ɑs/ “ash” (with a short vowel) and aas /a:s/ “bait” (with a long vowel). Target words were presented at various phases of the entrained rhythm. Both experiments revealed effects of the frequency of the tone sequence on target word perception: fast sequences biased listeners to more long /a:s/ responses. However, no evidence for phase effects could be discerned. These findings show that an entrained rhythm’s frequency, but not phase, influences the temporal sampling of subsequent speech. These outcomes are compatible with theories suggesting that sensory timing is evaluated relative to entrained frequency. Furthermore, they suggest that phase tracking of (syllabic) rhythms by theta oscillations plays a limited role in speech parsing.
  • Bouhali, F., Mongelli, V., & Cohen, L. (2017). Musical literacy shifts asymmetries in the ventral visual cortex. NeuroImage, 156, 445-455. doi:10.1016/j.neuroimage.2017.04.027.

    Abstract

    The acquisition of literacy has a profound impact on the functional specialization and lateralization of the visual cortex. Due to the overall lateralization of the language network, specialization for printed words develops in the left occipitotemporal cortex, allegedly inducing a secondary shift of visual face processing to the right, in literate as compared to illiterate subjects. Applying the same logic to the acquisition of high-level musical literacy, we predicted that, in musicians as compared to non-musicians, occipitotemporal activations should show a leftward shift for music reading, and an additional rightward push for face perception. To test these predictions, professional musicians and non-musicians viewed pictures of musical notation, faces, words, tools and houses in the MRI, and laterality was assessed in the ventral stream combining ROI and voxel-based approaches. The results supported both predictions, and allowed us to locate the leftward shift to the inferior temporal gyrus and the rightward shift to the fusiform cortex. Moreover, these laterality shifts generalized to categories other than music and faces. Finally, correlation measures across subjects did not support a causal link between the leftward and rightward shifts. Thus the acquisition of an additional perceptual expertise extensively modifies the laterality pattern in the visual system.

    Additional information

    1-s2.0-S1053811917303208-mmc1.docx
  • Bulut, T., Hung, Y., Tzeng, O., & Wu, D. (2017). Neural correlates of processing sentences and compound words in Chinese. PLOS ONE, 12(12): e0188526. doi:10.1371/journal.pone.0188526.
  • Coco, M. I., Araujo, S., & Petersson, K. M. (2017). Disentangling stimulus plausibility and contextual congruency: Electro-physiological evidence for differential cognitive dynamics. Neuropsychologia, 96, 150-163. doi:10.1016/j.neuropsychologia.2016.12.008.

    Abstract

    Expectancy mechanisms are routinely used by the cognitive system in stimulus processing and in anticipation of appropriate responses. Electrophysiology research has documented negative shifts of brain activity when expectancies are violated within a local stimulus context (e.g., reading an implausible word in a sentence) or more globally between consecutive stimuli (e.g., a narrative of images with an incongruent end). In this EEG study, we examine the interaction between expectancies operating at the level of stimulus plausibility and at more global level of contextual congruency to provide evidence for, or against, a dissociation of the underlying processing mechanisms. We asked participants to verify the congruency of pairs of cross-modal stimuli (a sentence and a scene), which varied in plausibility. ANOVAs on ERP amplitudes in selected windows of interest show that congruency violation has longer-lasting (from 100 to 500 ms) and more widespread effects than plausibility violation (from 200 to 400 ms). We also observed critical interactions between these factors, whereby incongruent and implausible pairs elicited stronger negative shifts than their congruent counterpart, both early on (100–200 ms) and between 400–500 ms. Our results suggest that the integration mechanisms are sensitive to both global and local effects of expectancy in a modality independent manner. Overall, we provide novel insights into the interdependence of expectancy during meaning integration of cross-modal stimuli in a verification task.
  • Dai, B., McQueen, J. M., Hagoort, P., & Kösem, A. (2017). Pure linguistic interference during comprehension of competing speech signals. The Journal of the Acoustical Society of America, 141, EL249-EL254. doi:10.1121/1.4977590.

    Abstract

    Speech-in-speech perception can be challenging because the processing of competing acoustic and linguistic information leads to informational masking. Here, a method is proposed to isolate the linguistic component of informational masking while keeping the distractor's acoustic information unchanged. Participants performed a dichotic listening cocktail-party task before and after training on 4-band noise-vocoded sentences that became intelligible through the training. Distracting noise-vocoded speech interfered more with target speech comprehension after training (i.e., when intelligible) than before training (i.e., when unintelligible) at −3 dB SNR. These findings confirm that linguistic and acoustic information have distinct masking effects during speech-in-speech comprehension.
  • Fitz, H., & Chang, F. (2017). Meaningful questions: The acquisition of auxiliary inversion in a connectionist model of sentence production. Cognition, 166, 225-250. doi:10.1016/j.cognition.2017.05.008.

    Abstract

    Nativist theories have argued that language involves syntactic principles which are unlearnable from the input children receive. A paradigm case of these innate principles is the structure dependence of auxiliary inversion in complex polar questions (Chomsky, 1968, 1975, 1980). Computational approaches have focused on the properties of the input in explaining how children acquire these questions. In contrast, we argue that messages are structured in a way that supports structure dependence in syntax. We demonstrate this approach within a connectionist model of sentence production (Chang, 2009) which learned to generate a range of complex polar questions from a structured message without positive exemplars in the input. The model also generated different types of error in development that were similar in magnitude to those in children (e.g., auxiliary doubling, Ambridge, Rowland, & Pine, 2008; Crain & Nakayama, 1987). Through model comparisons we trace how meaning constraints and linguistic experience interact during the acquisition of auxiliary inversion. Our results suggest that auxiliary inversion rules in English can be acquired without innate syntactic principles, as long as it is assumed that speakers who ask complex questions express messages that are structured into multiple propositions.
  • Frank, S. L., & Willems, R. M. (2017). Word predictability and semantic similarity show distinct patterns of brain activity during language comprehension. Language, Cognition and Neuroscience, 32(9), 1192-1203. doi:10.1080/23273798.2017.1323109.

    Abstract

    We investigate the effects of two types of relationship between the words of a sentence or text – predictability and semantic similarity – by reanalysing electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) data from studies in which participants comprehend naturalistic stimuli. Each content word's predictability given previous words is quantified by a probabilistic language model, and semantic similarity to previous words is quantified by a distributional semantics model. Brain activity time-locked to each word is regressed on the two model-derived measures. Results show that predictability and semantic similarity have near identical N400 effects but are dissociated in the fMRI data, with word predictability related to activity in, among others, the visual word-form area, and semantic similarity related to activity in areas associated with the semantic network. This indicates that both predictability and similarity play a role during natural language comprehension and modulate distinct cortical regions.
  • Franken, M. K., Eisner, F., Schoffelen, J.-M., Acheson, D. J., Hagoort, P., & McQueen, J. M. (2017). Audiovisual recalibration of vowel categories. In Proceedings of Interspeech 2017 (pp. 655-658). doi:10.21437/Interspeech.2017-122.

    Abstract

    One of the most daunting tasks of a listener is to map a continuous auditory stream onto known speech sound categories and lexical items. A major issue with this mapping problem is the variability in the acoustic realizations of sound categories, both within and across speakers. Past research has suggested listeners may use visual information (e.g., lipreading) to calibrate these speech categories to the current speaker. Previous studies have focused on audiovisual recalibration of consonant categories. The present study explores whether vowel categorization, which is known to show less sharply defined category boundaries, also benefits from visual cues.
    Participants were exposed to videos of a speaker pronouncing one out of two vowels, paired with audio that was ambiguous between the two vowels. After exposure, it was found that participants had recalibrated their vowel categories. In addition, individual variability in audiovisual recalibration is discussed. It is suggested that listeners’ category sharpness may be related to the weight they assign to visual information in audiovisual speech perception. Specifically, listeners with less sharp categories assign more weight to visual information during audiovisual speech recognition.
  • Franken, M. K., Acheson, D. J., McQueen, J. M., Eisner, F., & Hagoort, P. (2017). Individual variability as a window on production-perception interactions in speech motor control. The Journal of the Acoustical Society of America, 142(4), 2007-2018. doi:10.1121/1.5006899.

    Abstract

    An important part of understanding speech motor control consists of capturing the interaction between speech production and speech perception. This study tests a prediction of theoretical frameworks that have tried to account for these interactions: if speech production targets are specified in auditory terms, individuals with better auditory acuity should have more precise speech targets, evidenced by decreased within-phoneme variability and increased between-phoneme distance. A study was carried out consisting of perception and production tasks in counterbalanced order. Auditory acuity was assessed using an adaptive speech discrimination task, while production variability was determined using a pseudo-word reading task. Analyses of the production data were carried out to quantify average within-phoneme variability as well as average between-phoneme contrasts. Results show that individuals not only vary in their production and perceptual abilities, but that better discriminators have more distinctive vowel production targets (that is, targets with less within-phoneme variability and greater between-phoneme distances), confirming the initial hypothesis. This association between speech production and perception did not depend on local phoneme density in vowel space. This study suggests that better auditory acuity leads to more precise speech production targets, which may be a consequence of auditory feedback affecting speech production over time.
  • Grabot, L., Kösem, A., Azizi, L., & Van Wassenhove, V. (2017). Prestimulus Alpha Oscillations and the Temporal Sequencing of Audio-visual Events. Journal of Cognitive Neuroscience, 29(9), 1566-1582. doi:10.1162/jocn_a_01145.

    Abstract

    Perceiving the temporal order of sensory events typically depends on participants' attentional state, thus likely on the endogenous fluctuations of brain activity. Using magnetoencephalography, we sought to determine whether spontaneous brain oscillations could disambiguate the perceived order of auditory and visual events presented in close temporal proximity, that is, at the individual's perceptual order threshold (Point of Subjective Simultaneity [PSS]). Two neural responses were found to index an individual's temporal order perception when contrasting brain activity as a function of perceived order (i.e., perceiving the sound first vs. perceiving the visual event first) given the same physical audiovisual sequence. First, average differences in prestimulus auditory alpha power indicated perceiving the correct ordering of audiovisual events irrespective of which sensory modality came first: a relatively low alpha power indicated perceiving auditory or visual first as a function of the actual sequence order. Additionally, the relative changes in the amplitude of the auditory (but not visual) evoked responses were correlated with participants' correct performance. Crucially, the sign of the magnitude difference in prestimulus alpha power and evoked responses between perceived audiovisual orders correlated with an individual's PSS. Taken together, our results suggest that spontaneous oscillatory activity cannot disambiguate subjective temporal order without prior knowledge of the individual's bias toward perceiving one or the other sensory modality first. Altogether, our results suggest that, under high perceptual uncertainty, the magnitude of prestimulus alpha (de)synchronization indicates the amount of compensation needed to overcome an individual's prior in the serial ordering and temporal sequencing of information.
  • Hagoort, P. (2017). It is the facts, stupid. In J. Brockman, F. Van der Wa, & H. Corver (Eds.), Wetenschappelijke parels: het belangrijkste wetenschappelijke nieuws volgens 193 'briljante geesten'. Amsterdam: Maven Press.
  • Hagoort, P. (2017). Don't forget neurobiology: An experimental approach to linguistic representation. Commentary on Branigan and Pickering "An experimental approach to linguistic representation". Behavioral and Brain Sciences, 40: e292. doi:10.1017/S0140525X17000401.

    Abstract

    Acceptability judgments are no longer acceptable as the holy grail for testing the nature of linguistic representations. Experimental and quantitative methods should be used to test theoretical claims in psycholinguistics. These methods should include not only behavior, but also the more recent possibilities to probe the neural codes for language-relevant representation.
  • Hagoort, P. (2017). The core and beyond in the language-ready brain. Neuroscience and Biobehavioral Reviews, 81, 194-204. doi:10.1016/j.neubiorev.2017.01.048.

    Abstract

    In this paper a general cognitive architecture of spoken language processing is specified. This is followed by an account of how this cognitive architecture is instantiated in the human brain. Both the spatial aspects of the networks for language are discussed, as well as the temporal dynamics and the underlying neurophysiology. A distinction is proposed between networks for coding/decoding linguistic information and additional networks for getting from coded meaning to speaker meaning, i.e. for making the inferences that enable the listener to understand the intentions of the speaker.
  • Hagoort, P. (2017). The neural basis for primary and acquired language skills. In E. Segers, & P. Van den Broek (Eds.), Developmental Perspectives in Written Language and Literacy: In honor of Ludo Verhoeven (pp. 17-28). Amsterdam: Benjamins. doi:10.1075/z.206.02hag.

    Abstract

    Reading is a cultural invention that needs to recruit cortical infrastructure that was not designed for it (cultural recycling of cortical maps). In the case of reading both visual cortex and networks for speech processing are recruited. Here I discuss current views on the neurobiological underpinnings of spoken language that deviate in a number of ways from the classical Wernicke-Lichtheim-Geschwind model. More areas than Broca’s and Wernicke’s region are involved in language. Moreover, a division along the axis of language production and language comprehension does not seem to be warranted. Instead, for central aspects of language processing neural infrastructure is shared between production and comprehension. Arguments are presented in favor of a dynamic network view, in which the functionality of a region is co-determined by the network of regions in which it is embedded at particular moments in time. Finally, core regions of language processing need to interact with other networks (e.g. the attentional networks and the ToM network) to establish full functionality of language and communication. The consequences of this architecture for reading are discussed.
  • Hartung, F. (2017). Getting under your skin: The role of perspective and simulation of experience in narrative comprehension. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Hartung, F., Hagoort, P., & Willems, R. M. (2017). Readers select a comprehension mode independent of pronoun: Evidence from fMRI during narrative comprehension. Brain and Language, 170, 29-38. doi:10.1016/j.bandl.2017.03.007.

    Abstract

    Perspective is a crucial feature for communicating about events. Yet it is unclear how linguistically encoded perspective relates to cognitive perspective taking. Here, we tested the effect of perspective taking with short literary stories. Participants listened to stories with 1st or 3rd person pronouns referring to the protagonist, while undergoing fMRI. When comparing action events with 1st and 3rd person pronouns, we found no evidence for a neural dissociation depending on the pronoun. A split sample approach based on the self-reported experience of perspective taking revealed 3 comprehension preferences. One group showed a strong 1st person preference, another a strong 3rd person preference, while a third group engaged in 1st and 3rd person perspective taking simultaneously. Comparing brain activations of the groups revealed different neural networks. Our results suggest that comprehension is perspective-dependent, driven not by the perspective suggested by the text but by the reader’s (situational) preference.
  • Hartung, F., Withers, P., Hagoort, P., & Willems, R. M. (2017). When fiction is just as real as fact: No differences in reading behavior between stories believed to be based on true or fictional events. Frontiers in Psychology, 8: 1618. doi:10.3389/fpsyg.2017.01618.

    Abstract

    Experiments have shown that compared to fictional texts, readers read factual texts faster and have better memory for described situations. Reading fictional texts, on the other hand, seems to improve memory for exact wordings and expressions. Most of these studies used a ‘newspaper’ versus ‘literature’ comparison. In the present study, we investigated the effect of readers’ expectations as to whether information is true or fictional with a subtler manipulation, by labelling short stories as either based on true or fictional events. In addition, we tested whether narrative perspective or individual preference in perspective taking affects reading true or fictional stories differently. In an online experiment, participants (final N=1742) read one story which was introduced as based on true events or as fictional (factor fictionality). The story could be narrated in either 1st or 3rd person perspective (factor perspective). We measured immersion in and appreciation of the story, perspective taking, as well as memory for events. We found no evidence that knowing a story is fictional or based on true events influences reading behavior or experiential aspects of reading. We suggest that it is not whether a story is true or fictional, but rather expectations towards certain reading situations (e.g. reading a newspaper or literature) which affect behavior by activating appropriate reading goals. Results further confirm that narrative perspective partially influences perspective taking and experiential aspects of reading.
  • Heyselaar, E., Hagoort, P., & Segaert, K. (2017). How social opinion influences syntactic processing – An investigation using virtual reality. PLoS One, 12(4): e0174405. doi:10.1371/journal.pone.0174405.
  • Heyselaar, E., Hagoort, P., & Segaert, K. (2017). In dialogue with an avatar, language behavior is identical to dialogue with a human partner. Behavior Research Methods, 49(1), 46-60. doi:10.3758/s13428-015-0688-7.

    Abstract

    The use of virtual reality (VR) as a methodological tool is becoming increasingly popular in behavioral research as its flexibility allows for a wide range of applications. This new method has not been as widely accepted in the field of psycholinguistics, however, possibly due to the assumption that language processing during human-computer interactions does not accurately reflect human-human interactions. Yet at the same time there is a growing need to study human-human language interactions in a tightly controlled context, which has not been possible using existing methods. VR, however, offers experimental control over parameters that cannot be (as finely) controlled in the real world. As such, in this study we aim to show that human-computer language interaction is comparable to human-human language interaction in virtual reality. In the current study we compare participants’ language behavior in a syntactic priming task with human versus computer partners: we used a human partner, a human-like avatar with human-like facial expressions and verbal behavior, and a computer-like avatar which had this humanness removed. As predicted, our study shows comparable priming effects between the human and human-like avatar suggesting that participants attributed human-like agency to the human-like avatar. Indeed, when interacting with the computer-like avatar, the priming effect was significantly decreased. This suggests that when interacting with a human-like avatar, sentence processing is comparable to interacting with a human partner. Our study therefore shows that VR is a valid platform for conducting language research and studying dialogue interactions in an ecologically valid manner.
  • Heyselaar, E. (2017). Influences on the magnitude of syntactic priming. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Heyselaar, E., Segaert, K., Walvoort, S. J., Kessels, R. P., & Hagoort, P. (2017). The role of nondeclarative memory in the skill for language: Evidence from syntactic priming in patients with amnesia. Neuropsychologia, 101, 97-105. doi:10.1016/j.neuropsychologia.2017.04.033.

    Abstract

    Syntactic priming, the phenomenon in which participants adopt the linguistic behaviour of their partner, is widely used in psycholinguistics to investigate syntactic operations. Although the phenomenon of syntactic priming is well documented, the memory system that supports the retention of this syntactic information long enough to influence future utterances, is not as widely investigated. We aim to shed light on this issue by assessing patients with Korsakoff's amnesia on an active-passive syntactic priming task and compare their performance to controls matched in age, education, and premorbid intelligence. Patients with Korsakoff's syndrome display deficits in all subdomains of declarative memory, yet their nondeclarative memory remains intact, making them an ideal patient group to determine which memory system supports syntactic priming. In line with the hypothesis that syntactic priming relies on nondeclarative memory, the patient group shows strong priming tendencies (12.6% passive structure repetition). Our healthy control group did not show a priming tendency, presumably due to cognitive interference between declarative and nondeclarative memory. We discuss the results in relation to amnesia, aging, and compensatory mechanisms.
  • Hirschmann, J., Schoffelen, J.-M., Schnitzler, A., & Van Gerven, M. A. J. (2017). Parkinsonian rest tremor can be detected accurately based on neuronal oscillations recorded from the subthalamic nucleus. Clinical Neurophysiology, 128, 2029-2036. doi:10.1016/j.clinph.2017.07.419.

    Abstract

    Objective: To investigate the possibility of tremor detection based on deep brain activity.
    Methods: We re-analyzed recordings of local field potentials (LFPs) from the subthalamic nucleus in 10
    PD patients (12 body sides) with spontaneously fluctuating rest tremor. Power in several frequency bands
    was estimated and used as input to Hidden Markov Models (HMMs) which classified short data segments
    as either tremor-free rest or rest tremor. HMMs were compared to direct threshold application to individual
    power features.
    Results: Applying a threshold directly to band-limited power was insufficient for tremor detection (mean
    area under the curve [AUC] of receiver operating characteristic: 0.64, STD: 0.19). Multi-feature HMMs, in
    contrast, allowed for accurate detection (mean AUC: 0.82, STD: 0.15), using four power features obtained
    from a single contact pair. Within-patient training yielded better accuracy than across-patient training
    (0.84 vs. 0.78, p = 0.03), yet tremor could often be detected accurately with either approach. High frequency
    oscillations (>200 Hz) were the best performing individual feature.
    Conclusions: LFP-based markers of tremor are robust enough to allow for accurate tremor detection in
    short data segments, provided that appropriate statistical models are used.
    Significance: LFP-based markers of tremor could be useful control signals for closed-loop deep brain
    stimulation.
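    The segment-wise classification described in this abstract (band-limited power features fed to an HMM that labels each short segment as tremor-free rest or rest tremor) can be illustrated with a minimal Viterbi decoder. This is a sketch, not the paper's pipeline: it uses a single hand-set power feature and illustrative Gaussian emission and transition parameters, whereas the study trained multi-feature HMMs on real subthalamic LFP recordings.

    ```python
    import math

    def gauss_logpdf(x, mean, std):
        """Log-density of a 1-D Gaussian emission."""
        return -0.5 * math.log(2 * math.pi * std**2) - (x - mean)**2 / (2 * std**2)

    def viterbi(obs, means, stds, log_trans, log_init):
        """Most likely state sequence for a 1-D Gaussian-emission HMM.
        States: 0 = tremor-free rest, 1 = rest tremor."""
        n = len(means)
        # scores[s] = best log-probability of any path ending in state s
        scores = [log_init[s] + gauss_logpdf(obs[0], means[s], stds[s]) for s in range(n)]
        back = []
        for x in obs[1:]:
            prev, ptr, scores = scores[:], [], []
            for s in range(n):
                best = max(range(n), key=lambda r: prev[r] + log_trans[r][s])
                ptr.append(best)
                scores.append(prev[best] + log_trans[best][s] + gauss_logpdf(x, means[s], stds[s]))
            back.append(ptr)
        # backtrack from the best final state
        state = max(range(n), key=lambda s: scores[s])
        path = [state]
        for ptr in reversed(back):
            state = ptr[state]
            path.append(state)
        return path[::-1]

    # Illustrative parameters (not from the paper): tremor segments show
    # higher band-limited power; "sticky" transitions model slow fluctuation.
    means, stds = [1.0, 3.0], [0.8, 0.8]
    log_trans = [[math.log(0.9), math.log(0.1)], [math.log(0.1), math.log(0.9)]]
    log_init = [math.log(0.5), math.log(0.5)]

    power = [1.1, 0.9, 1.3, 3.2, 2.9, 3.1, 1.0]  # toy power feature per segment
    print(viterbi(power, means, stds, log_trans, log_init))  # → [0, 0, 0, 1, 1, 1, 0]
    ```

    The sticky transition matrix is what distinguishes this from direct thresholding: an isolated borderline segment is absorbed into the surrounding state rather than flipping the label.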
  • Ito, A., Martin, A. E., & Nieuwland, M. S. (2017). How robust are prediction effects in language comprehension? Failure to replicate article-elicited N400 effects. Language, Cognition and Neuroscience, 32, 954-965. doi:10.1080/23273798.2016.1242761.

    Abstract

    Current psycholinguistic theory proffers prediction as a central, explanatory mechanism in language
    processing. However, widely-replicated prediction effects may not mean that prediction is
    necessary in language processing. As a case in point, C. D. Martin et al. [2013. Bilinguals reading
    in their second language do not predict upcoming words as native readers do.
    Journal of Memory and Language, 69(4), 574–588. doi:10.1016/j.jml.2013.08.001] reported ERP evidence for
    prediction in native- but not in non-native speakers. Articles mismatching an expected noun
    elicited larger negativity in the N400 time window compared to articles matching the expected
    noun in native speakers only. We attempted to replicate these findings, but found no evidence
    for prediction irrespective of language nativeness. We argue that pre-activation of phonological
    form of upcoming nouns, as evidenced in article-elicited effects, may not be a robust
    phenomenon. A view of prediction as a necessary computation in language comprehension
    must be re-evaluated.
  • Ito, A., Martin, A. E., & Nieuwland, M. S. (2017). Why the A/AN prediction effect may be hard to replicate: A rebuttal to DeLong, Urbach & Kutas (2017). Language, Cognition and Neuroscience, 32(8), 974-983. doi:10.1080/23273798.2017.1323112.
  • Kita, S., Alibali, M. W., & Chu, M. (2017). How Do Gestures Influence Thinking and Speaking? The Gesture-for-Conceptualization Hypothesis. Psychological Review, 124(3), 245-266. doi:10.1037/rev0000059.

    Abstract

    People spontaneously produce gestures during speaking and thinking. The authors focus here on gestures that depict or indicate information related to the contents of concurrent speech or thought (i.e., representational gestures). Previous research indicates that such gestures have not only communicative functions, but also self-oriented cognitive functions. In this article, the authors propose a new theoretical framework, the gesture-for-conceptualization hypothesis, which explains the self-oriented functions of representational gestures. According to this framework, representational gestures affect cognitive processes in four main ways: gestures activate, manipulate, package, and explore spatio-motoric information for speaking and thinking. These four functions are shaped by gesture's ability to schematize information, that is, to focus on a small subset of available information that is potentially relevant to the task at hand. The framework is based on the assumption that gestures are generated from the same system that generates practical actions, such as object manipulation; however, gestures are distinct from practical actions in that they represent information. The framework provides a novel, parsimonious, and comprehensive account of the self-oriented functions of gestures. The authors discuss how the framework accounts for gestures that depict abstract or metaphoric content, and they consider implications for the relations between self-oriented and communicative functions of gestures.
  • Kösem, A., & Van Wassenhove, V. (2017). Distinct contributions of low and high frequency neural oscillations to speech comprehension. Language, Cognition and Neuroscience, 32(5), 536-544. doi:10.1080/23273798.2016.1238495.

    Abstract

    In the last decade, the involvement of neural oscillatory mechanisms in speech comprehension has been increasingly investigated. Current evidence suggests that low-frequency and high-frequency neural entrainment to the acoustic dynamics of speech are linked to its analysis. One crucial question is whether acoustical processing primarily modulates neural entrainment, or whether entrainment instead reflects linguistic processing. Here, we review studies investigating the effect of linguistic manipulations on neural oscillatory activity. In light of the current findings, we argue that theta (3–8 Hz) entrainment may primarily reflect the analysis of the acoustic features of speech. In contrast, recent evidence suggests that delta (1–3 Hz) and high-frequency activity (>40 Hz) are reliable indicators of perceived linguistic representations. The interdependence between low-frequency and high-frequency neural oscillations, as well as their causal role in speech comprehension, is further discussed with regard to neurophysiological models of speech processing.
  • Kunert, R., & Jongman, S. R. (2017). Entrainment to an auditory signal: Is attention involved? Journal of Experimental Psychology: General, 146(1), 77-88. doi:10.1037/xge0000246.

    Abstract

    Many natural auditory signals, including music and language, change periodically. The effect of such auditory rhythms on the brain is unclear however. One widely held view, dynamic attending theory, proposes that the attentional system entrains to the rhythm and increases attention at moments of rhythmic salience. In support, 2 experiments reported here show reduced response times to visual letter strings shown at auditory rhythm peaks, compared with rhythm troughs. However, we argue that an account invoking the entrainment of general attention should further predict rhythm entrainment to also influence memory for visual stimuli. In 2 pseudoword memory experiments we find evidence against this prediction. Whether a pseudoword is shown during an auditory rhythm peak or not is irrelevant for its later recognition memory in silence. Other attention manipulations, dividing attention and focusing attention, did result in a memory effect. This raises doubts about the suggested attentional nature of rhythm entrainment. We interpret our findings as support for auditory rhythm perception being based on auditory-motor entrainment, not general attention entrainment.
  • Kunert, R. (2017). Music and language comprehension in the brain. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Lam, N. H. L. (2017). Comprehending comprehension: Insights from neuronal oscillations on the neuronal basis of language. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Lam, K. J. Y., Bastiaansen, M. C. M., Dijkstra, T., & Rueschemeyer, S. A. (2017). Making sense: motor activation and action plausibility during sentence processing. Language, Cognition and Neuroscience, 32(5), 590-600. doi:10.1080/23273798.2016.1164323.

    Abstract

    The current electroencephalography study investigated the relationship between the motor and (language) comprehension systems by simultaneously measuring mu and N400 effects. Specifically, we examined whether the pattern of motor activation elicited by verbs depends on the larger sentential context. A robust N400 congruence effect confirmed the contextual manipulation of action plausibility, a form of semantic congruency. Importantly, this study showed that: (1) Action verbs elicited more mu power decrease than non-action verbs when sentences described plausible actions. Action verbs thus elicited more motor activation than non-action verbs. (2) In contrast, when sentences described implausible actions, mu activity was present but the difference between the verb types was not observed. The increased processing associated with a larger N400 thus coincided with mu activity in sentences describing implausible actions. Altogether, context-dependent motor activation appears to play a functional role in deriving context-sensitive meaning.
  • Lewis, A. G., Schoffelen, J.-M., Hoffmann, C., Bastiaansen, M. C. M., & Schriefers, H. (2017). Discourse-level semantic coherence influences beta oscillatory dynamics and the N400 during sentence comprehension. Language, Cognition and Neuroscience, 32(5), 601-617. doi:10.1080/23273798.2016.1211300.

    Abstract

    In this study, we used electroencephalography to investigate the influence of discourse-level semantic coherence on electrophysiological signatures of local sentence-level processing. Participants read groups of four sentences that could either form coherent stories or were semantically unrelated. For semantically coherent discourses compared to incoherent ones, the N400 was smaller at sentences 2–4, while the visual N1 was larger at the third and fourth sentences. Oscillatory activity in the beta frequency range (13–21 Hz) was higher for coherent discourses. We relate the N400 effect to a disruption of local sentence-level semantic processing when sentences are unrelated. Our beta findings can be tentatively related to disruption of local sentence-level syntactic processing, but it cannot be fully ruled out that they are instead (or also) related to disrupted local sentence-level semantic processing. We conclude that manipulating discourse-level semantic coherence does have an effect on oscillatory power related to local sentence-level processing.
  • Lewis, A. G. (2017). Explorations of beta-band neural oscillations during language comprehension: Sentence processing and beyond. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Lockwood, G. (2017). Talking sense: The behavioural and neural correlates of sound symbolism. PhD Thesis, Radboud University, Nijmegen.
  • Lopopolo, A., Frank, S. L., Van den Bosch, A., & Willems, R. M. (2017). Using stochastic language models (SLM) to map lexical, syntactic, and phonological information processing in the brain. PLoS One, 12(5): e0177794. doi:10.1371/journal.pone.0177794.

    Abstract

    Language comprehension involves the simultaneous processing of information at the phonological, syntactic, and lexical level. We track these three distinct streams of information in the brain by using stochastic measures derived from computational language models to detect neural correlates of phoneme, part-of-speech, and word processing in an fMRI experiment. Probabilistic language models have proven to be useful tools for studying how language is processed as a sequence of symbols unfolding in time. Conditional probabilities between sequences of words are at the basis of probabilistic measures such as surprisal and perplexity which have been successfully used as predictors of several behavioural and neural correlates of sentence processing. Here we computed perplexity from sequences of words and their parts of speech, and their phonemic transcriptions. Brain activity time-locked to each word is regressed on the three model-derived measures. We observe that the brain keeps track of the statistical structure of lexical, syntactic and phonological information in distinct areas.

    Additional information

    Data availability
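    The probabilistic measures this abstract builds on (conditional probabilities between word sequences, and surprisal and perplexity derived from them) can be sketched in a few lines for a trigram model. This is an illustrative toy, not the models used in the study: the corpus, the add-one smoothing, and all function names are assumptions for demonstration.

    ```python
    import math
    from collections import Counter

    def train_trigram(tokens):
        """Collect trigram and bigram counts plus the vocabulary from a token list."""
        tri = Counter(zip(tokens, tokens[1:], tokens[2:]))
        bi = Counter(zip(tokens, tokens[1:]))
        return tri, bi, set(tokens)

    def cond_prob(word, ctx, tri, bi, vocab):
        """P(word | two preceding tokens), with add-one smoothing (an illustrative choice)."""
        return (tri[ctx + (word,)] + 1) / (bi[ctx] + len(vocab))

    def surprisal(word, ctx, tri, bi, vocab):
        """Word surprisal in bits: -log2 P(word | context)."""
        return -math.log2(cond_prob(word, ctx, tri, bi, vocab))

    def perplexity(tokens, tri, bi, vocab):
        """Per-word perplexity of a token sequence: 2 ** mean surprisal."""
        bits = [surprisal(tokens[i + 2], (tokens[i], tokens[i + 1]), tri, bi, vocab)
                for i in range(len(tokens) - 2)]
        return 2 ** (sum(bits) / len(bits))

    corpus = "the dog barks and the dog barks and the cat meows".split()
    tri, bi, vocab = train_trigram(corpus)

    # An attested continuation is less surprising than an unattested one.
    print(surprisal("barks", ("the", "dog"), tri, bi, vocab))  # ≈ 1.42 bits
    print(surprisal("meows", ("the", "dog"), tri, bi, vocab))  # 3.0 bits
    ```

    In the fMRI analysis described above, a per-word measure of this kind is the regressor against which word-locked brain activity is modeled.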
  • Martin, A. E., Huettig, F., & Nieuwland, M. S. (2017). Can structural priming answer the important questions about language? A commentary on Branigan and Pickering "An experimental approach to linguistic representation". Behavioral and Brain Sciences, 40: e304. doi:10.1017/S0140525X17000528.

    Abstract

    While structural priming makes a valuable contribution to psycholinguistics, it does not allow direct observation of representation, nor escape “source ambiguity.” Structural priming taps into implicit memory representations and processes that may differ from what is used online. We question whether implicit memory for language can and should be equated with linguistic representation or with language processing.
  • Nieuwland, M. S., & Martin, A. E. (2017). Neural oscillations and a nascent corticohippocampal theory of reference. Journal of Cognitive Neuroscience, 29(5), 896-910. doi:10.1162/jocn_a_01091.

    Abstract

    The ability to use words to refer to the world is vital to the communicative power of human language. In particular, the anaphoric use of words to refer to previously mentioned concepts (antecedents) allows dialogue to be coherent and meaningful. Psycholinguistic theory posits that anaphor comprehension involves reactivating a memory representation of the antecedent. Whereas this implies the involvement of recognition memory, or the mnemonic sub-routines by which people distinguish old from new, the neural processes for reference resolution are largely unknown. Here, we report time-frequency analysis of four EEG experiments to reveal the increased coupling of functional neural systems associated with referentially coherent expressions compared to referentially problematic expressions. Despite varying in modality, language, and type of referential expression, all experiments showed larger gamma-band power for referentially coherent expressions compared to referentially problematic expressions. Beamformer analysis in high-density Experiment 4 localised the gamma-band increase to posterior parietal cortex around 400-600 ms after anaphor-onset and to frontal-temporal cortex around 500-1000 ms. We argue that the observed gamma-band power increases reflect successful referential binding and resolution, which links incoming information to antecedents through an interaction between the brain’s recognition memory networks and frontal-temporal language network. We integrate these findings with previous results from patient and neuroimaging studies, and we outline a nascent cortico-hippocampal theory of reference.
  • Peeters, D., Snijders, T. M., Hagoort, P., & Ozyurek, A. (2017). Linking language to the visual world: Neural correlates of comprehending verbal reference to objects through pointing and visual cues. Neuropsychologia, 95, 21-29. doi:10.1016/j.neuropsychologia.2016.12.004.

    Abstract

    In everyday communication speakers often refer in speech and/or gesture to objects in their immediate environment, thereby shifting their addressee's attention to an intended referent. The neurobiological infrastructure involved in the comprehension of such basic multimodal communicative acts remains unclear. In an event-related fMRI study, we presented participants with pictures of a speaker and two objects while they concurrently listened to her speech. In each picture, one of the objects was singled out, either through the speaker's index-finger pointing gesture or through a visual cue that made the object perceptually more salient in the absence of gesture. A mismatch (compared to a match) between speech and the object singled out by the speaker's pointing gesture led to enhanced activation in left IFG and bilateral pMTG, showing the importance of these areas in conceptual matching between speech and referent. Moreover, a match (compared to a mismatch) between speech and the object made salient through a visual cue led to enhanced activation in the mentalizing system, arguably reflecting an attempt to converge on a jointly attended referent in the absence of pointing. These findings shed new light on the neurobiological underpinnings of the core communicative process of comprehending a speaker's multimodal referential act and stress the power of pointing as an important natural device to link speech to objects.
  • Rommers, J., Dickson, D. S., Norton, J. J. S., Wlotko, E. W., & Federmeier, K. D. (2017). Alpha and theta band dynamics related to sentential constraint and word expectancy. Language, Cognition and Neuroscience, 32(5), 576-589. doi:10.1080/23273798.2016.1183799.

    Abstract

    Despite strong evidence for prediction during language comprehension, the underlying
    mechanisms, and the extent to which they are specific to language, remain unclear. Re-analysing
    an event-related potentials study, we examined responses in the time-frequency domain to
    expected and unexpected (but plausible) words in strongly and weakly constraining sentences,
    and found results similar to those reported in nonverbal domains. Relative to expected words,
    unexpected words elicited an increase in the theta band (4–7 Hz) in strongly constraining
    contexts, suggesting the involvement of control processes to deal with the consequences of
    having a prediction disconfirmed. Prior to critical word onset, strongly constraining sentences
    exhibited a decrease in the alpha band (8–12 Hz) relative to weakly constraining sentences,
    suggesting that comprehenders can take advantage of predictive sentence contexts to prepare
    for the input. The results suggest that the brain recruits domain-general preparation and control
    mechanisms when making and assessing predictions during sentence comprehension.
  • Rommers, J., Meyer, A. S., & Praamstra, P. (2017). Lateralized electrical brain activity reveals covert attention allocation during speaking. Neuropsychologia, 95, 101-110. doi:10.1016/j.neuropsychologia.2016.12.013.

    Abstract

    Speakers usually begin to speak while only part of the utterance has been planned. Earlier work has shown that speech planning processes are reflected in speakers’ eye movements as they describe visually presented objects. However, to-be-named objects can be processed to some extent before they have been fixated upon, presumably because attention can be allocated to objects covertly, without moving the eyes. The present study investigated whether EEG could track speakers’ covert attention allocation as they produced short utterances to describe pairs of objects (e.g., “dog and chair”). The processing difficulty of each object was varied by presenting it in upright orientation (easy) or in upside down orientation (difficult). Background squares flickered at different frequencies in order to elicit steady-state visual evoked potentials (SSVEPs). The N2pc component, associated with the focusing of attention on an item, was detectable not only prior to speech onset, but also during speaking. The time course of the N2pc showed that attention shifted to each object in the order of mention prior to speech onset. Furthermore, greater processing difficulty increased the time speakers spent attending to each object. This demonstrates that the N2pc can track covert attention allocation in a naming task. In addition, an effect of processing difficulty at around 200–350 ms after stimulus onset revealed early attention allocation to the second to-be-named object. The flickering backgrounds elicited SSVEPs, but SSVEP amplitude was not influenced by processing difficulty. These results help complete the picture of the coordination of visual information uptake and motor output during speaking.
  • Schoffelen, J.-M., Hulten, A., Lam, N. H. L., Marquand, A. F., Udden, J., & Hagoort, P. (2017). Frequency-specific directed interactions in the human brain network for language. Proceedings of the National Academy of Sciences of the United States of America, 114(30), 8083-8088. doi:10.1073/pnas.1703155114.

    Abstract

    The brain’s remarkable capacity for language requires bidirectional interactions between functionally specialized brain regions. We used magnetoencephalography to investigate interregional interactions in the brain network for language while 102 participants were reading sentences. Using Granger causality analysis, we identified inferior frontal cortex and anterior temporal regions to receive widespread input and middle temporal regions to send widespread output. This fits well with the notion that these regions play a central role in language processing. Characterization of the functional topology of this network, using data-driven matrix factorization, which allowed for partitioning into a set of subnetworks, revealed directed connections at distinct frequencies of interaction. Connections originating from temporal regions peaked at alpha frequency, whereas connections originating from frontal and parietal regions peaked at beta frequency. These findings indicate that the information flow between language-relevant brain areas, which is required for linguistic processing, may depend on the contributions of distinct brain rhythms.

    Additional information

    pnas.201703155SI.pdf
