Publications

  • Aparicio, X., Heidlmayr, K., & Isel, F. (2017). Inhibition efficiency in highly proficient bilinguals and simultaneous interpreters: Evidence from language switching and Stroop tasks. Journal of Psycholinguistic Research, 46, 1427-1451. doi:10.1007/s10936-017-9501-3.

    Abstract

    The present behavioral study aimed to examine the impact of language control expertise on two domain-general control processes, i.e. active inhibition of competing representations and overcoming of inhibition. We compared how Simultaneous Interpreters (SIs) and Highly Proficient Bilinguals—two groups assumed to differ in language control capacity—performed executive tasks involving specific inhibition processes. In Experiment 1 (language decision task), both active inhibition and overcoming of inhibition are involved, while in Experiment 2 (bilingual Stroop task) only interference suppression is supposed to be required. The results of Experiment 1 showed a language switching effect only for the highly proficient bilinguals, potentially because overcoming of inhibition requires more cognitive resources in this group than in SIs. Nevertheless, both groups performed similarly on the Stroop task in Experiment 2, which suggests that active inhibition may work similarly in both groups. These contrasting results suggest that overcoming of inhibition may be harder to master than active inhibition. Taken together, these data indicate that some executive control processes may be less sensitive to the degree of expertise in bilingual language control than others. Our findings lend support to psycholinguistic models of bilingualism postulating a higher-order mechanism regulating language activation.
  • Armeni, K., Willems, R. M., & Frank, S. (2017). Probabilistic language models in cognitive neuroscience: Promises and pitfalls. Neuroscience and Biobehavioral Reviews, 83, 579-588. doi:10.1016/j.neubiorev.2017.09.001.

    Abstract

    Cognitive neuroscientists of language comprehension study how neural computations relate to cognitive computations during comprehension. On the cognitive part of the equation, it is important that the computations and processing complexity are explicitly defined. Probabilistic language models can be used to give a computationally explicit account of language complexity during comprehension. Whereas such models have so far predominantly been evaluated against behavioral data, only recently have the models been used to explain neurobiological signals. Measures obtained from these models emphasize the probabilistic, information-processing view of language understanding and provide a set of tools that can be used for testing neural hypotheses about language comprehension. Here, we provide a cursory review of the theoretical foundations and example neuroimaging studies employing probabilistic language models. We highlight the advantages and potential pitfalls of this approach and indicate avenues for future research.
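    The central quantity such models supply is word-level surprisal, -log2 P(word | context). As a rough illustration only (not from the paper), here is a toy bigram model with add-one smoothing; the corpus and words are invented, whereas the studies reviewed train on large corpora:

```python
import math
from collections import Counter

# Toy corpus; real probabilistic language models are trained on large corpora.
corpus = "the dog chased the cat the cat saw the dog".split()

# Bigram and context counts, with add-one smoothing over the observed vocabulary.
vocab = set(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))
contexts = Counter(corpus[:-1])

def surprisal(prev, word):
    """Surprisal in bits: -log2 P(word | prev)."""
    p = (bigrams[(prev, word)] + 1) / (contexts[prev] + len(vocab))
    return -math.log2(p)

# A predictable continuation carries less surprisal than an unexpected one.
print(surprisal("the", "dog"))  # lower
print(surprisal("the", "saw"))  # higher
```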
  • De Boer, M., Kokal, I., Blokpoel, M., Liu, R., Stolk, A., Roelofs, K., Van Rooij, I., & Toni, I. (2017). Oxytocin modulates human communication by enhancing cognitive exploration. Psychoneuroendocrinology, 86, 64-72. doi:10.1016/j.psyneuen.2017.09.010.

    Abstract

    Oxytocin is a neuropeptide known to influence how humans share material resources. Here we explore whether oxytocin influences how we share knowledge. We focus on two distinguishing features of human communication, namely the ability to select communicative signals that disambiguate the many-to-many mappings that exist between a signal’s form and meaning, and adjustments of those signals to the presumed cognitive characteristics of the addressee (“audience design”). Fifty-five males participated in a randomized, double-blind, placebo-controlled experiment involving the intranasal administration of oxytocin. The participants produced novel non-verbal communicative signals towards two different addressees, an adult or a child, in an experimentally-controlled live interactive setting. We found that oxytocin administration drives participants to generate signals of higher referential quality, i.e. signals that disambiguate more communicative problems, and to rapidly adjust those communicative signals to what the addressee understands. The combined effects of oxytocin on referential quality and audience design fit with the notion that oxytocin administration leads participants to explore more pervasively both behaviors that can convey their intention and diverse models of the addressees. These findings suggest that, besides affecting prosocial drive and salience of social cues, oxytocin influences how we share knowledge by promoting cognitive exploration.
  • Bouhali, F., Mongelli, V., & Cohen, L. (2017). Musical literacy shifts asymmetries in the ventral visual cortex. NeuroImage, 156, 445-455. doi:10.1016/j.neuroimage.2017.04.027.

    Abstract

    The acquisition of literacy has a profound impact on the functional specialization and lateralization of the visual cortex. Due to the overall lateralization of the language network, specialization for printed words develops in the left occipitotemporal cortex, allegedly inducing a secondary shift of visual face processing to the right, in literate as compared to illiterate subjects. Applying the same logic to the acquisition of high-level musical literacy, we predicted that, in musicians as compared to non-musicians, occipitotemporal activations should show a leftward shift for music reading, and an additional rightward push for face perception. To test these predictions, professional musicians and non-musicians viewed pictures of musical notation, faces, words, tools and houses in the MRI scanner, and laterality was assessed in the ventral stream combining ROI and voxel-based approaches. The results supported both predictions, and allowed us to locate the leftward shift to the inferior temporal gyrus and the rightward shift to the fusiform cortex. Moreover, these laterality shifts generalized to categories other than music and faces. Finally, correlation measures across subjects did not support a causal link between the leftward and rightward shifts. Thus, the acquisition of an additional perceptual expertise extensively modifies the laterality pattern in the visual system.

  • Bulut, T., Hung, Y., Tzeng, O., & Wu, D. (2017). Neural correlates of processing sentences and compound words in Chinese. PLOS ONE, 12(12): e0188526. doi:10.1371/journal.pone.0188526.
  • Coco, M. I., Araujo, S., & Petersson, K. M. (2017). Disentangling stimulus plausibility and contextual congruency: Electro-physiological evidence for differential cognitive dynamics. Neuropsychologia, 96, 150-163. doi:10.1016/j.neuropsychologia.2016.12.008.

    Abstract

    Expectancy mechanisms are routinely used by the cognitive system in stimulus processing and in anticipation of appropriate responses. Electrophysiology research has documented negative shifts of brain activity when expectancies are violated within a local stimulus context (e.g., reading an implausible word in a sentence) or more globally between consecutive stimuli (e.g., a narrative of images with an incongruent end). In this EEG study, we examine the interaction between expectancies operating at the local level of stimulus plausibility and at the more global level of contextual congruency to provide evidence for, or against, a dissociation of the underlying processing mechanisms. We asked participants to verify the congruency of pairs of cross-modal stimuli (a sentence and a scene), which varied in plausibility. ANOVAs on ERP amplitudes in selected windows of interest show that congruency violation has longer-lasting (from 100 to 500 ms) and more widespread effects than plausibility violation (from 200 to 400 ms). We also observed critical interactions between these factors, whereby incongruent and implausible pairs elicited stronger negative shifts than their congruent counterparts, both early on (100–200 ms) and between 400–500 ms. Our results suggest that the integration mechanisms are sensitive to both global and local effects of expectancy in a modality-independent manner. Overall, we provide novel insights into the interdependence of expectancy during meaning integration of cross-modal stimuli in a verification task.
  • Dai, B., McQueen, J. M., Hagoort, P., & Kösem, A. (2017). Pure linguistic interference during comprehension of competing speech signals. The Journal of the Acoustical Society of America, 141, EL249-EL254. doi:10.1121/1.4977590.

    Abstract

    Speech-in-speech perception can be challenging because the processing of competing acoustic and linguistic information leads to informational masking. Here, a method is proposed to isolate the linguistic component of informational masking while keeping the distractor's acoustic information unchanged. Participants performed a dichotic listening cocktail-party task before and after training on 4-band noise-vocoded sentences that became intelligible through the training. Distracting noise-vocoded speech interfered more with target speech comprehension after training (i.e., when intelligible) than before training (i.e., when unintelligible) at −3 dB SNR. These findings confirm that linguistic and acoustic information have distinct masking effects during speech-in-speech comprehension.
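    The −3 dB SNR condition means the distractor is presented with twice the power of the target. As a minimal sketch (not the authors' code), a masker can be scaled to hit a desired SNR before mixing; the signals below are synthetic stand-ins:

```python
import math
import random

random.seed(0)

def rms(x):
    """Root-mean-square amplitude of a signal."""
    return math.sqrt(sum(s * s for s in x) / len(x))

def mix_at_snr(target, masker, snr_db):
    """Scale the masker so that 20*log10(rms(target)/rms(masker)) == snr_db,
    then add the two signals sample by sample."""
    gain = rms(target) / rms(masker) / (10 ** (snr_db / 20))
    return [t + gain * m for t, m in zip(target, masker)]

# Synthetic stand-ins for target speech and a noise-like distractor.
target = [math.sin(2 * math.pi * 220 * n / 16000) for n in range(16000)]
masker = [random.gauss(0, 1) for _ in range(16000)]

mixture = mix_at_snr(target, masker, snr_db=-3.0)
```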
  • Fitz, H., & Chang, F. (2017). Meaningful questions: The acquisition of auxiliary inversion in a connectionist model of sentence production. Cognition, 166, 225-250. doi:10.1016/j.cognition.2017.05.008.

    Abstract

    Nativist theories have argued that language involves syntactic principles which are unlearnable from the input children receive. A paradigm case of these innate principles is the structure dependence of auxiliary inversion in complex polar questions (Chomsky, 1968, 1975, 1980). Computational approaches have focused on the properties of the input in explaining how children acquire these questions. In contrast, we argue that messages are structured in a way that supports structure dependence in syntax. We demonstrate this approach within a connectionist model of sentence production (Chang, 2009) which learned to generate a range of complex polar questions from a structured message without positive exemplars in the input. The model also generated different types of error in development that were similar in magnitude to those in children (e.g., auxiliary doubling, Ambridge, Rowland, & Pine, 2008; Crain & Nakayama, 1987). Through model comparisons we trace how meaning constraints and linguistic experience interact during the acquisition of auxiliary inversion. Our results suggest that auxiliary inversion rules in English can be acquired without innate syntactic principles, as long as it is assumed that speakers who ask complex questions express messages that are structured into multiple propositions
  • Frank, S. L., & Willems, R. M. (2017). Word predictability and semantic similarity show distinct patterns of brain activity during language comprehension. Language, Cognition and Neuroscience, 32(9), 1192-1203. doi:10.1080/23273798.2017.1323109.

    Abstract

    We investigate the effects of two types of relationship between the words of a sentence or text – predictability and semantic similarity – by reanalysing electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) data from studies in which participants comprehend naturalistic stimuli. Each content word's predictability given previous words is quantified by a probabilistic language model, and semantic similarity to previous words is quantified by a distributional semantics model. Brain activity time-locked to each word is regressed on the two model-derived measures. Results show that predictability and semantic similarity have near identical N400 effects but are dissociated in the fMRI data, with word predictability related to activity in, among others, the visual word-form area, and semantic similarity related to activity in areas associated with the semantic network. This indicates that both predictability and similarity play a role during natural language comprehension and modulate distinct cortical regions.
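    The semantic-similarity measure here is the cosine between a word's distributional vector and those of the preceding words. A toy sketch with made-up 3-dimensional vectors (the study uses high-dimensional vectors trained on corpora):

```python
import math

# Hypothetical 3-d word embeddings, invented for illustration.
vectors = {
    "coffee": [0.9, 0.1, 0.2],
    "tea":    [0.8, 0.2, 0.1],
    "drank":  [0.6, 0.5, 0.1],
    "planet": [0.1, 0.9, 0.7],
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def similarity_to_context(word, context):
    """Cosine between a word's vector and the mean of its context vectors."""
    ctx = [sum(vectors[w][i] for w in context) / len(context) for i in range(3)]
    return cosine(vectors[word], ctx)

context = ["drank", "coffee"]
print(similarity_to_context("tea", context))     # semantically close: high
print(similarity_to_context("planet", context))  # semantically distant: low
```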
  • Franken, M. K., Acheson, D. J., McQueen, J. M., Eisner, F., & Hagoort, P. (2017). Individual variability as a window on production-perception interactions in speech motor control. The Journal of the Acoustical Society of America, 142(4), 2007-2018. doi:10.1121/1.5006899.

    Abstract

    An important part of understanding speech motor control consists of capturing the interaction between speech production and speech perception. This study tests a prediction of theoretical frameworks that have tried to account for these interactions: if speech production targets are specified in auditory terms, individuals with better auditory acuity should have more precise speech targets, evidenced by decreased within-phoneme variability and increased between-phoneme distance. A study was carried out consisting of perception and production tasks in counterbalanced order. Auditory acuity was assessed using an adaptive speech discrimination task, while production variability was determined using a pseudo-word reading task. Analyses of the production data were carried out to quantify average within-phoneme variability as well as average between-phoneme contrasts. Results show that individuals not only vary in their production and perceptual abilities, but that better discriminators have more distinctive vowel production targets (that is, targets with less within-phoneme variability and greater between-phoneme distances), confirming the initial hypothesis. This association between speech production and perception did not depend on local phoneme density in vowel space. This study suggests that better auditory acuity leads to more precise speech production targets, which may be a consequence of auditory feedback affecting speech production over time.
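    The two production measures can be illustrated on hypothetical (F1, F2) formant data; the values below are invented, not taken from the study:

```python
import math

# Hypothetical (F1, F2) formant measurements in Hz for two vowels;
# in the study such data come from a pseudo-word reading task.
productions = {
    "i": [(300, 2250), (310, 2300), (295, 2280)],
    "a": [(750, 1200), (780, 1150), (760, 1180)],
}

def centroid(points):
    """Mean position of a set of (F1, F2) points."""
    return tuple(sum(c) / len(points) for c in zip(*points))

# Within-phoneme variability: mean distance of productions to their centroid.
within = {
    v: sum(math.dist(p, centroid(pts)) for p in pts) / len(pts)
    for v, pts in productions.items()
}

# Between-phoneme distance: distance between the two vowel centroids.
between = math.dist(centroid(productions["i"]), centroid(productions["a"]))

print(within)   # small values: tight clusters
print(between)  # large value: well-separated targets
```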
  • Grabot, L., Kösem, A., Azizi, L., & Van Wassenhove, V. (2017). Prestimulus Alpha Oscillations and the Temporal Sequencing of Audio-visual Events. Journal of Cognitive Neuroscience, 29(9), 1566-1582. doi:10.1162/jocn_a_01145.

    Abstract

    Perceiving the temporal order of sensory events typically depends on participants' attentional state, thus likely on the endogenous fluctuations of brain activity. Using magnetoencephalography, we sought to determine whether spontaneous brain oscillations could disambiguate the perceived order of auditory and visual events presented in close temporal proximity, that is, at the individual's perceptual order threshold (Point of Subjective Simultaneity [PSS]). Two neural responses were found to index an individual's temporal order perception when contrasting brain activity as a function of perceived order (i.e., perceiving the sound first vs. perceiving the visual event first) given the same physical audiovisual sequence. First, average differences in prestimulus auditory alpha power indicated perceiving the correct ordering of audiovisual events irrespective of which sensory modality came first: a relatively low alpha power indicated perceiving auditory or visual first as a function of the actual sequence order. Additionally, the relative changes in the amplitude of the auditory (but not visual) evoked responses were correlated with participants' correct performance. Crucially, the sign of the magnitude difference in prestimulus alpha power and evoked responses between perceived audiovisual orders correlated with an individual's PSS. Taken together, our results suggest that spontaneous oscillatory activity cannot disambiguate subjective temporal order without prior knowledge of the individual's bias toward perceiving one or the other sensory modality first. Altogether, our results suggest that, under high perceptual uncertainty, the magnitude of prestimulus alpha (de)synchronization indicates the amount of compensation needed to overcome an individual's prior in the serial ordering and temporal sequencing of information.
  • Hagoort, P. (2017). Don't forget neurobiology: An experimental approach to linguistic representation. Commentary on Branigan and Pickering "An experimental approach to linguistic representation". Behavioral and Brain Sciences, 40: e292. doi:10.1017/S0140525X17000401.

    Abstract

    Acceptability judgments are no longer acceptable as the holy grail for testing the nature of linguistic representations. Experimental and quantitative methods should be used to test theoretical claims in psycholinguistics. These methods should include not only behavior, but also the more recent possibilities to probe the neural codes for language-relevant representations.
  • Hagoort, P. (2017). The core and beyond in the language-ready brain. Neuroscience and Biobehavioral Reviews, 81, 194-204. doi:10.1016/j.neubiorev.2017.01.048.

    Abstract

    In this paper a general cognitive architecture of spoken language processing is specified. This is followed by an account of how this cognitive architecture is instantiated in the human brain. Both the spatial aspects of the networks for language and their temporal dynamics and underlying neurophysiology are discussed. A distinction is proposed between networks for coding/decoding linguistic information and additional networks for getting from coded meaning to speaker meaning, i.e. for making the inferences that enable the listener to understand the intentions of the speaker.
  • Hartung, F., Hagoort, P., & Willems, R. M. (2017). Readers select a comprehension mode independent of pronoun: Evidence from fMRI during narrative comprehension. Brain and Language, 170, 29-38. doi:10.1016/j.bandl.2017.03.007.

    Abstract

    Perspective is a crucial feature for communicating about events. Yet it is unclear how linguistically encoded perspective relates to cognitive perspective taking. Here, we tested the effect of perspective taking with short literary stories. Participants listened to stories with 1st or 3rd person pronouns referring to the protagonist, while undergoing fMRI. When comparing action events with 1st and 3rd person pronouns, we found no evidence for a neural dissociation depending on the pronoun. A split-sample approach based on the self-reported experience of perspective taking revealed 3 comprehension preferences. One group showed a strong 1st person preference, another a strong 3rd person preference, while a third group engaged in 1st and 3rd person perspective taking simultaneously. Comparing brain activations of the groups revealed different neural networks. Our results suggest that comprehension is perspective dependent, driven not by the perspective suggested by the text but by the reader’s (situational) preference.
  • Hartung, F., Withers, P., Hagoort, P., & Willems, R. M. (2017). When fiction is just as real as fact: No differences in reading behavior between stories believed to be based on true or fictional events. Frontiers in Psychology, 8: 1618. doi:10.3389/fpsyg.2017.01618.

    Abstract

    Experiments have shown that compared to fictional texts, readers read factual texts faster and have better memory for described situations. Reading fictional texts, on the other hand, seems to improve memory for exact wordings and expressions. Most of these studies used a ‘newspaper’ versus ‘literature’ comparison. In the present study, we investigated the effect of readers’ expectations as to whether information is true or fictional with a subtler manipulation, by labelling short stories as either based on true or fictional events. In addition, we tested whether narrative perspective or individual preference in perspective taking affects reading true or fictional stories differently. In an online experiment, participants (final N=1742) read one story which was introduced as based on true events or as fictional (factor fictionality). The story could be narrated in either 1st or 3rd person perspective (factor perspective). We measured immersion in and appreciation of the story, perspective taking, as well as memory for events. We found no evidence that knowing a story is fictional or based on true events influences reading behavior or experiential aspects of reading. We suggest that it is not whether a story is true or fictional, but rather expectations towards certain reading situations (e.g., reading a newspaper or literature) which affect behavior by activating appropriate reading goals. Results further confirm that narrative perspective partially influences perspective taking and experiential aspects of reading.
  • Heyselaar, E., Hagoort, P., & Segaert, K. (2017). How social opinion influences syntactic processing – An investigation using virtual reality. PLoS One, 12(4): e0174405. doi:10.1371/journal.pone.0174405.
  • Heyselaar, E., Hagoort, P., & Segaert, K. (2017). In dialogue with an avatar, language behavior is identical to dialogue with a human partner. Behavior Research Methods, 49(1), 46-60. doi:10.3758/s13428-015-0688-7.

    Abstract

    The use of virtual reality (VR) as a methodological tool is becoming increasingly popular in behavioral research as its flexibility allows for a wide range of applications. This new method has not been as widely accepted in the field of psycholinguistics, however, possibly due to the assumption that language processing during human-computer interactions does not accurately reflect human-human interactions. Yet at the same time there is a growing need to study human-human language interactions in a tightly controlled context, which has not been possible using existing methods. VR, however, offers experimental control over parameters that cannot be (as finely) controlled in the real world. As such, in this study we aim to show that human-computer language interaction is comparable to human-human language interaction in virtual reality. In the current study we compare participants’ language behavior in a syntactic priming task with human versus computer partners: we used a human partner, a human-like avatar with human-like facial expressions and verbal behavior, and a computer-like avatar which had this humanness removed. As predicted, our study shows comparable priming effects between the human and human-like avatar suggesting that participants attributed human-like agency to the human-like avatar. Indeed, when interacting with the computer-like avatar, the priming effect was significantly decreased. This suggests that when interacting with a human-like avatar, sentence processing is comparable to interacting with a human partner. Our study therefore shows that VR is a valid platform for conducting language research and studying dialogue interactions in an ecologically valid manner.
  • Heyselaar, E., Segaert, K., Walvoort, S. J., Kessels, R. P., & Hagoort, P. (2017). The role of nondeclarative memory in the skill for language: Evidence from syntactic priming in patients with amnesia. Neuropsychologia, 101, 97-105. doi:10.1016/j.neuropsychologia.2017.04.033.

    Abstract

    Syntactic priming, the phenomenon in which participants adopt the linguistic behaviour of their partner, is widely used in psycholinguistics to investigate syntactic operations. Although the phenomenon of syntactic priming is well documented, the memory system that supports the retention of this syntactic information long enough to influence future utterances is not as widely investigated. We aim to shed light on this issue by assessing patients with Korsakoff's amnesia on an active-passive syntactic priming task and compare their performance to controls matched in age, education, and premorbid intelligence. Patients with Korsakoff's syndrome display deficits in all subdomains of declarative memory, yet their nondeclarative memory remains intact, making them an ideal patient group to determine which memory system supports syntactic priming. In line with the hypothesis that syntactic priming relies on nondeclarative memory, the patient group shows strong priming tendencies (12.6% passive structure repetition). Our healthy control group did not show a priming tendency, presumably due to cognitive interference between declarative and nondeclarative memory. We discuss the results in relation to amnesia, aging, and compensatory mechanisms.
  • Hirschmann, J., Schoffelen, J.-M., Schnitzler, A., & Van Gerven, M. A. J. (2017). Parkinsonian rest tremor can be detected accurately based on neuronal oscillations recorded from the subthalamic nucleus. Clinical Neurophysiology, 128, 2029-2036. doi:10.1016/j.clinph.2017.07.419.

    Abstract

    Objective: To investigate the possibility of tremor detection based on deep brain activity.
    Methods: We re-analyzed recordings of local field potentials (LFPs) from the subthalamic nucleus in 10 PD patients (12 body sides) with spontaneously fluctuating rest tremor. Power in several frequency bands was estimated and used as input to Hidden Markov Models (HMMs) which classified short data segments as either tremor-free rest or rest tremor. HMMs were compared to direct threshold application to individual power features.
    Results: Applying a threshold directly to band-limited power was insufficient for tremor detection (mean area under the curve [AUC] of receiver operating characteristic: 0.64, STD: 0.19). Multi-feature HMMs, in contrast, allowed for accurate detection (mean AUC: 0.82, STD: 0.15), using four power features obtained from a single contact pair. Within-patient training yielded better accuracy than across-patient training (0.84 vs. 0.78, p = 0.03), yet tremor could often be detected accurately with either approach. High frequency oscillations (>200 Hz) were the best performing individual feature.
    Conclusions: LFP-based markers of tremor are robust enough to allow for accurate tremor detection in short data segments, provided that appropriate statistical models are used.
    Significance: LFP-based markers of tremor could be useful control signals for closed-loop deep brain stimulation.
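    The threshold baseline amounts to scoring a single band-power feature and summarizing performance with ROC AUC. A minimal sketch on synthetic data (the paper's HMMs additionally model the temporal structure of such features, which this sketch omits):

```python
import random

random.seed(1)

def auc(scores_pos, scores_neg):
    """Area under the ROC curve, computed as the probability that a random
    positive (tremor) segment scores higher than a random negative one."""
    wins = sum((p > n) + 0.5 * (p == n) for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

# Synthetic band-power values for tremor vs. tremor-free rest segments;
# real features would come from subthalamic LFP recordings.
tremor = [random.gauss(1.0, 1.0) for _ in range(200)]
rest = [random.gauss(0.0, 1.0) for _ in range(200)]

print(auc(tremor, rest))  # above chance (0.5), but far from perfect (1.0)
```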
  • Ito, A., Martin, A. E., & Nieuwland, M. S. (2017). How robust are prediction effects in language comprehension? Failure to replicate article-elicited N400 effects. Language, Cognition and Neuroscience, 32, 954-965. doi:10.1080/23273798.2016.1242761.

    Abstract

    Current psycholinguistic theory proffers prediction as a central, explanatory mechanism in language processing. However, widely-replicated prediction effects may not mean that prediction is necessary in language processing. As a case in point, C. D. Martin et al. [2013. Bilinguals reading in their second language do not predict upcoming words as native readers do. Journal of Memory and Language, 69(4), 574-588. doi:10.1016/j.jml.2013.08.001] reported ERP evidence for prediction in native- but not in non-native speakers. Articles mismatching an expected noun elicited larger negativity in the N400 time window compared to articles matching the expected noun in native speakers only. We attempted to replicate these findings, but found no evidence for prediction irrespective of language nativeness. We argue that pre-activation of the phonological form of upcoming nouns, as evidenced in article-elicited effects, may not be a robust phenomenon. A view of prediction as a necessary computation in language comprehension must be re-evaluated.
  • Ito, A., Martin, A. E., & Nieuwland, M. S. (2017). Why the A/AN prediction effect may be hard to replicate: A rebuttal to DeLong, Urbach & Kutas (2017). Language, Cognition and Neuroscience, 32(8), 974-983. doi:10.1080/23273798.2017.1323112.
  • Kita, S., Alibali, M. W., & Chu, M. (2017). How do gestures influence thinking and speaking? The Gesture-for-Conceptualization hypothesis. Psychological Review, 124(3), 245-266. doi:10.1037/rev0000059.

    Abstract

    People spontaneously produce gestures during speaking and thinking. The authors focus here on gestures that depict or indicate information related to the contents of concurrent speech or thought (i.e., representational gestures). Previous research indicates that such gestures have not only communicative functions, but also self-oriented cognitive functions. In this article, the authors propose a new theoretical framework, the gesture-for-conceptualization hypothesis, which explains the self-oriented functions of representational gestures. According to this framework, representational gestures affect cognitive processes in 4 main ways: gestures activate, manipulate, package, and explore spatio-motoric information for speaking and thinking. These four functions are shaped by gesture's ability to schematize information, that is, to focus on a small subset of available information that is potentially relevant to the task at hand. The framework is based on the assumption that gestures are generated from the same system that generates practical actions, such as object manipulation; however, gestures are distinct from practical actions in that they represent information. The framework provides a novel, parsimonious, and comprehensive account of the self-oriented functions of gestures. The authors discuss how the framework accounts for gestures that depict abstract or metaphoric content, and they consider implications for the relations between self-oriented and communicative functions of gestures.
  • Kösem, A., & Van Wassenhove, V. (2017). Distinct contributions of low and high frequency neural oscillations to speech comprehension. Language, Cognition and Neuroscience, 32(5), 536-544. doi:10.1080/23273798.2016.1238495.

    Abstract

    In the last decade, the involvement of neural oscillatory mechanisms in speech comprehension has been increasingly investigated. Current evidence suggests that low-frequency and high-frequency neural entrainment to the acoustic dynamics of speech are linked to its analysis. One crucial question is whether acoustical processing primarily modulates neural entrainment, or whether entrainment instead reflects linguistic processing. Here, we review studies investigating the effect of linguistic manipulations on neural oscillatory activity. In light of the current findings, we argue that theta (3–8 Hz) entrainment may primarily reflect the analysis of the acoustic features of speech. In contrast, recent evidence suggests that delta (1–3 Hz) and high-frequency activity (>40 Hz) are reliable indicators of perceived linguistic representations. The interdependence between low-frequency and high-frequency neural oscillations, as well as their causal role in speech comprehension, is further discussed with regard to neurophysiological models of speech processing.
  • Kunert, R., & Jongman, S. R. (2017). Entrainment to an auditory signal: Is attention involved? Journal of Experimental Psychology: General, 146(1), 77-88. doi:10.1037/xge0000246.

    Abstract

    Many natural auditory signals, including music and language, change periodically. The effect of such auditory rhythms on the brain is unclear however. One widely held view, dynamic attending theory, proposes that the attentional system entrains to the rhythm and increases attention at moments of rhythmic salience. In support, 2 experiments reported here show reduced response times to visual letter strings shown at auditory rhythm peaks, compared with rhythm troughs. However, we argue that an account invoking the entrainment of general attention should further predict rhythm entrainment to also influence memory for visual stimuli. In 2 pseudoword memory experiments we find evidence against this prediction. Whether a pseudoword is shown during an auditory rhythm peak or not is irrelevant for its later recognition memory in silence. Other attention manipulations, dividing attention and focusing attention, did result in a memory effect. This raises doubts about the suggested attentional nature of rhythm entrainment. We interpret our findings as support for auditory rhythm perception being based on auditory-motor entrainment, not general attention entrainment.
  • Lam, K. J. Y., Bastiaansen, M. C. M., Dijkstra, T., & Rueschemeyer, S. A. (2017). Making sense: motor activation and action plausibility during sentence processing. Language, Cognition and Neuroscience, 32(5), 590-600. doi:10.1080/23273798.2016.1164323.

    Abstract

    The current electroencephalography study investigated the relationship between the motor and (language) comprehension systems by simultaneously measuring mu and N400 effects. Specifically, we examined whether the pattern of motor activation elicited by verbs depends on the larger sentential context. A robust N400 congruence effect confirmed the contextual manipulation of action plausibility, a form of semantic congruency. Importantly, this study showed that: (1) Action verbs elicited more mu power decrease than non-action verbs when sentences described plausible actions. Action verbs thus elicited more motor activation than non-action verbs. (2) In contrast, when sentences described implausible actions, mu activity was present but the difference between the verb types was not observed. The increased processing associated with a larger N400 thus coincided with mu activity in sentences describing implausible actions. Altogether, context-dependent motor activation appears to play a functional role in deriving context-sensitive meaning.
  • Lewis, A. G., Schoffelen, J.-M., Hoffmann, C., Bastiaansen, M. C. M., & Schriefers, H. (2017). Discourse-level semantic coherence influences beta oscillatory dynamics and the N400 during sentence comprehension. Language, Cognition and Neuroscience, 32(5), 601-617. doi:10.1080/23273798.2016.1211300.

    Abstract

    In this study, we used electroencephalography to investigate the influence of discourse-level semantic coherence on electrophysiological signatures of local sentence-level processing. Participants read groups of four sentences that could either form coherent stories or were semantically unrelated. For semantically coherent discourses compared to incoherent ones, the N400 was smaller at sentences 2–4, while the visual N1 was larger at the third and fourth sentences. Oscillatory activity in the beta frequency range (13–21 Hz) was higher for coherent discourses. We relate the N400 effect to a disruption of local sentence-level semantic processing when sentences are unrelated. Our beta findings can be tentatively related to disruption of local sentence-level syntactic processing, but it cannot be fully ruled out that they are instead (or also) related to disrupted local sentence-level semantic processing. We conclude that manipulating discourse-level semantic coherence does have an effect on oscillatory power related to local sentence-level processing.
  • Lopopolo, A., Frank, S. L., Van den Bosch, A., & Willems, R. M. (2017). Using stochastic language models (SLM) to map lexical, syntactic, and phonological information processing in the brain. PLoS One, 12(5): e0177794. doi:10.1371/journal.pone.0177794.

    Abstract

    Language comprehension involves the simultaneous processing of information at the phonological, syntactic, and lexical level. We track these three distinct streams of information in the brain by using stochastic measures derived from computational language models to detect neural correlates of phoneme, part-of-speech, and word processing in an fMRI experiment. Probabilistic language models have proven to be useful tools for studying how language is processed as a sequence of symbols unfolding in time. Conditional probabilities between sequences of words are at the basis of probabilistic measures such as surprisal and perplexity, which have been successfully used as predictors of several behavioural and neural correlates of sentence processing. Here we computed perplexity from sequences of words, their parts of speech, and their phonemic transcriptions. Brain activity time-locked to each word is regressed on the three model-derived measures. We observe that the brain keeps track of the statistical structure of lexical, syntactic and phonological information in distinct areas.

    Additional information

    Data availability
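
    The Lopopolo et al. abstract describes surprisal and perplexity as measures built from conditional probabilities between word sequences. As a rough illustration only (a toy bigram model, not the stochastic language models trained on large corpora in the study), these quantities can be sketched as:

```python
import math
from collections import Counter

def bigram_probs(corpus):
    """Estimate conditional probabilities P(w_t | w_{t-1}) from tokenised sentences."""
    pair_counts, context_counts = Counter(), Counter()
    for sent in corpus:
        for prev, cur in zip(sent, sent[1:]):
            pair_counts[(prev, cur)] += 1
            context_counts[prev] += 1
    return {pair: c / context_counts[pair[0]] for pair, c in pair_counts.items()}

def surprisal(probs, prev, cur):
    """Surprisal in bits: -log2 P(cur | prev)."""
    return -math.log2(probs[(prev, cur)])

def perplexity(probs, sent):
    """Perplexity = 2 ** (mean surprisal over the sequence's bigrams)."""
    s = [surprisal(probs, p, c) for p, c in zip(sent, sent[1:])]
    return 2 ** (sum(s) / len(s))

probs = bigram_probs([["the", "dog", "barks"], ["the", "cat", "sleeps"]])
surprisal(probs, "the", "dog")  # 1.0 bit: "the" is followed by "dog" half the time
```

    The same machinery applies unchanged to part-of-speech or phoneme sequences: only the symbol inventory differs.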
  • Martin, A. E., Huettig, F., & Nieuwland, M. S. (2017). Can structural priming answer the important questions about language? A commentary on Branigan and Pickering "An experimental approach to linguistic representation". Behavioral and Brain Sciences, 40: e304. doi:10.1017/S0140525X17000528.

    Abstract

    While structural priming makes a valuable contribution to psycholinguistics, it does not allow direct observation of representation, nor escape “source ambiguity.” Structural priming taps into implicit memory representations and processes that may differ from what is used online. We question whether implicit memory for language can and should be equated with linguistic representation or with language processing.
  • Nieuwland, M. S., & Martin, A. E. (2017). Neural oscillations and a nascent corticohippocampal theory of reference. Journal of Cognitive Neuroscience, 29(5), 896-910. doi:10.1162/jocn_a_01091.

    Abstract

    The ability to use words to refer to the world is vital to the communicative power of human language. In particular, the anaphoric use of words to refer to previously mentioned concepts (antecedents) allows dialogue to be coherent and meaningful. Psycholinguistic theory posits that anaphor comprehension involves reactivating a memory representation of the antecedent. Whereas this implies the involvement of recognition memory, or the mnemonic sub-routines by which people distinguish old from new, the neural processes for reference resolution are largely unknown. Here, we report time-frequency analysis of four EEG experiments to reveal the increased coupling of functional neural systems associated with referentially coherent expressions compared to referentially problematic expressions. Despite varying in modality, language, and type of referential expression, all experiments showed larger gamma-band power for referentially coherent expressions compared to referentially problematic expressions. Beamformer analysis in high-density Experiment 4 localised the gamma-band increase to posterior parietal cortex around 400-600 ms after anaphor onset and to frontal-temporal cortex around 500-1000 ms. We argue that the observed gamma-band power increases reflect successful referential binding and resolution, which links incoming information to antecedents through an interaction between the brain’s recognition memory networks and frontal-temporal language network. We integrate these findings with previous results from patient and neuroimaging studies, and we outline a nascent cortico-hippocampal theory of reference.
  • Peeters, D., Snijders, T. M., Hagoort, P., & Ozyurek, A. (2017). Linking language to the visual world: Neural correlates of comprehending verbal reference to objects through pointing and visual cues. Neuropsychologia, 95, 21-29. doi:10.1016/j.neuropsychologia.2016.12.004.

    Abstract

    In everyday communication speakers often refer in speech and/or gesture to objects in their immediate environment, thereby shifting their addressee's attention to an intended referent. The neurobiological infrastructure involved in the comprehension of such basic multimodal communicative acts remains unclear. In an event-related fMRI study, we presented participants with pictures of a speaker and two objects while they concurrently listened to her speech. In each picture, one of the objects was singled out, either through the speaker's index-finger pointing gesture or through a visual cue that made the object perceptually more salient in the absence of gesture. A mismatch (compared to a match) between speech and the object singled out by the speaker's pointing gesture led to enhanced activation in left IFG and bilateral pMTG, showing the importance of these areas in conceptual matching between speech and referent. Moreover, a match (compared to a mismatch) between speech and the object made salient through a visual cue led to enhanced activation in the mentalizing system, arguably reflecting an attempt to converge on a jointly attended referent in the absence of pointing. These findings shed new light on the neurobiological underpinnings of the core communicative process of comprehending a speaker's multimodal referential act and stress the power of pointing as an important natural device to link speech to objects.
  • Rommers, J., Dickson, D. S., Norton, J. J. S., Wlotko, E. W., & Federmeier, K. D. (2017). Alpha and theta band dynamics related to sentential constraint and word expectancy. Language, Cognition and Neuroscience, 32(5), 576-589. doi:10.1080/23273798.2016.1183799.

    Abstract

    Despite strong evidence for prediction during language comprehension, the underlying mechanisms, and the extent to which they are specific to language, remain unclear. Re-analysing an event-related potentials study, we examined responses in the time-frequency domain to expected and unexpected (but plausible) words in strongly and weakly constraining sentences, and found results similar to those reported in nonverbal domains. Relative to expected words, unexpected words elicited an increase in the theta band (4–7 Hz) in strongly constraining contexts, suggesting the involvement of control processes to deal with the consequences of having a prediction disconfirmed. Prior to critical word onset, strongly constraining sentences exhibited a decrease in the alpha band (8–12 Hz) relative to weakly constraining sentences, suggesting that comprehenders can take advantage of predictive sentence contexts to prepare for the input. The results suggest that the brain recruits domain-general preparation and control mechanisms when making and assessing predictions during sentence comprehension.
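
    The theta (4–7 Hz) and alpha (8–12 Hz) effects above are power changes in specific frequency bands. As a minimal sketch of what "band power" means (a toy periodogram estimate on a synthetic signal, not the wavelet-based time-frequency analysis typically applied to EEG):

```python
import numpy as np

def band_power(signal, fs, fmin, fmax):
    """Mean spectral power within [fmin, fmax] Hz, from the FFT periodogram."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= fmin) & (freqs <= fmax)
    return power[mask].mean()

# Synthetic "EEG": a strong 6 Hz (theta) component plus a weak 10 Hz (alpha) one.
fs = 250                              # sampling rate in Hz
t = np.arange(0, 2, 1.0 / fs)         # 2 s of data
eeg = 2.0 * np.sin(2 * np.pi * 6 * t) + 0.5 * np.sin(2 * np.pi * 10 * t)

theta = band_power(eeg, fs, 4, 7)     # theta band, 4–7 Hz
alpha = band_power(eeg, fs, 8, 12)    # alpha band, 8–12 Hz
```

    In a real analysis such estimates would be computed per trial and time window, then compared across conditions (e.g., expected vs. unexpected words).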
  • Rommers, J., Meyer, A. S., & Praamstra, P. (2017). Lateralized electrical brain activity reveals covert attention allocation during speaking. Neuropsychologia, 95, 101-110. doi:10.1016/j.neuropsychologia.2016.12.013.

    Abstract

    Speakers usually begin to speak while only part of the utterance has been planned. Earlier work has shown that speech planning processes are reflected in speakers’ eye movements as they describe visually presented objects. However, to-be-named objects can be processed to some extent before they have been fixated upon, presumably because attention can be allocated to objects covertly, without moving the eyes. The present study investigated whether EEG could track speakers’ covert attention allocation as they produced short utterances to describe pairs of objects (e.g., “dog and chair”). The processing difficulty of each object was varied by presenting it in upright orientation (easy) or in upside down orientation (difficult). Background squares flickered at different frequencies in order to elicit steady-state visual evoked potentials (SSVEPs). The N2pc component, associated with the focusing of attention on an item, was detectable not only prior to speech onset, but also during speaking. The time course of the N2pc showed that attention shifted to each object in the order of mention prior to speech onset. Furthermore, greater processing difficulty increased the time speakers spent attending to each object. This demonstrates that the N2pc can track covert attention allocation in a naming task. In addition, an effect of processing difficulty at around 200–350 ms after stimulus onset revealed early attention allocation to the second to-be-named object. The flickering backgrounds elicited SSVEPs, but SSVEP amplitude was not influenced by processing difficulty. These results help complete the picture of the coordination of visual information uptake and motor output during speaking.
  • Schoffelen, J.-M., Hulten, A., Lam, N. H. L., Marquand, A. F., Udden, J., & Hagoort, P. (2017). Frequency-specific directed interactions in the human brain network for language. Proceedings of the National Academy of Sciences of the United States of America, 114(30), 8083-8088. doi:10.1073/pnas.1703155114.

    Abstract

    The brain’s remarkable capacity for language requires bidirectional interactions between functionally specialized brain regions. We used magnetoencephalography to investigate interregional interactions in the brain network for language while 102 participants were reading sentences. Using Granger causality analysis, we identified inferior frontal cortex and anterior temporal regions as receiving widespread input, and middle temporal regions as sending widespread output. This fits well with the notion that these regions play a central role in language processing. Characterization of the functional topology of this network, using data-driven matrix factorization to partition it into a set of subnetworks, revealed directed connections at distinct frequencies of interaction. Connections originating from temporal regions peaked at alpha frequency, whereas connections originating from frontal and parietal regions peaked at beta frequency. These findings indicate that the information flow between language-relevant brain areas, which is required for linguistic processing, may depend on the contributions of distinct brain rhythms.

    Additional information

    pnas.201703155SI.pdf
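
    Granger causality, as used in the Schoffelen et al. study, asks whether one signal's past improves prediction of another signal beyond that signal's own past. A toy bivariate sketch with ordinary least-squares AR models on synthetic data (the study itself used spectrally resolved Granger analysis on MEG source signals, which this does not reproduce):

```python
import numpy as np

def granger_strength(x, y, lags=2):
    """Log-ratio Granger measure of x's influence on y: compare the residual
    variance of an AR model of y on its own past ("restricted") against a
    model that also includes x's past ("full"). Values > 0 suggest that x
    helps predict y."""
    n = len(y)
    target = y[lags:]
    own_past = np.column_stack([y[lags - k:n - k] for k in range(1, lags + 1)])
    full = np.column_stack([own_past] + [x[lags - k:n - k] for k in range(1, lags + 1)])

    def resid_var(design):
        beta, *_ = np.linalg.lstsq(design, target, rcond=None)
        return np.mean((target - design @ beta) ** 2)

    return float(np.log(resid_var(own_past) / resid_var(full)))

# Toy data: y is driven by x's previous sample, so influence runs x -> y only.
rng = np.random.default_rng(42)
x = rng.standard_normal(1000)
y = np.empty(1000)
y[0] = 0.0
y[1:] = 0.8 * x[:-1] + 0.1 * rng.standard_normal(999)

forward = granger_strength(x, y)   # large: x's past predicts y
backward = granger_strength(y, x)  # near zero: y's past does not predict x
```

    Applying such directed measures to every pair of cortical regions, per frequency band, yields the kind of directed connectivity matrix that the study then factorized into subnetworks.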
  • Shitova, N., Roelofs, A., Schriefers, H., Bastiaansen, M., & Schoffelen, J.-M. (2017). Control adjustments in speaking: Electrophysiology of the Gratton effect in picture naming. Cortex, 92, 289-303. doi:10.1016/j.cortex.2017.04.017.

    Abstract

    Accumulating evidence suggests that spoken word production requires different amounts of top-down control depending on the prevailing circumstances. For example, during Stroop-like tasks, the interference in response time (RT) is typically larger following congruent trials than following incongruent trials. This effect is called the Gratton effect, and has been taken to reflect top-down control adjustments based on the previous trial type. Such control adjustments have been studied extensively in Stroop and Eriksen flanker tasks (mostly using manual responses), but not in the picture-word interference (PWI) task, which is a workhorse of language production research. In one of the few studies of the Gratton effect in PWI, Van Maanen and Van Rijn (2010) examined the effect in picture naming RTs during dual-task performance. Based on PWI effect differences between dual-task conditions, they argued that the functional locus of the PWI effect differs between post-congruent trials (i.e., locus in perceptual and conceptual encoding) and post-incongruent trials (i.e., locus in word planning). However, the dual-task procedure may have contaminated the results. We therefore performed an EEG study on the Gratton effect in a regular PWI task. We observed a PWI effect in the RTs, in the N400 component of the event-related brain potentials, and in the midfrontal theta power, regardless of the previous trial type. Moreover, the RTs, N400, and theta power reflected the Gratton effect. These results provide evidence that the PWI effect arises at the word planning stage following both congruent and incongruent trials, while the amount of top-down control changes depending on the previous trial type.
  • Silva, S., Inácio, F., Folia, V., & Petersson, K. M. (2017). Eye movements in implicit artificial grammar learning. Journal of Experimental Psychology: Learning, Memory, and Cognition, 43(9), 1387-1402. doi:10.1037/xlm0000350.

    Abstract

    Artificial grammar learning (AGL) has been probed with forced-choice behavioral tests (active tests). Recent attempts to probe the outcomes of learning (implicitly acquired knowledge) with eye-movement responses (passive tests) have shown null results. However, these latter studies have not tested for sensitivity effects, for example, increased eye movements on a printed violation. In this study, we tested for sensitivity effects in AGL tests with (Experiment 1) and without (Experiment 2) concurrent active tests (preference- and grammaticality classification) in an eye-tracking experiment. Eye movements discriminated between sequence types in passive tests and more so in active tests. The eye-movement profile did not differ between preference and grammaticality classification, and it resembled sensitivity effects commonly observed in natural syntax processing. Our findings show that the outcomes of implicit structured sequence learning can be characterized in eye tracking. More specifically, whole trial measures (dwell time, number of fixations) showed robust AGL effects, whereas first-pass measures (first-fixation duration) did not. Furthermore, our findings strengthen the link between artificial and natural syntax processing, and they shed light on the factors that determine performance differences in preference and grammaticality classification tests.
  • Silva, S., Petersson, K. M., & Castro, S. L. (2017). The effects of ordinal load on incidental temporal learning. Quarterly Journal of Experimental Psychology, 70(4), 664-674. doi:10.1080/17470218.2016.1146909.

    Abstract

    How can we grasp the temporal structure of events? A few studies have indicated that representations of temporal structure are acquired when there is an intention to learn, but not when learning is incidental. Response-to-stimulus intervals, uncorrelated temporal structures, unpredictable ordinal information, and lack of metrical organization have been pointed out as key obstacles to incidental temporal learning, but the literature includes piecemeal demonstrations of learning under all these circumstances. We suggest that the unacknowledged effects of ordinal load may help reconcile these conflicting findings, ordinal load referring to the cost of identifying the sequence of events (e.g., tones, locations) in which a temporal pattern is embedded. In a first experiment, we manipulated ordinal load into simple and complex levels. Participants learned ordinal-simple sequences, despite their uncorrelated temporal structure and lack of metrical organization. They did not learn ordinal-complex sequences, even though there were neither response-to-stimulus intervals nor unpredictable ordinal information. In a second experiment, we probed learning of ordinal-complex sequences with strong metrical organization, and again there was no learning. We conclude that ordinal load is a key obstacle to incidental temporal learning. Further analyses showed that the effect of ordinal load is to mask the expression of temporal knowledge, rather than to prevent learning.
  • Silva, S., Folia, V., Hagoort, P., & Petersson, K. M. (2017). The P600 in Implicit Artificial Grammar Learning. Cognitive Science, 41(1), 137-157. doi:10.1111/cogs.12343.

    Abstract

    The suitability of the Artificial Grammar Learning (AGL) paradigm to capture relevant aspects of the acquisition of linguistic structures has been empirically tested in a number of EEG studies. Some have shown a syntax-related P600 component, but it has not been ruled out that the AGL P600 effect is a response to surface features (e.g., subsequence familiarity) rather than the underlying syntax structure. Therefore, in this study, we controlled for the surface characteristics of the test sequences (associative chunk strength) and recorded the EEG before (baseline preference classification) and after (preference and grammaticality classification) exposure to a grammar. A typical, centroparietal P600 effect was elicited by grammatical violations after exposure, suggesting that the AGL P600 effect signals a response to structural irregularities. Moreover, preference and grammaticality classification showed a qualitatively similar ERP profile, strengthening the idea that the implicit structural mere exposure paradigm in combination with preference classification is a suitable alternative to the traditional grammaticality classification test.
  • Simon, E., & Sjerps, M. J. (2017). Phonological category quality in the mental lexicon of child and adult learners. International Journal of Bilingualism, 21(4), 474-499. doi:10.1177/1367006915626589.

    Abstract

    Aims and objectives: The aim was to identify which criteria children use to decide on the category membership of native and non-native vowels, and to get insight into the organization of phonological representations in the bilingual mind. Methodology: The study consisted of two cross-language mispronunciation detection tasks in which L2 vowels were inserted into L1 words and vice versa. In Experiment 1, 10- to 12-year-old Dutch-speaking children were presented with Dutch words which were either pronounced with the target Dutch vowel or with an English vowel inserted in the Dutch consonantal frame. Experiment 2 was a mirror of the first, with English words which were pronounced “correctly” or which were “mispronounced” with a Dutch vowel. Data and analysis: Analyses focused on the extent to which child and adult listeners accepted substitutions of Dutch vowels by English ones, and vice versa. Findings: The results of Experiment 1 revealed that between the age of ten and twelve children have well-established phonological vowel categories in their native language. However, Experiment 2 showed that in their non-native language, children tended to accept mispronounced items which involve sounds from their native language. At the same time, though, they did not fully rely on their native phonemic inventory because the children accepted most of the correctly pronounced English items. Originality: While many studies have examined native and non-native perception by infants and adults, studies on first and second language perception of school-age children are rare. This study adds to the body of literature aimed at expanding our knowledge in this area. Implications: The study has implications for models of the organization of the bilingual mind: while proficient adult non-native listeners generally have clearly separated sets of phonological representations for their two languages, for non-proficient child learners the L1 phonology still exerts a strong influence on the L2 phonology.
  • Soutschek, A., Burke, C. J., Beharelle, A. R., Schreiber, R., Weber, S. C., Karipidis, I. I., Ten Velden, J., Weber, B., Haker, H., Kalenscher, T., & Tobler, P. N. (2017). The dopaminergic reward system underpins gender differences in social preferences. Nature Human Behaviour, 1, 819-827. doi:10.1038/s41562-017-0226-y.

    Abstract

    Women are known to have stronger prosocial preferences than men, but it remains an open question as to how these behavioural differences arise from differences in brain functioning. Here, we provide a neurobiological account for the hypothesized gender difference. In a pharmacological study and an independent neuroimaging study, we tested the hypothesis that the neural reward system encodes the value of sharing money with others more strongly in women than in men. In the pharmacological study, we reduced receptor type-specific actions of dopamine, a neurotransmitter related to reward processing, which resulted in more selfish decisions in women and more prosocial decisions in men. Converging findings from an independent neuroimaging study revealed gender-related activity in neural reward circuits during prosocial decisions. Thus, the neural reward system appears to be more sensitive to prosocial rewards in women than in men, providing a neurobiological account for why women often behave more prosocially than men.

    A large body of evidence suggests that women are often more prosocial (for example, generous, altruistic and inequality averse) than men, at least when other factors such as reputation and strategic considerations are excluded [1,2,3]. This dissociation could result from cultural expectations and gender stereotypes, because in Western societies women are more strongly expected to be prosocial [4,5,6] and sensitive to variations in social context than men [1]. It remains an open question, however, whether and how on a neurobiological level the social preferences of women and men arise from differences in brain functioning. The assumption of gender differences in social preferences predicts that the neural reward system’s sensitivity to prosocial and selfish rewards should differ between women and men. Specifically, the hypothesis would be that the neural reward system is more sensitive to prosocial than selfish rewards in women and more sensitive to selfish than prosocial rewards in men. The goal of the current study was to test in two independent experiments for the hypothesized gender differences on both a pharmacological and a haemodynamic level. In particular, we examined the functions of the neurotransmitter dopamine using a dopamine receptor antagonist, and the role of the striatum (a brain region strongly innervated by dopamine neurons) during social decision-making in women and men using neuroimaging.

    The neurotransmitter dopamine is thought to play a key role in neural reward processing [7,8]. Recent evidence suggests that dopaminergic activity is sensitive not only to rewards for oneself but to rewards for others as well [9]. The assumption that dopamine is sensitive to both self- and other-related outcomes is consistent with the finding that the striatum shows activation for both selfish and shared rewards [10,11,12,13,14,15]. The dopaminergic response may represent a net signal encoding the difference between the value of preferred and unpreferred rewards [8]. Regarding the hypothesized gender differences in social preferences, this account makes the following predictions. If women prefer shared (prosocial) outcomes [2], women’s dopaminergic signals to shared rewards will be stronger than to non-shared (selfish) rewards, so reducing dopaminergic activity should bias women to make more selfish decisions. In line with this hypothesis, a functional imaging study reported enhanced striatal activation in female participants during charitable donations [11]. In contrast, if men prefer selfish over prosocial rewards, dopaminergic activity should be enhanced to selfish compared to prosocial rewards. In line with this view, upregulating dopaminergic activity in a sample of exclusively male participants increased selfish behaviour in a bargaining game [16]. Thus, contrary to the hypothesized effect in women, reducing dopaminergic neurotransmission should render men more prosocial. Taken together, the current study tested the following three predictions: we expected the dopaminergic reward system (1) to be more sensitive to prosocial than selfish rewards in women and (2) to be more sensitive to selfish than prosocial rewards in men. As a consequence of these two predictions, we also predicted (3) dopaminoceptive regions such as the striatum to show stronger activation to prosocial relative to selfish rewards in women than in men.

    To test these predictions, we conducted a pharmacological study in which we reduced dopaminergic neurotransmission with amisulpride. Amisulpride is a dopamine antagonist that is highly specific for dopaminergic D2/D3 receptors [17]. After receiving amisulpride or placebo, participants performed an interpersonal decision task [18,19,20], in which they made choices between a monetary reward only for themselves (selfish reward option) and sharing money with others (prosocial reward option). We expected that blocking dopaminergic neurotransmission with amisulpride, relative to placebo, would result in fewer prosocial choices in women and more prosocial choices in men. To investigate whether potential gender-related effects of dopamine are selective for social decision-making, we also tested the effects of amisulpride on time preferences in a non-social control task that was matched to the interpersonal decision task in terms of choice structure.

    In addition, because dopaminergic neurotransmission plays a crucial role in brain regions involved in value processing, such as the striatum [21], a gender-related role of dopaminergic activity for social decision-making should also be reflected by dissociable activity patterns in the striatum. Therefore, to further test our hypothesis, we investigated the neural correlates of social decision-making in a functional imaging study. In line with our predictions for the pharmacological study, we expected to find stronger striatum activity during prosocial relative to selfish decisions in women, whereas men should show enhanced activity in the striatum for selfish relative to prosocial choices.

    Additional information

    Supplementary Information
  • Ye, Z., Stolk, A., Toni, I., & Hagoort, P. (2017). Oxytocin modulates semantic integration in speech comprehension. Journal of Cognitive Neuroscience, 29, 267-276. doi:10.1162/jocn_a_01044.

    Abstract

    Listeners interpret utterances by integrating information from multiple sources including word level semantics and world knowledge. When the semantics of an expression is inconsistent with his or her knowledge about the world, the listener may have to search through the conceptual space for alternative possible world scenarios that can make the expression more acceptable. Such cognitive exploration requires considerable computational resources and might depend on motivational factors. This study explores whether and how oxytocin, a neuropeptide known to influence social motivation by reducing social anxiety and enhancing affiliative tendencies, can modulate the integration of world knowledge and sentence meanings. The study used a between-participant double-blind randomized placebo-controlled design. Semantic integration, indexed with magnetoencephalography through the N400m marker, was quantified while 45 healthy male participants listened to sentences that were either congruent or incongruent with facts of the world, after receiving intranasally delivered oxytocin or placebo. Compared with congruent sentences, world knowledge incongruent sentences elicited a stronger N400m signal from the left inferior frontal and anterior temporal regions and medial pFC (the N400m effect) in the placebo group. Oxytocin administration significantly attenuated the N400m effect at both sensor and cortical source levels throughout the experiment, in a state-like manner. Additional electrophysiological markers suggest that the absence of the N400m effect in the oxytocin group is unlikely due to the lack of early sensory or semantic processing or a general downregulation of attention. These findings suggest that oxytocin drives listeners to resolve challenges of semantic integration, possibly by promoting the cognitive exploration of alternative possible world scenarios.
  • Takashima, A., Bakker, I., Van Hell, J. G., Janzen, G., & McQueen, J. M. (2017). Interaction between episodic and semantic memory networks in the acquisition and consolidation of novel spoken words. Brain and Language, 167, 44-60. doi:10.1016/j.bandl.2016.05.009.

    Abstract

    When a novel word is learned, its memory representation is thought to undergo a process of consolidation and integration. In this study, we tested whether the neural representations of novel words change as a function of consolidation by observing brain activation patterns just after learning and again after a delay of one week. Words learned with meanings were remembered better than those learned without meanings. Both episodic (hippocampus-dependent) and semantic (dependent on distributed neocortical areas) memory systems were utilised during recognition of the novel words. The extent to which the two systems were involved changed as a function of time and the amount of associated information, with more involvement of both systems for the meaningful words than for the form-only words after the one-week delay. These results suggest that the reason the meaningful words were remembered better is that their retrieval can benefit more from these two complementary memory systems.
  • Tan, Y., Martin, R. C., & Van Dyke, J. A. (2017). Semantic and syntactic interference in sentence comprehension: A comparison of working memory models. Frontiers in Psychology, 8: 198. doi:10.3389/fpsyg.2017.00198.

    Abstract

    This study investigated the nature of the underlying working memory system supporting sentence processing through examining individual differences in sensitivity to retrieval interference effects during sentence comprehension. Interference effects occur when readers incorrectly retrieve sentence constituents which are similar to those required during integrative processes. We examined interference arising from a partial match between distracting constituents and syntactic and semantic cues, and related these interference effects to performance on working memory, short-term memory (STM), vocabulary, and executive function tasks. For online sentence comprehension, as measured by self-paced reading, the magnitude of individuals' syntactic interference effects was predicted by general WM capacity and the relation remained significant when partialling out vocabulary, indicating that the effects were not due to verbal knowledge. For offline sentence comprehension, as measured by responses to comprehension questions, both general WM capacity and vocabulary knowledge interacted with semantic interference for comprehension accuracy, suggesting that both general WM capacity and the quality of semantic representations played a role in determining how well interference was resolved offline. For comprehension question reaction times, a measure of semantic STM capacity interacted with semantic but not syntactic interference. However, a measure of phonological capacity (digit span) and a general measure of resistance to response interference (Stroop effect) did not predict individuals' interference resolution abilities in either online or offline sentence comprehension. The results are discussed in relation to the multiple capacities account of working memory (e.g., Martin and Romani, 1994; Martin and He, 2004), and the cue-based retrieval parsing approach (e.g., Lewis et al., 2006; Van Dyke et al., 2014). While neither approach was fully supported, a possible means of reconciling the two approaches and directions for future research are proposed.
  • Tsuji, S., Fikkert, P., Minagawa, Y., Dupoux, E., Filippin, L., Versteegh, M., Hagoort, P., & Cristia, A. (2017). The more, the better? Behavioral and neural correlates of frequent and infrequent vowel exposure. Developmental Psychobiology, 59, 603-612. doi:10.1002/dev.21534.

    Abstract

    A central assumption in the perceptual attunement literature holds that exposure to a speech sound contrast leads to improvement in native speech sound processing. However, whether the amount of exposure matters for this process has not been put to a direct test. We elucidated indicators of frequency-dependent perceptual attunement by comparing 5–8-month-old Dutch infants’ discrimination of tokens containing a highly frequent [hɪt-he:t] and a highly infrequent [hʏt-hø:t] native vowel contrast as well as a non-native [hɛt-hæt] vowel contrast in a behavioral visual habituation paradigm (Experiment 1). Infants discriminated both native contrasts similarly well, but did not discriminate the non-native contrast. We sought further evidence for subtle differences in the processing of the two native contrasts using near-infrared spectroscopy and a within-participant design (Experiment 2). The neuroimaging data did not provide additional evidence that responses to native contrasts are modulated by frequency of exposure. These results suggest that even large differences in exposure to a native contrast may not directly translate to behavioral and neural indicators of perceptual attunement, raising the possibility that frequency of exposure does not influence improvements in discriminating native contrasts.

    Additional information

    dev21534-sup-0001-SuppInfo-S1.docx
  • Udden, J., Ingvar, M., Hagoort, P., & Petersson, K. M. (2017). Broca’s region: A causal role in implicit processing of grammars with crossed non-adjacent dependencies. Cognition, 164, 188-198. doi:10.1016/j.cognition.2017.03.010.

    Abstract

    Non-adjacent dependencies are challenging for the language learning machinery and are acquired later than adjacent dependencies. In this transcranial magnetic stimulation (TMS) study, we show that participants successfully discriminated between grammatical and non-grammatical sequences after having implicitly acquired an artificial language with crossed non-adjacent dependencies. Subsequent to transcranial magnetic stimulation of Broca’s region, discrimination was impaired compared to when a language-irrelevant control region (vertex) was stimulated. These results support the view that Broca’s region is engaged in structured sequence processing and extend previous functional neuroimaging results on artificial grammar learning (AGL) in two directions: first, the results establish that Broca’s region is a causal component in the processing of non-adjacent dependencies, and second, they show that implicit processing of non-adjacent dependencies engages Broca’s region. Since patients with lesions in Broca’s region do not always show grammatical processing difficulties, the result that Broca’s region is causally linked to processing of non-adjacent dependencies is a step towards clarification of the exact nature of syntactic deficits caused by lesions or perturbation to Broca’s region. Our findings are consistent with previous results and support a role for Broca’s region in general structured sequence processing, rather than a specific role for the processing of hierarchically organized sentence structure.
  • Udden, J., Snijders, T. M., Fisher, S. E., & Hagoort, P. (2017). A common variant of the CNTNAP2 gene is associated with structural variation in the left superior occipital gyrus. Brain and Language, 172, 16-21. doi:10.1016/j.bandl.2016.02.003.

    Abstract

    The CNTNAP2 gene encodes a cell-adhesion molecule that influences the properties of neural networks and the morphology and density of neurons and glial cells. Previous studies have shown association of CNTNAP2 variants with language-related phenotypes in health and disease. Here, we report associations of a common CNTNAP2 polymorphism (rs7794745) with variation in grey matter in a region in the dorsal visual stream. We tried to replicate an earlier study on 314 subjects by Tan and colleagues (2010), but now in a substantially larger group of more than 1700 subjects. Carriers of the T allele showed reduced grey matter volume in left superior occipital gyrus, while we did not replicate associations with grey matter volume in other regions identified by Tan et al. (2010). Our work illustrates the importance of independent replication in neuroimaging genetic studies of language-related candidate genes.
  • Van der Ven, F., Takashima, A., Segers, A., & Verhoeven, L. (2017). Semantic priming in Dutch children: Word meaning integration and study modality effects. Language Learning, 67(3), 546-568. doi:10.1111/lang.12235.
  • Van Bergen, G., & Flecken, M. (2017). Putting things in new places: Linguistic experience modulates the predictive power of placement verb semantics. Journal of Memory and Language, 92, 26-42. doi:10.1016/j.jml.2016.05.003.

    Abstract

    A central question regarding predictive language processing concerns the extent to which linguistic experience modulates the process. We approached this question by investigating sentence processing in advanced second language (L2) users with different native language (L1) backgrounds. Using a visual world eye tracking paradigm, we investigated to what extent L1 and L2 participants showed anticipatory eye movements to objects while listening to Dutch placement event descriptions. L2 groups differed in the degree of similarity between Dutch and their L1 with respect to placement verb semantics: German, like Dutch, specifies object position in placement verbs (put.STAND vs. put.LIE), whereas English and French typically leave position underspecified (put). Results showed that German L2 listeners, like native Dutch listeners, anticipate objects that match the verbally encoded position immediately upon encountering the verb. French/English L2 participants, however, did not show any prediction effects, despite proper understanding of Dutch placement verbs. Our findings suggest that prior experience with a specific semantic contrast in one’s L1 facilitates prediction in L2, and hence adds to the evidence that linguistic experience modulates predictive sentence processing.
  • Van Ekert, J., Wegman, J., Jansen, C., Takashima, A., & Janzen, G. (2017). The dynamics of memory consolidation of landmarks. Hippocampus, 27(4), 393-404. doi:10.1002/hipo.22698.

    Abstract

    Navigating through space is fundamental to human nature and requires the ability to retrieve relevant information from the remote past. With the passage of time, some memories become generic, capturing only a sense of familiarity. Yet, others maintain precision, even when acquired decades ago. Understanding the dynamics of memory consolidation is a major challenge to neuroscientists. Using functional magnetic resonance imaging, we systematically examined the effects of time and spatial context on the neural representation of landmark recognition memory. An equal number of male and female subjects (males N = 10, total N = 20) watched a route through a large-scale virtual environment. Landmarks occurred at navigationally relevant and irrelevant locations along the route. Recognition memory for landmarks was tested directly following encoding, 24 h later and 30 days later. Surprisingly, changes over time in the neural representation of navigationally relevant landmarks differed between males and females. In males, relevant landmarks selectively engaged the parahippocampal gyrus (PHG) regardless of the age of the memory. In females, the response to relevant landmarks gradually diminished with time in the PHG but strengthened progressively in the inferior frontal gyrus (IFG). Based on what is known about the functioning of the PHG and IFG, the findings of this study suggest that males maintain access to the initially formed spatial representation of landmarks whereas females become strongly dependent on a verbal representation of landmarks with time. Our findings yield a clear objective for future studies.
  • Varma, S., Takashima, A., Krewinkel, S., Van Kooten, M., Fu, L., Medendorp, W. P., Kessels, R. P. C., & Daselaar, S. M. (2017). Non-interfering effects of active post-encoding tasks on episodic memory consolidation in humans. Frontiers in Behavioral Neuroscience, 11: 54. doi:10.3389/fnbeh.2017.00054.

    Abstract

    So far, studies that investigated interference effects of post-learning processes on episodic memory consolidation in humans have used tasks involving only complex and meaningful information. Such tasks require reallocation of general or encoding-specific resources away from consolidation-relevant activities. The possibility that interference can be elicited using a task that heavily taxes our limited brain resources, but has low semantic and hippocampal-related long-term memory processing demands, has never been tested. We address this question by investigating whether consolidation could persist in parallel with an active, encoding-irrelevant, minimally semantic task, regardless of its high resource demands for cognitive processing. We distinguish the impact of such a task on consolidation based on whether it engages resources that are: (1) general/executive, or (2) specific/overlapping with the encoding modality. Our experiments compared subsequent memory performance across two post-encoding consolidation periods: quiet wakeful rest and a cognitively demanding n-Back task. Across six different experiments (total N = 176), we carefully manipulated the design of the n-Back task to target general or specific resources engaged in the ongoing consolidation process. In contrast to previous studies that employed interference tasks involving conceptual stimuli and complex processing demands, we did not find any differences between n-Back and rest conditions on memory performance at delayed test, using both recall and recognition tests. Our results indicate that: (1) quiet, wakeful rest is not a necessary prerequisite for episodic memory consolidation; and (2) post-encoding cognitive engagement does not interfere with memory consolidation when task-performance has minimal semantic and hippocampally-based episodic memory processing demands. We discuss our findings with reference to resource and reactivation-led interference theories.
  • Acheson, D. J. (2013). Signatures of response conflict monitoring in language production. Procedia - Social and Behavioral Sciences, 94, 214-215. doi:10.1016/j.sbspro.2013.09.106.
  • Acheson, D. J., & Hagoort, P. (2013). Stimulating the brain's language network: Syntactic ambiguity resolution after TMS to the IFG and MTG. Journal of Cognitive Neuroscience, 25(10), 1664-1677. doi:10.1162/jocn_a_00430.

    Abstract

    The posterior middle temporal gyrus (MTG) and inferior frontal gyrus (IFG) are two critical nodes of the brain's language network. Previous neuroimaging evidence has supported a dissociation in language comprehension in which parts of the MTG are involved in the retrieval of lexical syntactic information and the IFG is involved in unification operations that maintain, select, and integrate multiple sources of information over time. In the present investigation, we tested for causal evidence of this dissociation by modulating activity in IFG and MTG using an offline TMS procedure: continuous theta-burst stimulation. Lexical–syntactic retrieval was manipulated by using sentences with and without a temporary word-class (noun/verb) ambiguity (e.g., run). In one group of participants, TMS was applied to the IFG and MTG, and in a control group, no TMS was applied. Eye movements were recorded and quantified at two critical sentence regions: a temporarily ambiguous region and a disambiguating region. Results show that stimulation of the IFG led to a modulation of the ambiguity effect (ambiguous–unambiguous) at the disambiguating sentence region in three measures: first fixation durations, total reading times, and regressive eye movements into the region. Both IFG and MTG stimulation modulated the ambiguity effect for total reading times in the temporarily ambiguous sentence region relative to a control group. The current results demonstrate that an offline repetitive TMS protocol can have influences at a different point in time during online processing and provide causal evidence for IFG involvement in unification operations during sentence comprehension.
  • Andics, A., McQueen, J. M., & Petersson, K. M. (2013). Mean-based neural coding of voices. NeuroImage, 79, 351-360. doi:10.1016/j.neuroimage.2013.05.002.

    Abstract

    The social significance of recognizing the person who talks to us is obvious, but the neural mechanisms that mediate talker identification are unclear. Regions along the bilateral superior temporal sulcus (STS) and the inferior frontal cortex (IFC) of the human brain are selective for voices, and they are sensitive to rapid voice changes. Although it has been proposed that voice recognition is supported by prototype-centered voice representations, the involvement of these category-selective cortical regions in the neural coding of such "mean voices" has not previously been demonstrated. Using fMRI in combination with a voice identity learning paradigm, we show that voice-selective regions are involved in the mean-based coding of voice identities. Voice typicality is encoded on a supra-individual level in the right STS along a stimulus-dependent, identity-independent (i.e., voice-acoustic) dimension, and on an intra-individual level in the right IFC along a stimulus-independent, identity-dependent (i.e., voice identity) dimension. Voice recognition therefore entails at least two anatomically separable stages, each characterized by neural mechanisms that reference the central tendencies of voice categories.
  • Asaridou, S. S., & McQueen, J. M. (2013). Speech and music shape the listening brain: Evidence for shared domain-general mechanisms. Frontiers in Psychology, 4: 321. doi:10.3389/fpsyg.2013.00321.

    Abstract

    Are there bi-directional influences between speech perception and music perception? An answer to this question is essential for understanding the extent to which the speech and music that we hear are processed by domain-general auditory processes and/or by distinct neural auditory mechanisms. This review summarizes a large body of behavioral and neuroscientific findings which suggest that the musical experience of trained musicians does modulate speech processing, and a sparser set of data, largely on pitch processing, which suggest in addition that linguistic experience, in particular learning a tone language, modulates music processing. Although research has focused mostly on the effects of music on speech, we argue that both directions of influence need to be studied, and conclude that the picture which thus emerges is one of mutual interaction across domains. In particular, it is not simply that experience with spoken language has some effects on music perception, and vice versa, but that because of shared domain-general subcortical and cortical networks, experiences in both domains influence behavior in both domains.
  • De Boer, M., Toni, I., & Willems, R. M. (2013). What drives successful verbal communication? Frontiers in Human Neuroscience, 7: 622. doi:10.3389/fnhum.2013.00622.

    Abstract

    There is a vast amount of potential mappings between behaviors and intentions in communication: a behavior can indicate a multitude of different intentions, and the same intention can be communicated with a variety of behaviors. Humans routinely solve these many-to-many referential problems when producing utterances for an Addressee. This ability might rely on social cognitive skills, for instance, the ability to manipulate unobservable summary variables to disambiguate ambiguous behavior of other agents (“mentalizing”) and the drive to invest resources into changing and understanding the mental state of other agents (“communicative motivation”). Alternatively, the ambiguities of verbal communicative interactions might be solved by general-purpose cognitive abilities that process cues that are incidentally associated with the communicative interaction. In this study, we assess these possibilities by testing which cognitive traits account for communicative success during a verbal referential task. Cognitive traits were assessed with psychometric scores quantifying motivation, mentalizing abilities, and general-purpose cognitive abilities, taxing abstract visuo-spatial abilities. Communicative abilities of participants were assessed by using an on-line interactive task that required a speaker to verbally convey a concept to an Addressee. The communicative success of the utterances was quantified by measuring how frequently a number of Evaluators would infer the correct concept. Speakers with high motivational and general-purpose cognitive abilities generated utterances that were more easily interpreted. These findings extend to the domain of verbal communication the notion that motivational and cognitive factors influence the human ability to rapidly converge on shared communicative innovations.
  • Campisi, E., & Ozyurek, A. (2013). Iconicity as a communicative strategy: Recipient design in multimodal demonstrations for adults and children. Journal of Pragmatics, 47, 14-27. doi:10.1016/j.pragma.2012.12.007.

    Abstract

    Humans are the only species that uses communication to teach new knowledge to novices, usually to children (Tomasello, 1999 and Csibra and Gergely, 2006). This context of communication can employ “demonstrations” and it takes place with or without the help of objects (Clark, 1996). Previous research has focused on understanding the nature of demonstrations for very young children and with objects involved. However, little is known about the strategies used in demonstrating an action to an older child in comparison to another adult and without the use of objects, i.e., with gestures only. We tested if during demonstration of an action speakers use different degrees of iconicity in gestures for a child compared to an adult. 18 Italian subjects described to a camera how to make coffee imagining the listener as a 12-year-old child, a novice or an expert adult. While speech was found more informative both for the novice adult and for the child compared to the expert adult, the rate of iconic gestures increased and they were more informative and bigger only for the child compared to both of the adult conditions. Iconicity in gestures can be a powerful communicative strategy in teaching new knowledge to children in demonstrations and this is in line with claims that it can be used as a scaffolding device in grounding knowledge in experience (Perniss et al., 2010).
  • Cappuccio, M. L., Chu, M., & Kita, S. (2013). Pointing as an instrumental gesture: Gaze representation through indication. Humana.Mente: Journal of Philosophical Studies, 24, 125-149.

    Abstract

    We call those gestures “instrumental” that can enhance certain thinking processes of an agent by offering him representational models of his actions in a virtual space of imaginary performative possibilities. We argue that pointing is an instrumental gesture in that it represents geometrical information on one’s own gaze direction (i.e., a spatial model for attentional/ocular fixation/orientation), and provides a ritualized template for initiating gaze coordination and joint attention. We counter two possible objections, asserting respectively that the representational content of pointing is not constitutive, but derived from language, and that pointing directly solicits gaze coordination, without representing it. We consider two studies suggesting that attention and spatial perception are actively modified by one’s own pointing activity: the first study shows that pointing gestures help children link sets of objects to their corresponding number words; the second, that adults are faster and more accurate in counting when they point.
  • Cristia, A., Dupoux, E., Hakuno, Y., Lloyd-Fox, S., Schuetze, M., Kivits, J., Bergvelt, T., Van Gelder, M., Filippin, L., Charron, S., & Minagawa-Kawai, Y. (2013). An online database of infant functional Near InfraRed Spectroscopy studies: A community-augmented systematic review. PLoS One, 8(3): e58906. doi:10.1371/journal.pone.0058906.

    Abstract

    Until recently, imaging the infant brain was very challenging. Functional Near InfraRed Spectroscopy (fNIRS) is a promising, relatively novel technique, whose use is rapidly expanding. As an emergent field, it is particularly important to share methodological knowledge to ensure replicable and robust results. In this paper, we present a community-augmented database which will facilitate precisely this exchange. We tabulated articles and theses reporting empirical fNIRS research carried out on infants below three years of age along several methodological variables. The resulting spreadsheet has been uploaded in a format allowing individuals to continue adding new results, and download the most recent version of the table. Thus, this database is ideal to carry out systematic reviews. We illustrate its academic utility by focusing on the factors affecting three key variables: infant attrition, the reliability of oxygenated and deoxygenated responses, and signal-to-noise ratios. We then discuss strengths and weaknesses of the DBIfNIRS, and conclude by suggesting a set of simple guidelines aimed to facilitate methodological convergence through the standardization of reports.
  • Cristia, A. (2013). Input to language: The phonetics of infant-directed speech. Language and Linguistics Compass, 7, 157-170. doi:10.1111/lnc3.12015.

    Abstract

    Over the first year of life, infant perception changes radically as the child learns the phonology of the ambient language from the speech she is exposed to. Since infant-directed speech attracts the child's attention more than other registers, it is necessary to describe that input in order to understand language development, and to address questions of learnability. In this review, evidence from corpora analyses, experimental studies, and observational paradigms is brought together to outline the first comprehensive empirical picture of infant-directed speech and its effects on language acquisition. The ensuing landscape suggests that infant-directed speech provides an emotionally and linguistically rich input to language acquisition.

    Additional information

    Cristia_Suppl_Material.xls
  • Cristia, A., Mielke, J., Daland, R., & Peperkamp, S. (2013). Similarity in the generalization of implicitly learned sound patterns. Journal of Laboratory Phonology, 4(2), 259-285.

    Abstract

    A core property of language is the ability to generalize beyond observed examples. In two experiments, we explore how listeners generalize implicitly learned sound patterns to new nonwords and to new sounds, with the goal of shedding light on how similarity affects treatment of potential generalization targets. During the exposure phase, listeners heard nonwords whose onset consonant was restricted to a subset of a natural class (e.g., /d g v z Z/). During the test phase, listeners were presented with new nonwords and asked to judge how frequently they had been presented before; some of the test items began with a consonant from the exposure set (e.g., /d/), and some began with novel consonants with varying relations to the exposure set (e.g., /b/, which is highly similar to all onsets in the training set; /t/, which is highly similar to one of the training onsets; and /p/, which is less similar than the other two). The exposure onset was rated most frequent, indicating that participants encoded onset attestation in the exposure set, and generalized it to new nonwords. Participants also rated novel consonants as somewhat frequent, indicating generalization to onsets that did not occur in the exposure phase. While generalization could be accounted for in terms of featural distance, it was insensitive to natural class structure. Generalization to new sounds was predicted better by models requiring prior linguistic knowledge (either traditional distinctive features or articulatory phonetic information) than by a model based on a linguistically naïve measure of acoustic similarity.
  • Debreslioska, S., Ozyurek, A., Gullberg, M., & Perniss, P. M. (2013). Gestural viewpoint signals referent accessibility. Discourse Processes, 50(7), 431-456. doi:10.1080/0163853x.2013.824286.

    Abstract

    The tracking of entities in discourse is known to be a bimodal phenomenon. Speakers achieve cohesion in speech by alternating between full lexical forms, pronouns, and zero anaphora as they track referents. They also track referents in co-speech gestures. In this study, we explored how viewpoint is deployed in reference tracking, focusing on representations of animate entities in German narrative discourse. We found that gestural viewpoint systematically varies depending on discourse context. Speakers predominantly use character viewpoint in maintained contexts and observer viewpoint in reintroduced contexts. Thus, gestural viewpoint seems to function as a cohesive device in narrative discourse. The findings expand on and provide further evidence for the coordination between speech and gesture on the discourse level that is crucial to understanding the tight link between the two modalities.
  • Dolscheid, S., Shayan, S., Majid, A., & Casasanto, D. (2013). The thickness of musical pitch: Psychophysical evidence for linguistic relativity. Psychological Science, 24, 613-621. doi:10.1177/0956797612457374.

    Abstract

    Do people who speak different languages think differently, even when they are not using language? To find out, we used nonlinguistic psychophysical tasks to compare mental representations of musical pitch in native speakers of Dutch and Farsi. Dutch speakers describe pitches as high (hoog) or low (laag), whereas Farsi speakers describe pitches as thin (na-zok) or thick (koloft). Differences in language were reflected in differences in performance on two pitch-reproduction tasks, even though the tasks used simple, nonlinguistic stimuli and responses. To test whether experience using language influences mental representations of pitch, we trained native Dutch speakers to describe pitch in terms of thickness, as Farsi speakers do. After the training, Dutch speakers’ performance on a nonlinguistic psychophysical task resembled the performance of native Farsi speakers. People who use different linguistic space-pitch metaphors also think about pitch differently. Language can play a causal role in shaping nonlinguistic representations of musical pitch.

    Additional information

    DS_10.1177_0956797612457374.pdf
  • Eisner, F., Melinger, A., & Weber, A. (2013). Constraints on the transfer of perceptual learning in accented speech. Frontiers in Psychology, 4: 148. doi:10.3389/fpsyg.2013.00148.

    Abstract

    The perception of speech sounds can be re-tuned rapidly through a mechanism of lexically-driven learning (Norris et al., 2003, Cogn. Psych. 47). Here we investigated this type of learning for English voiced stop consonants, which are commonly de-voiced in word-final position by Dutch learners of English. Specifically, this study asked under which conditions the change in pre-lexical representation encodes phonological information about the position of the critical sound within a word. After exposure to a Dutch learner’s productions of de-voiced stops in word-final position (but not in any other positions), British English listeners showed evidence of perceptual learning in a subsequent cross-modal priming task, where auditory primes with voiceless final stops (e.g., ‘seat’), facilitated recognition of visual targets with voiced final stops (e.g., SEED). This learning generalized to test pairs where the critical contrast was in word-initial position, e.g. auditory primes such as ‘town’ facilitated recognition of visual targets like DOWN (Experiment 1). Control listeners, who had not heard any stops by the speaker during exposure, showed no learning effects. The generalization to word-initial position did not occur when participants had also heard correctly voiced, word-initial stops during exposure (Experiment 2), and when the speaker was a native BE speaker who mimicked the word-final devoicing (Experiment 3). These results suggest that word position can be encoded in the pre-lexical adjustment to the accented phoneme contrast. Lexically-guided feedback, distributional properties of the input, and long-term representations of accents all appear to modulate the pre-lexical re-tuning of phoneme categories.
  • Erb, J., Henry, M. J., Eisner, F., & Obleser, J. (2013). The brain dynamics of rapid perceptual adaptation to adverse listening conditions. The Journal of Neuroscience, 33, 10688-10697. doi:10.1523/JNEUROSCI.4596-12.2013.

    Abstract

    Listeners show a remarkable ability to quickly adjust to degraded speech input. Here, we aimed to identify the neural mechanisms of such short-term perceptual adaptation. In a sparse-sampling, cardiac-gated functional magnetic resonance imaging (fMRI) acquisition, human listeners heard and repeated back 4-band-vocoded sentences (in which the temporal envelope of the acoustic signal is preserved, while spectral information is highly degraded). Clear-speech trials were included as baseline. An additional fMRI experiment on amplitude modulation rate discrimination quantified the convergence of neural mechanisms that subserve coping with challenging listening conditions for speech and non-speech. First, the degraded speech task revealed an “executive” network (comprising the anterior insula and anterior cingulate cortex), parts of which were also activated in the non-speech discrimination task. Second, trial-by-trial fluctuations in successful comprehension of degraded speech drove hemodynamic signal change in classic “language” areas (bilateral temporal cortices). Third, as listeners perceptually adapted to degraded speech, downregulation in a cortico-striato-thalamo-cortical circuit was observable. The present data highlight differential upregulation and downregulation in auditory–language and executive networks, respectively, with important subcortical contributions when successfully adapting to a challenging listening situation.
  • Gentner, D., Ozyurek, A., Gurcanli, O., & Goldin-Meadow, S. (2013). Spatial language facilitates spatial cognition: Evidence from children who lack language input. Cognition, 127, 318-330. doi:10.1016/j.cognition.2013.01.003.

    Abstract

    Does spatial language influence how people think about space? To address this question, we observed children who did not know a conventional language, and tested their performance on nonlinguistic spatial tasks. We studied deaf children living in Istanbul whose hearing losses prevented them from acquiring speech and whose hearing parents had not exposed them to sign. Lacking a conventional language, the children used gestures, called homesigns, to communicate. In Study 1, we asked whether homesigners used gesture to convey spatial relations, and found that they did not. In Study 2, we tested a new group of homesigners on a Spatial Mapping Task, and found that they performed significantly worse than hearing Turkish children who were matched to the deaf children on another cognitive task. The absence of spatial language thus went hand-in-hand with poor performance on the nonlinguistic spatial task, pointing to the importance of spatial language in thinking about space.
  • Gross, J., Baillet, S., Barnes, G. R., Henson, R. N., Hillebrand, A., Jensen, O., Jerbi, K., Litvak, V., Maess, B., Oostenveld, R., Parkkonen, L., Taylor, J. R., Van Wassenhove, V., Wibral, M., & Schoffelen, J.-M. (2013). Good practice for conducting and reporting MEG research. NeuroImage, 65, 349-363. doi:10.1016/j.neuroimage.2012.10.001.

    Abstract

    Magnetoencephalographic (MEG) recordings are a rich source of information about the neural dynamics underlying cognitive processes in the brain, with excellent temporal and good spatial resolution. In recent years there have been considerable advances in MEG hardware developments as well as methodological developments. Sophisticated analysis techniques are now routinely applied and continuously improved, leading to fascinating insights into the intricate dynamics of neural processes. However, the rapidly increasing level of complexity of the different steps in a MEG study make it difficult for novices, and sometimes even for experts, to stay aware of possible limitations and caveats. Furthermore, the complexity of MEG data acquisition and data analysis requires special attention when describing MEG studies in publications, in order to facilitate interpretation and reproduction of the results. This manuscript aims at making recommendations for a number of important data acquisition and data analysis steps and suggests details that should be specified in manuscripts reporting MEG studies. These recommendations will hopefully serve as guidelines that help to strengthen the position of the MEG research community within the field of neuroscience, and may foster discussion within the community in order to further enhance the quality and impact of MEG research.
  • Hagoort, P. (2013). MUC (Memory, Unification, Control) and beyond. Frontiers in Psychology, 4: 416. doi:10.3389/fpsyg.2013.00416.

    Abstract

    A neurobiological model of language is discussed that overcomes the shortcomings of the classical Wernicke-Lichtheim-Geschwind model. It is based on a subdivision of language processing into three components: Memory, Unification, and Control. The functional components as well as the neurobiological underpinnings of the model are discussed. In addition, the need for extension of the model beyond the classical core regions for language is shown. Attentional networks as well as networks for inferential processing are crucial to realize language comprehension beyond single word processing and beyond decoding propositional content. It is shown that this requires the dynamic interaction between multiple brain regions.
  • Hagoort, P., & Meyer, A. S. (2013). What belongs together goes together: the speaker-hearer perspective. A commentary on MacDonald's PDC account. Frontiers in Psychology, 4: 228. doi:10.3389/fpsyg.2013.00228.

    Abstract

    First paragraph:
    MacDonald (2013) proposes that distributional properties of language and processing biases in language comprehension can to a large extent be attributed to consequences of the language production process. In essence, the account is derived from the principle of least effort that was formulated by Zipf, among others (Zipf, 1949; Levelt, 2013). However, in Zipf's view the outcome of the least effort principle was a compromise between least effort for the speaker and least effort for the listener, whereas MacDonald puts most of the burden on the production process.
  • Holler, J., Turner, K., & Varcianna, T. (2013). It's on the tip of my fingers: Co-speech gestures during lexical retrieval in different social contexts. Language and Cognitive Processes, 28(10), 1509-1518. doi:10.1080/01690965.2012.698289.

    Abstract

    The Lexical Retrieval Hypothesis proposes that gestures function at the level of speech production, aiding in the retrieval of lexical items from the mental lexicon. However, empirical evidence for this account is mixed, and some critics argue that a more likely function of gestures during lexical retrieval is a communicative one. The present study was designed to test these predictions against each other by keeping lexical retrieval difficulty constant while varying social context. Participants' gestures were analysed during tip of the tongue experiences when communicating with a partner face-to-face (FTF), while being separated by a screen, or on their own by speaking into a voice recorder. The results show that participants in the FTF context produced significantly more representational gestures than participants in the solitary condition. This suggests that, even in the specific context of lexical retrieval difficulties, representational gestures appear to play predominantly a communicative role.

  • Kaltwasser, L., Ries, S., Sommer, W., Knight, R., & Willems, R. M. (2013). Independence of valence and reward in emotional word processing: Electrophysiological evidence. Frontiers in Psychology, 4: 168. doi:10.3389/fpsyg.2013.00168.

    Abstract

    Both emotion and reward are primary modulators of cognition: Emotional word content enhances word processing, and reward expectancy similarly amplifies cognitive processing from the perceptual up to the executive control level. Here, we investigate how these primary regulators of cognition interact. We studied how the anticipation of gain or loss modulates the neural time course (event-related potentials, ERPs) related to the processing of emotional words. Participants performed a semantic categorization task on emotional and neutral words, which were preceded by a cue indicating that performance could lead to monetary gain or loss. Emotion-related and reward-related effects occurred in different time windows, did not interact statistically, and showed different topographies. This speaks for an independence of reward expectancy and the processing of emotional word content. Therefore, privileged processing given to emotionally valenced words seems immune to short-term modulation of reward. Models of language comprehension should be able to incorporate effects of reward and emotion on language processing, and the current study argues for an architecture in which reward and emotion do not share a common neurobiological mechanism.
  • Kominsky, J. F., & Casasanto, D. (2013). Specific to whose body? Perspective taking and the spatial mapping of valence. Frontiers in Psychology, 4: 266. doi:10.3389/fpsyg.2013.00266.

    Abstract

    People tend to associate the abstract concepts of “good” and “bad” with their fluent and disfluent sides of space, as determined by their natural handedness or by experimental manipulation (Casasanto, 2011). Here we investigated influences of spatial perspective taking on the spatialization of “good” and “bad.” In the first experiment, participants indicated where a schematically drawn cartoon character would locate “good” and “bad” stimuli. Right-handers tended to assign “good” to the right and “bad” to the left side of egocentric space when the character shared their spatial perspective, but when the character was rotated 180° this spatial mapping was reversed: good was assigned to the character’s right side, not the participant’s. The tendency to spatialize valence from the character’s perspective was stronger in the second experiment, when participants were shown a full-featured photograph of the character. In a third experiment, most participants not only spatialized “good” and “bad” from the character’s perspective, they also based their judgments on a salient attribute of the character’s body (an injured hand) rather than their own body. Taking another’s spatial perspective encourages people to compute space-valence mappings using an allocentric frame of reference, based on the fluency with which the other person could perform motor actions with their right or left hand. When people reason from their own spatial perspective, their judgments depend, in part, on the specifics of their bodies; when people reason from someone else’s perspective, their judgments may depend on the specifics of the other person’s body, instead.
  • Kooijman, V., Junge, C., Johnson, E. K., Hagoort, P., & Cutler, A. (2013). Predictive brain signals of linguistic development. Frontiers in Psychology, 4: 25. doi:10.3389/fpsyg.2013.00025.

    Abstract

    The ability to extract word forms from continuous speech is a prerequisite for constructing a vocabulary and emerges in the first year of life. Electrophysiological (ERP) studies of speech segmentation by 9- to 12-month-old listeners in several languages have found a left-localized negativity linked to word onset as a marker of word detection. We report an ERP study showing significant evidence of speech segmentation in Dutch-learning 7-month-olds. In contrast to the left-localized negative effect reported with older infants, the observed overall mean effect had a positive polarity. Inspection of individual results revealed two participant sub-groups: a majority showing a positive-going response, and a minority showing the left negativity observed in older age groups. We retested participants at age three, on vocabulary comprehension and word and sentence production. On every test, children who at 7 months had shown the negativity associated with segmentation of words from speech outperformed those who had produced positive-going brain responses to the same input. The earlier that infants show the left-localized brain responses typically indicating detection of words in speech, the better their early childhood language skills.
  • Kristensen, L. B., Wang, L., Petersson, K. M., & Hagoort, P. (2013). The interface between language and attention: Prosodic focus marking recruits a general attention network in spoken language comprehension. Cerebral Cortex, 23, 1836-1848. doi:10.1093/cercor/bhs164.

    Abstract

    In spoken language, pitch accent can mark certain information as focus, whereby more attentional resources are allocated to the focused information. Using functional magnetic resonance imaging, this study examined whether pitch accent, used for marking focus, recruited general attention networks during sentence comprehension. In a language task, we independently manipulated the prosody and semantic/pragmatic congruence of sentences. We found that semantic/pragmatic processing affected bilateral inferior and middle frontal gyrus. The prosody manipulation showed bilateral involvement of the superior/inferior parietal cortex, superior and middle temporal cortex, as well as inferior, middle, and posterior parts of the frontal cortex. We compared these regions with attention networks localized in an auditory spatial attention task. Both tasks activated bilateral superior/inferior parietal cortex, superior temporal cortex, and left precentral cortex. Furthermore, an interaction between prosody and congruence was observed in bilateral inferior parietal regions: for incongruent sentences, but not for congruent ones, there was a larger activation if the incongruent word carried a pitch accent, than if it did not. The common activations between the language task and the spatial attention task demonstrate that pitch accent activates a domain general attention network, which is sensitive to semantic/pragmatic aspects of language. Therefore, attention and language comprehension are highly interactive.

  • Lai, V. T., & Curran, T. (2013). ERP evidence for conceptual mappings and comparison processes during the comprehension of conventional and novel metaphors. Brain and Language, 127(3), 484-496. doi:10.1016/j.bandl.2013.09.010.

    Abstract

    Cognitive linguists suggest that understanding metaphors requires activation of conceptual mappings between the involved concepts. We tested whether mappings are indeed in use during metaphor comprehension, and what mapping means as a cognitive process with Event-Related Potentials. Participants read literal, conventional metaphorical, novel metaphorical, and anomalous target sentences preceded by primes with related or unrelated mappings. Experiment 1 used sentence-primes to activate related mappings, and Experiment 2 used simile-primes to induce comparison thinking. In the unprimed conditions of both experiments, metaphors elicited N400s more negative than the literals. In Experiment 1, related sentence-primes reduced the metaphor-literal N400 difference in conventional, but not in novel metaphors. In Experiment 2, related simile-primes reduced the metaphor-literal N400 difference in novel, but not clearly in conventional metaphors. We suggest that mapping as a process occurs in metaphors, and the ways in which it can be facilitated by comparison differ between conventional and novel metaphors.

  • Lai, J., & Poletiek, F. H. (2013). How “small” is “starting small” for learning hierarchical centre-embedded structures? Journal of Cognitive Psychology, 25, 423-435. doi:10.1080/20445911.2013.779247.

    Abstract

    Hierarchical centre-embedded structures pose a large difficulty for language learners due to their complexity. A recent artificial grammar learning study (Lai & Poletiek, 2011) demonstrated a starting-small (SS) effect, i.e., staged-input and sufficient exposure to 0-level-of-embedding exemplars were the critical conditions in learning AnBn structures. The current study aims to test: (1) a more sophisticated type of SS (a gradually rather than discretely growing input), and (2) the frequency distribution of the input. The results indicate that SS optimally works under other conditional cues, such as a skewed frequency distribution with simple stimuli being more numerous than complex ones.
  • Lai, V. T., & Boroditsky, L. (2013). The immediate and chronic influence of spatio-temporal metaphors on the mental representations of time in English, Mandarin, and Mandarin-English speakers. Frontiers in Psychology, 4: 142. doi:10.3389/fpsyg.2013.00142.

    Abstract

    In this paper we examine whether experience with spatial metaphors for time has an influence on people’s representation of time. In particular we ask whether spatiotemporal metaphors can have both chronic and immediate effects on temporal thinking. In Study 1, we examine the prevalence of ego-moving representations for time in Mandarin speakers, English speakers, and Mandarin-English (ME) bilinguals. As predicted by observations in linguistic analyses, we find that Mandarin speakers are less likely to take an ego-moving perspective than are English speakers. Further, we find that ME bilinguals tested in English are less likely to take an ego-moving perspective than are English monolinguals (an effect of L1 on meaning-making in L2), and also that ME bilinguals tested in Mandarin are more likely to take an ego-moving perspective than are Mandarin monolinguals (an effect of L2 on meaning-making in L1). These findings demonstrate that habits of metaphor use in one language can influence temporal reasoning in another language, suggesting the metaphors can have a chronic effect on patterns in thought. In Study 2 we test Mandarin speakers using either horizontal or vertical metaphors in the immediate context of the task. We find that Mandarin speakers are more likely to construct front-back representations of time when understanding front-back metaphors, and more likely to construct up-down representations of time when understanding up-down metaphors. These findings demonstrate that spatiotemporal metaphors can also have an immediate influence on temporal reasoning. Taken together, these findings demonstrate that the metaphors we use to talk about time have both immediate and long-term consequences for how we conceptualize and reason about this fundamental domain of experience.
  • Larson-Prior, L., Oostenveld, R., Della Penna, S., Michalareas, G., Prior, F., Babajani-Feremi, A., Schoffelen, J.-M., Marzetti, L., de Pasquale, F., Pompeo, F. D., Stout, J., Woolrich, M., Luo, Q., Bucholz, R., Fries, P., Pizzella, V., Romani, G., Corbetta, M., & Snyder, A. (2013). Adding dynamics to the Human Connectome Project with MEG. NeuroImage, 80, 190-201. doi:10.1016/j.neuroimage.2013.05.056.

    Abstract

    The Human Connectome Project (HCP) seeks to map the structural and functional connections between network elements in the human brain. Magnetoencephalography (MEG) provides a temporally rich source of information on brain network dynamics and represents one source of functional connectivity data to be provided by the HCP. High quality MEG data will be collected from 50 twin pairs both in the resting state and during performance of motor, working memory and language tasks. These data will be available to the general community. Additionally, using the cortical parcellation scheme common to all imaging modalities, the HCP will provide processing pipelines for calculating connection matrices as a function of time and frequency. Together with structural and functional data generated using magnetic resonance imaging methods, these data represent a unique opportunity to investigate brain network connectivity in a large cohort of normal adult human subjects. The analysis pipeline software and the dynamic connectivity matrices that it generates will all be made freely available to the research community.
  • Lüttjohann, A., Schoffelen, J.-M., & Van Luijtelaar, G. (2013). Peri-ictal network dynamics of spike-wave discharges: Phase and spectral characteristics. Experimental Neurology, 239, 235-247. doi:10.1016/j.expneurol.2012.10.021.

    Abstract

    Purpose: The brain is a highly interconnected neuronal assembly in which network analyses can greatly enlarge our knowledge on seizure generation. The cortico-thalamo-cortical network is the brain network of interest in absence epilepsy. Here, network synchronization is assessed in a genetic absence model during 5-second-long pre-ictal-to-ictal transition periods. Method: 16 male WAG/Rij rats were equipped with multiple electrodes targeting layers 4 to 6 of the somatosensory cortex, rostral and caudal RTN, VPM, and anterior (ATN) and posterior (Po) thalamic nuclei. Local field potentials measured during pre-ictal-to-ictal transitions and during control periods were subjected to time-frequency and pairwise phase consistency analysis. Results: Pre-ictally, all channels showed spike-wave discharge (SWD) precursor activity (increases in spectral power), which was earliest and most pronounced in the somatosensory cortex. The caudal RTN decoupled from VPM, Po, and cortical layer 4. Strong increases in synchrony were found between cortex and thalamus during SWD. Although increases between cortex and VPM were seen in SWD frequencies and their harmonics, broader spectral increases (6-48 Hz) were seen between cortex and Po. All thalamic nuclei showed increased phase synchronization with Po but not with VPM. Conclusion: Absence seizures are not sudden and unpredictable phenomena: the somatosensory cortex shows the highest and earliest precursor activity. The pre-ictal decoupling of the caudal RTN might be a prerequisite of SWD generation. The Po nucleus might be the primary thalamic counterpart to the somatosensory cortex in the generation of the cortico-thalamo-cortical oscillations referred to as SWD.
  • Mazzone, M., & Campisi, E. (2013). Distributed intentionality: A model of intentional behavior in humans. Philosophical Psychology, 26, 267-290. doi:10.1080/09515089.2011.641743.

    Abstract

    Is human behavior, and more specifically linguistic behavior, intentional? Some scholars have proposed that action is driven in a top-down manner by one single intention, i.e., one single conscious goal. Others have argued that actions are mostly non-intentional, insofar as often the single goal driving an action is not consciously represented. We intend to claim that both alternatives are unsatisfactory; more specifically, we claim that actions are intentional, but intentionality is distributed across complex goal-directed representations of action, rather than concentrated in single intentions driving action in a top-down manner. These complex representations encompass a multiplicity of goals, together with other components which are not goals themselves, and are the result of a largely automatic dynamic of activation; such automatic processing, however, does not preclude the involvement of conscious attention, shifting from one component to the other of the overall goal-directed representation.

  • Meyer, A. S., & Hagoort, P. (2013). What does it mean to predict one's own utterances? [Commentary on Pickering & Garrod]. Behavioral and Brain Sciences, 36, 367-368. doi:10.1017/S0140525X12002786.

    Abstract

    Many authors have recently highlighted the importance of prediction for language comprehension. Pickering & Garrod (P&G) are the first to propose a central role for prediction in language production. This is an intriguing idea, but it is not clear what it means for speakers to predict their own utterances, and how prediction during production can be empirically distinguished from production proper.
  • Minagawa-Kawai, Y., Cristia, A., Long, B., Vendelin, I., Hakuno, Y., Dutat, M., Filippin, L., Cabrol, D., & Dupoux, E. (2013). Insights on NIRS sensitivity from a cross-linguistic study on the emergence of phonological grammar. Frontiers in Psychology, 4: 170. doi:10.3389/fpsyg.2013.00170.

    Abstract

    Each language has a unique set of phonemic categories and phonotactic rules which determine permissible sound sequences in that language. Behavioral research demonstrates that one’s native language shapes the perception of both sound categories and sound sequences in adults, and neuroimaging results further indicate that the processing of native phonemes and phonotactics involves a left-dominant perisylvian brain network. Recent work using a novel technique, functional Near InfraRed Spectroscopy (NIRS), has suggested that a left-dominant network becomes evident toward the end of the first year of life as infants process phonemic contrasts. The present research project attempted to assess whether the same pattern would be seen for native phonotactics. We measured brain responses in Japanese- and French-learning infants to two contrasts: Abuna vs. Abna (a phonotactic contrast that is native in French, but not in Japanese) and Abuna vs. Abuuna (a vowel length contrast that is native in Japanese, but not in French). Results did not show a significant response to either contrast in either group, unlike both previous behavioral research on phonotactic processing and NIRS work on phonemic processing. To understand these null results, we performed similar NIRS experiments with Japanese adult participants. These data suggest that the infant null results arise from an interaction of multiple factors, involving the suitability of the experimental paradigm for NIRS measurements and stimulus perceptibility. We discuss the challenges facing this novel technique, particularly focusing on the optimal stimulus presentation which could yield strong enough hemodynamic responses when using the change detection paradigm.
  • Nieuwenhuis, I. L., Folia, V., Forkstam, C., Jensen, O., & Petersson, K. M. (2013). Sleep promotes the extraction of grammatical rules. PLoS One, 8(6): e65046. doi:10.1371/journal.pone.0065046.

    Abstract

    Grammar acquisition is a high level cognitive function that requires the extraction of complex rules. While it has been proposed that offline time might benefit this type of rule extraction, this remains to be tested. Here, we addressed this question using an artificial grammar learning paradigm. During a short-term memory cover task, eighty-one human participants were exposed to letter sequences generated according to an unknown artificial grammar. Following a time delay of 15 min, 12 h (wake or sleep) or 24 h, participants classified novel test sequences as Grammatical or Non-Grammatical. Previous behavioral and functional neuroimaging work has shown that classification can be guided by two distinct underlying processes: (1) the holistic abstraction of the underlying grammar rules and (2) the detection of sequence chunks that appear at varying frequencies during exposure. Here, we show that classification performance improved after sleep. Moreover, this improvement was due to an enhancement of rule abstraction, while the effect of chunk frequency was unaltered by sleep. These findings suggest that sleep plays a critical role in extracting complex structure from separate but related items during integrative memory processing. Our findings stress the importance of alternating periods of learning with sleep in settings in which complex information must be acquired.
  • Peeters, D., Dijkstra, T., & Grainger, J. (2013). The representation and processing of identical cognates by late bilinguals: RT and ERP effects. Journal of Memory and Language, 68, 315-332. doi:10.1016/j.jml.2012.12.003.

    Abstract

    Across the languages of a bilingual, translation equivalents can have the same orthographic form and shared meaning (e.g., TABLE in French and English). How such words, called orthographically identical cognates, are processed and represented in the bilingual brain is not well understood. In the present study, late French–English bilinguals processed such identical cognates and control words in an English lexical decision task. Both behavioral and electrophysiological data were collected. Reaction times to identical cognates were shorter than for non-cognate controls and depended on both English and French frequency. Cognates with a low English frequency showed a larger cognate advantage than those with a high English frequency. In addition, N400 amplitude was found to be sensitive to cognate status and both the English and French frequency of the cognate words. Theoretical consequences for the processing and representation of identical cognates are discussed.
  • Piai, V., Roelofs, A., Acheson, D. J., & Takashima, A. (2013). Attention for speaking: Neural substrates of general and specific mechanisms for monitoring and control. Frontiers in Human Neuroscience, 7: 832. doi:10.3389/fnhum.2013.00832.

    Abstract

    Accumulating evidence suggests that some degree of attentional control is required to regulate and monitor processes underlying speaking. Although progress has been made in delineating the neural substrates of the core language processes involved in speaking, substrates associated with regulatory and monitoring processes have remained relatively underspecified. We report the results of an fMRI study examining the neural substrates related to performance in three attention-demanding tasks varying in the amount of linguistic processing: vocal picture naming while ignoring distractors (picture-word interference, PWI); vocal color naming while ignoring distractors (Stroop); and manual object discrimination while ignoring spatial position (Simon task). All three tasks had congruent and incongruent stimuli, while PWI and Stroop also had neutral stimuli. Analyses focusing on common activation across tasks identified a portion of the dorsal anterior cingulate cortex (ACC) that was active in incongruent trials for all three tasks, suggesting that this region subserves a domain-general attentional control function. In the language tasks, this area showed increased activity for incongruent relative to congruent stimuli, consistent with the involvement of domain-general mechanisms of attentional control in word production. The two language tasks also showed activity in anterior-superior temporal gyrus (STG). Activity increased for neutral PWI stimuli (picture and word did not share the same semantic category) relative to incongruent (categorically related) and congruent stimuli. This finding is consistent with the involvement of language-specific areas in word production, possibly related to retrieval of lexical-semantic information from memory. The current results thus suggest that in addition to engaging language-specific areas for core linguistic processes, speaking also engages the ACC, a region that is likely implementing domain-general attentional control.
  • Piai, V., Meyer, L., Schreuder, R., & Bastiaansen, M. C. M. (2013). Sit down and read on: Working memory and long-term memory in particle-verb processing. Brain and Language, 127(2), 296-306. doi:10.1016/j.bandl.2013.09.015.

    Abstract

    Particle verbs (e.g., look up) are lexical items for which particle and verb share a single lexical entry. Using event-related brain potentials, we examined working memory and long-term memory involvement in particle-verb processing. Dutch participants read sentences with head verbs that allow zero, two, or more than five particles to occur downstream. Additionally, sentences were presented for which the encountered particle was semantically plausible, semantically implausible, or forming a non-existing particle verb. An anterior negativity was observed at the verbs that potentially allow for a particle downstream relative to verbs that do not, possibly indexing storage of the verb until the dependency with its particle can be closed. Moreover, a graded N400 was found at the particle (smallest amplitude for plausible particles and largest for particles forming non-existing particle verbs), suggesting that lexical access to a shared lexical entry occurred at two separate time points.
  • Rommers, J., Dijkstra, T., & Bastiaansen, M. C. M. (2013). Context-dependent semantic processing in the human brain: Evidence from idiom comprehension. Journal of Cognitive Neuroscience, 25(5), 762-776. doi:10.1162/jocn_a_00337.

    Abstract

    Language comprehension involves activating word meanings and integrating them with the sentence context. This study examined whether these routines are carried out even when they are theoretically unnecessary, namely in the case of opaque idiomatic expressions, for which the literal word meanings are unrelated to the overall meaning of the expression. Predictable words in sentences were replaced by a semantically related or unrelated word. In literal sentences, this yielded previously established behavioral and electrophysiological signatures of semantic processing: semantic facilitation in lexical decision, a reduced N400 for semantically related relative to unrelated words, and a power increase in the gamma frequency band that was disrupted by semantic violations. However, the same manipulations in idioms yielded none of these effects. Instead, semantic violations elicited a late positivity in idioms. Moreover, gamma band power was lower in correct idioms than in correct literal sentences. It is argued that the brain's semantic expectancy and literal word meaning integration operations can, to some extent, be “switched off” when the context renders them unnecessary. Furthermore, the results lend support to models of idiom comprehension that involve unitary idiom representations.
  • Segaert, K., Kempen, G., Petersson, K. M., & Hagoort, P. (2013). Syntactic priming and the lexical boost effect during sentence production and sentence comprehension: An fMRI study. Brain and Language, 124, 174-183. doi:10.1016/j.bandl.2012.12.003.

    Abstract

    Behavioral syntactic priming effects during sentence comprehension are typically observed only if both the syntactic structure and lexical head are repeated. In contrast, during production syntactic priming occurs with structure repetition alone, but the effect is boosted by repetition of the lexical head. We used fMRI to investigate the neuronal correlates of syntactic priming and lexical boost effects during sentence production and comprehension. The critical measure was the magnitude of fMRI adaptation to repetition of sentences in active or passive voice, with or without verb repetition. In conditions with repeated verbs, we observed adaptation to structure repetition in the left IFG and MTG, for active and passive voice. However, in the absence of repeated verbs, adaptation occurred only for passive sentences. None of the fMRI adaptation effects yielded differential effects for production versus comprehension, suggesting that sentence comprehension and production are subserved by the same neuronal infrastructure for syntactic processing.

    Additional information

    Segaert_Supplementary_data_2013.docx
  • Segaert, K., Weber, K., De Lange, F., Petersson, K. M., & Hagoort, P. (2013). The suppression of repetition enhancement: A review of fMRI studies. Neuropsychologia, 51, 59-66. doi:10.1016/j.neuropsychologia.2012.11.006.

    Abstract

    Repetition suppression in fMRI studies is generally thought to underlie behavioural facilitation effects (i.e., priming) and it is often used to identify the neuronal representations associated with a stimulus. However, this pays little heed to the large number of repetition enhancement effects observed under similar conditions. In this review, we identify several cognitive variables biasing repetition effects in the BOLD response towards enhancement instead of suppression. These variables are stimulus recognition, learning, attention, expectation and explicit memory. We also evaluate which models can account for these repetition effects and come to the conclusion that there is no one single model that is able to embrace all repetition enhancement effects. Accumulation, novel network formation as well as predictive coding models can all explain subsets of repetition enhancement effects.
  • Stolk, A., Verhagen, L., Schoffelen, J.-M., Oostenveld, R., Blokpoel, M., Hagoort, P., van Rooij, I., & Toni, I. (2013). Neural mechanisms of communicative innovation. Proceedings of the National Academy of Sciences of the United States of America, 110(36), 14574-14579. doi:10.1073/pnas.1303170110.

    Abstract

    Human referential communication is often thought of as the coding and decoding of a set of symbols, neglecting that establishing shared meanings requires a computational mechanism powerful enough to mutually negotiate them. Sharing the meaning of a novel symbol might rely on similar conceptual inferences across communicators or on statistical similarities in their sensorimotor behaviors. Using magnetoencephalography, we assess spectral, temporal, and spatial characteristics of neural activity evoked when people generate and understand novel shared symbols during live communicative interactions. Solving those communicative problems induced comparable changes in the spectral profile of neural activity of both communicators and addressees. This shared neuronal up-regulation was spatially localized to the right temporal lobe and the ventromedial prefrontal cortex and emerged even before the occurrence of a specific communicative problem. Communicative innovation relies on neuronal computations that are shared across generating and understanding novel shared symbols, operating over temporal scales independent of transient sensorimotor behavior.
  • Stolk, A., Todorovic, A., Schoffelen, J.-M., & Oostenveld, R. (2013). Online and offline tools for head movement compensation in MEG. NeuroImage, 68, 39-48. doi:10.1016/j.neuroimage.2012.11.047.

    Abstract

    Magnetoencephalography (MEG) is measured above the head, which makes it sensitive to variations of the head position with respect to the sensors. Head movements blur the topography of the neuronal sources of the MEG signal, increase localization errors, and reduce statistical sensitivity. Here we describe two novel and readily applicable methods that compensate for the detrimental effects of head motion on the statistical sensitivity of MEG experiments. First, we introduce an online procedure that continuously monitors head position. Second, we describe an offline analysis method that takes into account the head position time-series. We quantify the performance of these methods in the context of three different experimental settings, involving somatosensory, visual and auditory stimuli, assessing both individual and group-level statistics. The online head localization procedure allowed for optimal repositioning of the subjects over multiple sessions, resulting in a 28% reduction of the variance in dipole position and an improvement of up to 15% in statistical sensitivity. Offline incorporation of the head position time-series into the general linear model resulted in improvements of group-level statistical sensitivity between 15% and 29%. These tools can substantially reduce the influence of head movement within and between sessions, increasing the sensitivity of many cognitive neuroscience experiments.
  • Tsuji, S., & Cristia, A. (2013). Fifty years of infant vowel discrimination research: What have we learned? Journal of the Phonetic Society of Japan, 17(3), 1-11.
  • Van Berkum, J. J. A., De Goede, D., Van Alphen, P. M., Mulder, E. R., & Kerstholt, J. H. (2013). How robust is the language architecture? The case of mood. Frontiers in Psychology, 4: 505. doi:10.3389/fpsyg.2013.00505.

    Abstract

    In neurocognitive research on language, the processing principles of the system at hand are usually assumed to be relatively invariant. However, research on attention, memory, decision-making, and social judgment has shown that mood can substantially modulate how the brain processes information. For example, in a bad mood, people typically have a narrower focus of attention and rely less on heuristics. In the face of such pervasive mood effects elsewhere in the brain, it seems unlikely that language processing would remain untouched. In an EEG experiment, we manipulated the mood of participants just before they read texts that confirmed or disconfirmed verb-based expectations about who would be talked about next (e.g., that “David praised Linda because … ” would continue about Linda, not David), or that respected or violated a syntactic agreement rule (e.g., “The boys turns”). ERPs showed that mood had little effect on syntactic parsing, but did substantially affect referential anticipation: whereas readers anticipated information about a specific person when they were in a good mood, a bad mood completely abolished such anticipation. A behavioral follow-up experiment suggested that a bad mood did not interfere with verb-based expectations per se, but prevented readers from using that information rapidly enough to predict upcoming reference on the fly, as the sentence unfolds. In all, our results reveal that background mood, a rather unobtrusive affective state, selectively changes a crucial aspect of real-time language processing. This observation fits well with other observed interactions between language processing and affect (emotions, preferences, attitudes, mood), and more generally testifies to the importance of studying “cold” cognitive functions in relation to “hot” aspects of the brain.
  • Van Leeuwen, T. M., Hagoort, P., & Händel, B. F. (2013). Real color captures attention and overrides spatial cues in grapheme-color synesthetes but not in controls. Neuropsychologia, 51(10), 1802-1813. doi:10.1016/j.neuropsychologia.2013.06.024.

    Abstract

    Grapheme-color synesthetes perceive color when reading letters or digits. We investigated oscillatory brain signals of synesthetes vs. controls using magnetoencephalography. Brain oscillations specifically in the alpha band (∼10 Hz) have two interesting features: alpha has been linked to inhibitory processes and can act as a marker for attention. The possible role of reduced inhibition as an underlying cause of synesthesia, as well as the precise role of attention in synesthesia, is widely discussed. To assess alpha power effects due to synesthesia, synesthetes as well as matched controls viewed synesthesia-inducing graphemes, colored control graphemes, and non-colored control graphemes while brain activity was recorded. Subjects had to report a color change at the end of each trial, which allowed us to assess the strength of synesthesia in each synesthete. Since color (synesthetic or real) might allocate attention, we also included an attentional cue in our paradigm which could direct covert attention. In controls the attentional cue always caused a lateralization of alpha power with a contralateral decrease and ipsilateral alpha increase over occipital sensors. In synesthetes, however, the influence of the cue was overruled by color: independent of the attentional cue, alpha power decreased contralateral to the color (synesthetic or real). This indicates that in synesthetes color guides attention. This was confirmed by reaction time effects due to color, i.e. faster RTs for the color side independent of the cue. Finally, the stronger the observed color-dependent alpha lateralization, the stronger was the manifestation of synesthesia as measured by congruency effects of synesthetic colors on RTs. Behavioral and imaging results indicate that color induces a location-specific, automatic shift of attention towards color in synesthetes but not in controls. We hypothesize that this mechanism can facilitate coupling of grapheme and color during the development of synesthesia.
  • Wagensveld, B., Segers, E., Van Alphen, P. M., & Verhoeven, L. (2013). The role of lexical representations and phonological overlap in rhyme judgments of beginning, intermediate and advanced readers. Learning and Individual Differences, 23, 64-71. doi:10.1016/j.lindif.2012.09.007.

    Abstract

    Studies have shown that prereaders find globally similar non-rhyming pairs (i.e., bell–ball) difficult to judge. Although this effect has been explained as a result of ill-defined lexical representations, others have suggested that it is part of an innate tendency to respond to phonological overlap. In the present study we examined this effect over time. Beginning, intermediate and advanced readers were presented with a rhyme judgment task containing rhyming, phonologically similar, and unrelated non-rhyming pairs. To examine the role of lexical representations, participants were presented with both words and pseudowords. Outcomes showed that pseudoword processing was difficult for children but not for adults. The global similarity effect was present in both children and adults. The findings imply that holistic representations cannot explain the incapacity to ignore similarity relations during rhyming. Instead, the data provide more evidence for the idea that global similarity processing is part of a more fundamental innate phonological processing capacity.
  • Wagensveld, B., Van Alphen, P. M., Segers, E., Hagoort, P., & Verhoeven, L. (2013). The neural correlates of rhyme awareness in preliterate and literate children. Clinical Neurophysiology, 124, 1336-1345. doi:10.1016/j.clinph.2013.01.022.

    Abstract

    Objective: Most rhyme awareness assessments do not encompass measures of the global similarity effect (i.e., children who are able to perform simple rhyme judgments get confused when presented with globally similar non-rhyming pairs). The present study examines the neural nature of this effect by studying the N450 rhyme effect.

    Methods: Behavioral and electrophysiological responses of Dutch pre-literate kindergartners and literate second graders were recorded while they made rhyme judgments of word pairs in three conditions: phonologically rhyming (e.g., wijn-pijn), overlapping non-rhyming (e.g., pen-pijn), and unrelated non-rhyming pairs (e.g., boom-pijn).

    Results: Behaviorally, both groups had difficulty judging overlapping but not rhyming and unrelated pairs. The neural data of second graders showed that overlapping pairs were processed in a similar fashion as unrelated pairs; both showed a more negative deflection of the N450 component than rhyming items. Kindergartners did not show a typical N450 rhyme effect. However, some other interesting ERP differences were observed, indicating preliterates are sensitive to rhyme at a certain level.

    Significance: Rhyme judgments of globally similar items rely on the same process as rhyme judgments of rhyming and unrelated items. Therefore, incorporating a globally similar condition in rhyme assessments may lead to a more in-depth measure of early phonological awareness skills.

    Highlights: Behavioral and electrophysiological responses were recorded while (pre)literate children made rhyme judgments of rhyming, overlapping, and unrelated words. Behaviorally, both groups had difficulty judging overlapping pairs as non-rhyming, while overlapping and unrelated neural patterns were similar in literates. Preliterates show a different pattern, indicating a developing phonological system.
  • Wang, L., Bastiaansen, M. C. M., Yang, Y., & Hagoort, P. (2013). ERP evidence on the interaction between information structure and emotional salience of words. Cognitive, Affective and Behavioral Neuroscience, 13, 297-310. doi:10.3758/s13415-012-0146-2.

    Abstract

    Both emotional words and words focused by information structure can capture attention. This study examined the interplay between emotional salience and information structure in modulating attentional resources in the service of integrating emotional words into sentence context. Event-related potentials (ERPs) to affectively negative, neutral, and positive words, which were either focused or nonfocused in question–answer pairs, were evaluated during sentence comprehension. The results revealed an early negative effect (90–200 ms), a P2 effect, as well as an effect in the N400 time window, for both emotional salience and information structure. Moreover, an interaction between emotional salience and information structure occurred within the N400 time window over right posterior electrodes, showing that information structure influences the semantic integration only for neutral words, but not for emotional words. This might reflect the fact that the linguistic salience of emotional words can override the effect of information structure on the integration of words into context. The interaction provides evidence for attention–emotion interactions at a later stage of processing. In addition, the absence of interaction in the early time window suggests that the processing of emotional information is highly automatic and independent of context. The results suggest independent attention capture systems of emotional salience and information structure at the early stage but an interaction between them at a later stage, during the semantic integration of words.
  • Wang, L., Zhu, Z., Bastiaansen, M. C. M., Hagoort, P., & Yang, Y. (2013). Recognizing the emotional valence of names: An ERP study. Brain and Language, 125, 118-127. doi:10.1016/j.bandl.2013.01.006.

    Abstract

    Unlike common nouns, person names refer to unique entities and generally have a referring function. We used event-related potentials to investigate the time course of identifying the emotional meaning of nouns and names. The emotional valence of names and nouns were manipulated separately. The results show early N1 effects in response to emotional valence only for nouns. This might reflect automatic attention directed towards emotional stimuli. The absence of such an effect for names supports the notion that the emotional meaning carried by names is accessed after word recognition and person identification. In addition, both names with negative valence and emotional nouns elicited late positive effects, which have been associated with evaluation of emotional significance. This positive effect started earlier for nouns than for names, but with similar durations. Our results suggest that distinct neural systems are involved in the retrieval of names’ and nouns’ emotional meaning.
  • Wang, L., & Chu, M. (2013). The role of beat gesture and pitch accent in semantic processing: An ERP study. Neuropsychologia, 51(13), 2847-2855. doi:10.1016/j.neuropsychologia.2013.09.027.

    Abstract

    The present study investigated whether and how beat gesture (small baton-like hand movements used to emphasize information in speech) influences semantic processing as well as its interaction with pitch accent during speech comprehension. Event-related potentials were recorded as participants watched videos of a person gesturing and speaking simultaneously. The critical words in the spoken sentences were accompanied by a beat gesture, a control hand movement, or no hand movement, and were expressed either with or without pitch accent. We found that both beat gesture and control hand movement induced smaller negativities in the N400 time window than when no hand movement was presented. The reduced N400s indicate that both beat gesture and control movement facilitated the semantic integration of the critical word into the sentence context. In addition, the words accompanied by beat gesture elicited smaller negativities in the N400 time window than those accompanied by control hand movement over right posterior electrodes, suggesting that beat gesture has a unique role for enhancing semantic processing during speech comprehension. Finally, no interaction was observed between beat gesture and pitch accent, indicating that they affect semantic processing independently.
  • Whitmarsh, S., Udden, J., Barendregt, H., & Petersson, K. M. (2013). Mindfulness reduces habitual responding based on implicit knowledge: Evidence from artificial grammar learning. Consciousness and Cognition, 22(3), 833-845. doi:10.1016/j.concog.2013.05.007.

    Abstract

    Participants were unknowingly exposed to complex regularities in a working memory task. The existence of implicit knowledge was subsequently inferred from a preference for stimuli with similar grammatical regularities. Several affective traits have been shown to influence artificial grammar learning (AGL) performance positively, many of which are related to a tendency for automatic responding. We therefore tested whether the mindfulness trait predicted a reduction of grammatically congruent preferences, and used emotional primes to explore the influence of affect. Mindfulness was shown to correlate negatively with grammatically congruent responses. Negative primes were shown to result in faster and more negative evaluations. We conclude that grammatically congruent preference ratings rely on habitual responses, and that our findings provide empirical evidence for the non-reactive disposition of the mindfulness trait.
  • Willems, R. M. (2013). Can literary studies contribute to cognitive neuroscience? Journal of Literary Semantics, 42(2), 217-222. doi:10.1515/jls-2013-0011.
