Publications

  • Bakker-Marshall, I., Takashima, A., Schoffelen, J.-M., Van Hell, J. G., Janzen, G., & McQueen, J. M. (2018). Theta-band oscillations in the middle temporal gyrus reflect novel word consolidation. Journal of Cognitive Neuroscience, 30(5), 621-633. doi:10.1162/jocn_a_01240.

    Abstract

    Like many other types of memory formation, novel word learning benefits from an offline consolidation period after the initial encoding phase. A previous EEG study has shown that retrieval of novel words elicited more word-like-induced electrophysiological brain activity in the theta band after consolidation [Bakker, I., Takashima, A., van Hell, J. G., Janzen, G., & McQueen, J. M. Changes in theta and beta oscillations as signatures of novel word consolidation. Journal of Cognitive Neuroscience, 27, 1286–1297, 2015]. This suggests that theta-band oscillations play a role in lexicalization, but it has not been demonstrated that this effect is directly caused by the formation of lexical representations. This study used magnetoencephalography to localize the theta consolidation effect to the left posterior middle temporal gyrus (pMTG), a region known to be involved in lexical storage. Both untrained novel words and words learned immediately before test elicited lower theta power during retrieval than existing words in this region. After a 24-hr consolidation period, the difference between novel and existing words decreased significantly, most strongly in the left pMTG. The magnitude of the decrease after consolidation correlated with an increase in behavioral competition effects between novel words and existing words with similar spelling, reflecting functional integration into the mental lexicon. These results thus provide new evidence that consolidation aids the development of lexical representations mediated by the left pMTG. Theta synchronization may enable lexical access by facilitating the simultaneous activation of distributed semantic, phonological, and orthographic representations that are bound together in the pMTG.
  • Berkers, R. M. W. J., Ekman, M., van Dongen, E. V., Takashima, A., Barth, M., Paller, K. A., & Fernández, G. (2018). Cued reactivation during slow-wave sleep induces brain connectivity changes related to memory stabilization. Scientific Reports, 8: 16958. doi:10.1038/s41598-018-35287-6.

    Abstract

    Memory reprocessing following acquisition enhances memory consolidation. Specifically, neural activity during encoding is thought to be ‘replayed’ during subsequent slow-wave sleep. Such memory replay is thought to contribute to the functional reorganization of neural memory traces. In particular, memory replay may facilitate the exchange of information across brain regions by inducing a reconfiguration of connectivity across the brain. Memory reactivation can be induced by external cues through a procedure known as “targeted memory reactivation”. Here, we analysed data from a published study with auditory cues used to reactivate visual object-location memories during slow-wave sleep. We characterized effects of memory reactivation on brain network connectivity using graph-theory. We found that cue presentation during slow-wave sleep increased global network integration of occipital cortex, a visual region that was also active during retrieval of object locations. Although cueing did not have an overall beneficial effect on the retention of cued versus uncued associations, individual differences in overnight memory stabilization were related to enhanced network integration of occipital cortex. Furthermore, occipital cortex displayed enhanced connectivity with mnemonic regions, namely the hippocampus, parahippocampal gyrus, thalamus and medial prefrontal cortex during cue sound presentation. Together, these results suggest a neural mechanism where cue-induced replay during sleep increases integration of task-relevant perceptual regions with mnemonic regions. This cross-regional integration may be instrumental for the consolidation and long-term storage of enduring memories.

    Additional information

    41598_2018_35287_MOESM1_ESM.doc
  • Dai, B., Chen, C., Long, Y., Zheng, L., Zhao, H., Bai, X., Liu, W., Zhang, Y., Liu, L., Guo, T., Ding, G., & Lu, C. (2018). Neural mechanisms for selectively tuning into the target speaker in a naturalistic noisy situation. Nature Communications, 9: 2405. doi:10.1038/s41467-018-04819-z.

    Abstract

    The neural mechanism for selectively tuning in to a target speaker while tuning out the others in a multi-speaker situation (i.e., the cocktail-party effect) remains elusive. Here we addressed this issue by measuring brain activity simultaneously from a listener and from multiple speakers while they were involved in naturalistic conversations. Results consistently show selectively enhanced interpersonal neural synchronization (INS) between the listener and the attended speaker at left temporal–parietal junction, compared with that between the listener and the unattended speaker across different multi-speaker situations. Moreover, INS increases significantly prior to the occurrence of verbal responses, and even when the listener’s brain activity precedes that of the speaker. The INS increase is independent of brain-to-speech synchronization in both the anatomical location and frequency range. These findings suggest that INS underlies the selective process in a multi-speaker situation through neural predictions at the content level but not the sensory level of speech.

    Additional information

    Dai_etal_2018_sup.pdf
  • Degand, L., & Van Bergen, G. (2018). Discourse markers as turn-transition devices: Evidence from speech and instant messaging. Discourse Processes, 55, 47-71. doi:10.1080/0163853X.2016.1198136.

    Abstract

    In this article we investigate the relation between discourse markers and turn-transition strategies in face-to-face conversations and Instant Messaging (IM), that is, unplanned, real-time, text-based, computer-mediated communication. By means of a quantitative corpus study of utterances containing a discourse marker, we show that utterance-final discourse markers are used more often in IM than in face-to-face conversations. Moreover, utterance-final discourse markers are shown to occur more often at points of turn-transition compared with points of turn-maintenance in both types of conversation. From our results we conclude that the discourse markers in utterance-final position can function as a turn-transition mechanism, signaling that the turn is over and the floor is open to the hearer. We argue that this linguistic turn-taking strategy is essentially similar in face-to-face and IM communication. Our results add to the evidence that communication in IM is more like speech than like writing.
  • Duarte, R., Uhlmann, M., Van den Broek, D., Fitz, H., Petersson, K. M., & Morrison, A. (2018). Encoding symbolic sequences with spiking neural reservoirs. In Proceedings of the 2018 International Joint Conference on Neural Networks (IJCNN). doi:10.1109/IJCNN.2018.8489114.

    Abstract

Biologically inspired spiking networks are an important tool to study the nature of computation and cognition in neural systems. In this work, we investigate the representational capacity of spiking networks engaged in an identity mapping task. We compare two schemes for encoding symbolic input, one in which input is injected as a direct current and one where input is delivered as a spatio-temporal spike pattern. We test the ability of networks to discriminate their input as a function of the number of distinct input symbols. We also compare performance using either membrane potentials or filtered spike trains as state variables. Furthermore, we investigate how the circuit behavior depends on the balance between excitation and inhibition, and the degree of synchrony and regularity in its internal dynamics. Finally, we compare different linear methods of decoding population activity onto desired target labels. Overall, our results suggest that even this simple mapping task is strongly influenced by design choices on input encoding, state variables, circuit characteristics and decoding methods, and these factors can interact in complex ways. This work highlights the importance of constraining computational network models of behavior by available neurobiological evidence.
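
    Illustrative sketch

    The encoding and readout comparisons described above can be miniaturized. The Python sketch below is not the authors' spiking (NEST) implementation but a rate-based reservoir analogue: one-hot symbolic input drives a fixed random recurrent network, and a least-squares linear readout attempts the same identity-mapping task. All parameter values and names are illustrative assumptions.

        # Minimal rate-based reservoir sketch of the identity-mapping task
        # (the paper uses spiking networks; this is only a toy analogue).
        import numpy as np

        rng = np.random.default_rng(0)
        n_symbols, n_units, T = 8, 200, 2000

        # Random symbol sequence, one-hot encoded symbolic input.
        seq = rng.integers(0, n_symbols, T)
        U = np.eye(n_symbols)[seq]                      # T x n_symbols

        # Fixed random input and recurrent weights; spectral radius < 1.
        W_in = rng.normal(0, 1.0, (n_units, n_symbols))
        W = rng.normal(0, 1.0, (n_units, n_units))
        W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))

        # Run the reservoir and collect its state variables over time.
        X = np.zeros((T, n_units))
        x = np.zeros(n_units)
        for t in range(T):
            x = np.tanh(W @ x + W_in @ U[t])
            X[t] = x

        # Linear readout decoding the current symbol from population state.
        W_out, *_ = np.linalg.lstsq(X, U, rcond=None)
        acc = np.mean(np.argmax(X @ W_out, axis=1) == seq)
        print(f"identity-mapping accuracy: {acc:.2f}")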
  • Eekhof, L. S., Eerland, A., & Willems, R. M. (2018). Readers’ insensitivity to tense revealed: No differences in mental simulation during reading of present and past tense stories. Collabra: Psychology, 4(1): 16. doi:10.1525/collabra.121.

    Abstract

    While the importance of mental simulation during literary reading has long been recognized, we know little about the factors that determine when, what, and how much readers mentally simulate. Here we investigate the influence of a specific text characteristic, namely verb tense (present vs. past), on mental simulation during literary reading. Verbs usually denote the actions and events that take place in narratives and hence it is hypothesized that verb tense will influence the amount of mental simulation elicited in readers. Although the present tense is traditionally considered to be more “vivid”, this study is one of the first to experimentally assess this claim. We recorded eye-movements while subjects read stories in the past or present tense and collected data regarding self-reported levels of mental simulation, transportation and appreciation. We found no influence of tense on any of the offline measures. The eye-tracking data showed a slightly more complex pattern. Although we did not find a main effect of sensorimotor simulation content on reading times, we were able to link the degree to which subjects slowed down when reading simulation eliciting content to offline measures of attention and transportation, but this effect did not interact with the tense of the story. Unexpectedly, we found a main effect of tense on reading times per word, with past tense stories eliciting longer first fixation durations and gaze durations. However, we were unable to link this effect to any of the offline measures. In sum, this study suggests that tense does not play a substantial role in the process of mental simulation elicited by literary stories.

    Additional information

    Data Accessibility
  • Eichert, N., Peeters, D., & Hagoort, P. (2018). Language-driven anticipatory eye movements in virtual reality. Behavior Research Methods, 50(3), 1102-1115. doi:10.3758/s13428-017-0929-z.

    Abstract

    Predictive language processing is often studied by measuring eye movements as participants look at objects on a computer screen while they listen to spoken sentences. The use of this variant of the visual world paradigm has shown that information encountered by a listener at a spoken verb can give rise to anticipatory eye movements to a target object, which is taken to indicate that people predict upcoming words. The ecological validity of such findings remains questionable, however, because these computer experiments used two-dimensional (2D) stimuli that are mere abstractions of real world objects. Here we present a visual world paradigm study in a three-dimensional (3D) immersive virtual reality environment. Despite significant changes in the stimulus material and the different mode of stimulus presentation, language-mediated anticipatory eye movements were observed. These findings thus indicate prediction of upcoming words in language comprehension in a more naturalistic setting where natural depth cues are preserved. Moreover, the results confirm the feasibility of using eye-tracking in rich and multimodal 3D virtual environments.

    Additional information

    13428_2017_929_MOESM1_ESM.docx
  • Ergin, R., Meir, I., Ilkbasaran, D., Padden, C., & Jackendoff, R. (2018). The development of argument structure in Central Taurus Sign Language. Sign Language Studies, 18(4), 612-639. doi:10.1353/sls.2018.0018.

    Abstract

One of the fundamental issues for a language is its capacity to express argument structure unambiguously. This study presents evidence for the emergence and the incremental development of these basic mechanisms in a newly developing language, Central Taurus Sign Language. Our analyses identify universal patterns in both the emergence and development of these mechanisms and in language-specific trajectories.
  • Ergin, R., Senghas, A., Jackendoff, R., & Gleitman, L. (2018). Structural cues for symmetry, asymmetry, and non-symmetry in Central Taurus Sign Language. In C. Cuskley, M. Flaherty, H. Little, L. McCrohon, A. Ravignani, & T. Verhoef (Eds.), Proceedings of the 12th International Conference on the Evolution of Language (EVOLANG XII) (pp. 104-106). Toruń, Poland: NCU Press. doi:10.12775/3991-1.025.
  • Flecken, M., & Von Stutterheim, C. (2018). Sprache und Kognition: Sprachvergleichende und lernersprachliche Untersuchungen zur Ereigniskonzeptualisierung. In S. Schimke, & H. Hopp (Eds.), Sprachverarbeitung im Zweitspracherwerb (pp. 325-356). Berlin: De Gruyter. doi:10.1515/9783110456356-014.
  • Francisco, A. A., Takashima, A., McQueen, J. M., Van den Bunt, M., Jesse, A., & Groen, M. A. (2018). Adult dyslexic readers benefit less from visual input during audiovisual speech processing: fMRI evidence. Neuropsychologia, 117, 454-471. doi:10.1016/j.neuropsychologia.2018.07.009.

    Abstract

    The aim of the present fMRI study was to investigate whether typical and dyslexic adult readers differed in the neural correlates of audiovisual speech processing. We tested for Blood Oxygen-Level Dependent (BOLD) activity differences between these two groups in a 1-back task, as they processed written (word, illegal consonant strings) and spoken (auditory, visual and audiovisual) stimuli. When processing written stimuli, dyslexic readers showed reduced activity in the supramarginal gyrus, a region suggested to play an important role in phonological processing, but only when they processed strings of consonants, not when they read words. During the speech perception tasks, dyslexic readers were only slower than typical readers in their behavioral responses in the visual speech condition. Additionally, dyslexic readers presented reduced neural activation in the auditory, the visual, and the audiovisual speech conditions. The groups also differed in terms of superadditivity, with dyslexic readers showing decreased neural activation in the regions of interest. An additional analysis focusing on vision-related processing during the audiovisual condition showed diminished activation for the dyslexic readers in a fusiform gyrus cluster. Our results thus suggest that there are differences in audiovisual speech processing between dyslexic and normal readers. These differences might be explained by difficulties in processing the unisensory components of audiovisual speech, more specifically, dyslexic readers may benefit less from visual information during audiovisual speech processing than typical readers. Given that visual speech processing supports the development of phonological skills fundamental in reading, differences in processing of visual speech could contribute to differences in reading ability between typical and dyslexic readers.
  • Franken, M. K. (2018). Listening for speaking: Investigations of the relationship between speech perception and production. PhD Thesis, Radboud University, Nijmegen.

    Abstract

    Speaking and listening are complex tasks that we perform on a daily basis, almost without conscious effort. Interestingly, speaking almost never occurs without listening: whenever we speak, we at least hear our own speech. The research in this thesis is concerned with how the perception of our own speech influences our speaking behavior. We show that unconsciously, we actively monitor this auditory feedback of our own speech. This way, we can efficiently take action and adapt articulation when an error occurs and auditory feedback does not correspond to our expectation. Processing the auditory feedback of our speech does not, however, automatically affect speech production. It is subject to a number of constraints. For example, we do not just track auditory feedback, but also its consistency. If auditory feedback is more consistent over time, it has a stronger influence on speech production. In addition, we investigated how auditory feedback during speech is processed in the brain, using magnetoencephalography (MEG). The results suggest the involvement of a broad cortical network including both auditory and motor-related regions. This is consistent with the view that the auditory center of the brain is involved in comparing auditory feedback to our expectation of auditory feedback. If this comparison yields a mismatch, motor-related regions of the brain can be recruited to alter the ongoing articulations.

    Additional information

    full text via Radboud Repository
  • Franken, M. K., Acheson, D. J., McQueen, J. M., Hagoort, P., & Eisner, F. (2018). Opposing and following responses in sensorimotor speech control: Why responses go both ways. Psychonomic Bulletin & Review, 25(4), 1458-1467. doi:10.3758/s13423-018-1494-x.

    Abstract

    When talking, speakers continuously monitor and use the auditory feedback of their own voice to control and inform speech production processes. When speakers are provided with auditory feedback that is perturbed in real time, most of them compensate for this by opposing the feedback perturbation. But some speakers follow the perturbation. In the current study, we investigated whether the state of the speech production system at perturbation onset may determine what type of response (opposing or following) is given. The results suggest that whether a perturbation-related response is opposing or following depends on ongoing fluctuations of the production system: It initially responds by doing the opposite of what it was doing. This effect and the non-trivial proportion of following responses suggest that current production models are inadequate: They need to account for why responses to unexpected sensory feedback depend on the production-system’s state at the time of perturbation.
  • Franken, M. K., Eisner, F., Acheson, D. J., McQueen, J. M., Hagoort, P., & Schoffelen, J.-M. (2018). Self-monitoring in the cerebral cortex: Neural responses to pitch-perturbed auditory feedback during speech production. NeuroImage, 179, 326-336. doi:10.1016/j.neuroimage.2018.06.061.

    Abstract

Speaking is a complex motor skill which requires near instantaneous integration of sensory and motor-related information. Current theory hypothesizes a complex interplay between motor and auditory processes during speech production, involving the online comparison of the speech output with an internally generated forward model. To examine the neural correlates of this intricate interplay between sensory and motor processes, the current study uses altered auditory feedback (AAF) in combination with magnetoencephalography (MEG). Participants vocalized the vowel /e/ and heard auditory feedback that was temporarily pitch-shifted by only 25 cents, while neural activity was recorded with MEG. As a control condition, participants also heard the recordings of the same auditory feedback that they heard in the first half of the experiment, now without vocalizing. The participants were not aware of any perturbation of the auditory feedback. We found that auditory cortical areas responded more strongly to the pitch shifts during vocalization. In addition, auditory feedback perturbation resulted in spectral power increases in the θ and lower β bands, predominantly in sensorimotor areas. These results are in line with current models of speech production, suggesting auditory cortical areas are involved in an active comparison between a forward model's prediction and the actual sensory input. Subsequently, these areas interact with motor areas to generate a motor response. Furthermore, the results suggest that θ and β power increases support auditory-motor interaction, motor error detection and/or sensory prediction processing.
  • De Groot, A. M. B., & Hagoort, P. (Eds.). (2018). Research methods in psycholinguistics and the neurobiology of language: A practical guide. Oxford: Wiley.
  • Hagoort, P. (2018). Prerequisites for an evolutionary stance on the neurobiology of language. Current Opinion in Behavioral Sciences, 21, 191-194. doi:10.1016/j.cobeha.2018.05.012.
  • Hahn, L. E., Benders, T., Snijders, T. M., & Fikkert, P. (2018). Infants' sensitivity to rhyme in songs. Infant Behavior and Development, 52, 130-139. doi:10.1016/j.infbeh.2018.07.002.

    Abstract

    Children’s songs often contain rhyming words at phrase endings. In this study, we investigated whether infants can already recognize this phonological pattern in songs. Earlier studies using lists of spoken words were equivocal on infants’ spontaneous processing of rhymes (Hayes, Slater, & Brown, 2000; Jusczyk, Goodman, & Baumann, 1999). Songs, however, constitute an ecologically valid rhyming stimulus, which could allow for spontaneous processing of this phonological pattern in infants. Novel children’s songs with rhyming and non-rhyming lyrics using pseudo-words were presented to 35 9-month-old Dutch infants using the Headturn Preference Procedure. Infants on average listened longer to the non-rhyming songs, with around half of the infants however exhibiting a preference for the rhyming songs. These results highlight that infants have the processing abilities to benefit from their natural rhyming input for the development of their phonological abilities.
  • Hasson, U., Egidi, G., Marelli, M., & Willems, R. M. (2018). Grounding the neurobiology of language in first principles: The necessity of non-language-centric explanations for language comprehension. Cognition, 180(1), 135-157. doi:10.1016/j.cognition.2018.06.018.

    Abstract

    Recent decades have ushered in tremendous progress in understanding the neural basis of language. Most of our current knowledge on language and the brain, however, is derived from lab-based experiments that are far removed from everyday language use, and that are inspired by questions originating in linguistic and psycholinguistic contexts. In this paper we argue that in order to make progress, the field needs to shift its focus to understanding the neurobiology of naturalistic language comprehension. We present here a new conceptual framework for understanding the neurobiological organization of language comprehension. This framework is non-language-centered in the computational/neurobiological constructs it identifies, and focuses strongly on context. Our core arguments address three general issues: (i) the difficulty in extending language-centric explanations to discourse; (ii) the necessity of taking context as a serious topic of study, modeling it formally and acknowledging the limitations on external validity when studying language comprehension outside context; and (iii) the tenuous status of the language network as an explanatory construct. We argue that adopting this framework means that neurobiological studies of language will be less focused on identifying correlations between brain activity patterns and mechanisms postulated by psycholinguistic theories. Instead, they will be less self-referential and increasingly more inclined towards integration of language with other cognitive systems, ultimately doing more justice to the neurobiological organization of language and how it supports language as it is used in everyday life.
  • Hervais-Adelman, A., Egorova, N., & Golestani, N. (2018). Beyond bilingualism: Multilingual experience correlates with caudate volume. Brain Structure and Function, 223(7), 3495-3502. doi:10.1007/s00429-018-1695-0.

    Abstract

    The multilingual brain implements mechanisms that serve to select the appropriate language as a function of the communicative environment. Engaging these mechanisms on a regular basis appears to have consequences for brain structure and function. Studies have implicated the caudate nuclei as important nodes in polyglot language control processes, and have also shown structural differences in the caudate nuclei in bilingual compared to monolingual populations. However, the majority of published work has focused on the categorical differences between monolingual and bilingual individuals, and little is known about whether these findings extend to multilingual individuals, who have even greater language control demands. In the present paper, we present an analysis of the volume and morphology of the caudate nuclei, putamen, pallidum and thalami in 75 multilingual individuals who speak three or more languages. Volumetric analyses revealed a significant relationship between multilingual experience and right caudate volume, as well as a marginally significant relationship with left caudate volume. Vertex-wise analyses revealed a significant enlargement of dorsal and anterior portions of the left caudate nucleus, known to have connectivity with executive brain regions, as a function of multilingual expertise. These results suggest that multilingual expertise might exercise a continuous impact on brain structure, and that as additional languages beyond a second are acquired, the additional demands for linguistic and cognitive control result in modifications to brain structures associated with language management processes.
  • Hervais-Adelman, A., Moser-Mercer, B., & Golestani, N. (2018). Commentary: Broca pars triangularis constitutes a “hub” of the language-control network during simultaneous language translation. Frontiers in Human Neuroscience, 12: 22. doi:10.3389/fnhum.2018.00022.

    Abstract

    A commentary on
    Broca Pars Triangularis Constitutes a “Hub” of the Language-Control Network during Simultaneous Language Translation

    by Elmer, S. (2016). Front. Hum. Neurosci. 10:491. doi: 10.3389/fnhum.2016.00491

    Elmer (2016) conducted an fMRI investigation of “simultaneous language translation” in five participants. The article presents group and individual analyses of German-to-Italian and Italian-to-German translation, confined to a small set of anatomical regions previously reported to be involved in multilingual control. Here we take the opportunity to discuss concerns regarding certain aspects of the study.
  • Heyselaar, E., Mazaheri, A., Hagoort, P., & Segaert, K. (2018). Changes in alpha activity reveal that social opinion modulates attention allocation during face processing. NeuroImage, 174, 432-440. doi:10.1016/j.neuroimage.2018.03.034.

    Abstract

Participants’ performance differs when conducting a task in the presence of a secondary individual; moreover, the opinion the participant has of this individual also plays a role. Using EEG, we investigated how previous interactions with, and evaluations of, an avatar in virtual reality subsequently influenced attentional allocation to the face of that avatar. We focused on changes in the alpha activity as an index of attentional allocation. We found that the onset of an avatar’s face whom the participant had developed a rapport with induced greater alpha suppression. This suggests greater attentional resources are allocated to the interacted-with avatars. The evaluative ratings of the avatar induced a U-shaped change in alpha suppression, such that participants paid most attention when the avatar was rated as average. These results suggest that attentional allocation is an important element of how behaviour is altered in the presence of a secondary individual and is modulated by our opinion of that individual.

    Additional information

    mmc1.docx
  • Huettig, F., Lachmann, T., Reis, A., & Petersson, K. M. (2018). Distinguishing cause from effect - Many deficits associated with developmental dyslexia may be a consequence of reduced and suboptimal reading experience. Language, Cognition and Neuroscience, 33(3), 333-350. doi:10.1080/23273798.2017.1348528.

    Abstract

    The cause of developmental dyslexia is still unknown despite decades of intense research. Many causal explanations have been proposed, based on the range of impairments displayed by affected individuals. Here we draw attention to the fact that many of these impairments are also shown by illiterate individuals who have not received any or very little reading instruction. We suggest that this fact may not be coincidental and that the performance differences of both illiterates and individuals with dyslexia compared to literate controls are, to a substantial extent, secondary consequences of either reduced or suboptimal reading experience or a combination of both. The search for the primary causes of reading impairments will make progress if the consequences of quantitative and qualitative differences in reading experience are better taken into account and not mistaken for the causes of reading disorders. We close by providing four recommendations for future research.
  • Inacio, F., Faisca, L., Forkstam, C., Araujo, S., Bramao, I., Reis, A., & Petersson, K. M. (2018). Implicit sequence learning is preserved in dyslexic children. Annals of Dyslexia, 68(1), 1-14. doi:10.1007/s11881-018-0158-x.

    Abstract

This study investigates the implicit sequence learning abilities of dyslexic children using an artificial grammar learning task with an extended exposure period. Twenty children with developmental dyslexia participated in the study and were matched with two control groups—one matched for age and the other for reading skills. During 3 days, all participants performed an acquisition task, where they were exposed to colored geometrical forms sequences with an underlying grammatical structure. On the last day, after the acquisition task, participants were tested in a grammaticality classification task. Implicit sequence learning was present in dyslexic children, as well as in both control groups, and no differences between groups were observed. These results suggest that implicit learning deficits per se cannot explain the characteristic reading difficulties of the dyslexics.
  • Jacobs, A. M., & Willems, R. M. (2018). The fictive brain: Neurocognitive correlates of engagement in literature. Review of General Psychology, 22(2), 147-160. doi:10.1037/gpr0000106.

    Abstract

Fiction is vital to our being. Many people enjoy engaging with fiction every day. Here we focus on literary reading as 1 instance of fiction consumption from a cognitive neuroscience perspective. The brain processes which play a role in the mental construction of fiction worlds, and in the related engagement with fictional characters, remain largely unknown. The authors discuss the neurocognitive poetics model (Jacobs, 2015a) of literary reading specifying the likely neuronal correlates of several key processes in literary reading, namely inference and situation model building, immersion, mental simulation and imagery, figurative language and style, and the issue of distinguishing fact from fiction. An overview of recent work on these key processes is followed by a discussion of methodological challenges in studying the brain bases of fiction processing.
  • Kösem, A., Bosker, H. R., Takashima, A., Meyer, A. S., Jensen, O., & Hagoort, P. (2018). Neural entrainment determines the words we hear. Current Biology, 28, 2867-2875. doi:10.1016/j.cub.2018.07.023.

    Abstract

Low-frequency neural entrainment to rhythmic input has been hypothesized as a canonical mechanism that shapes sensory perception in time. Neural entrainment is deemed particularly relevant for speech analysis, as it would contribute to the extraction of discrete linguistic elements from continuous acoustic signals. However, its causal influence in speech perception has been difficult to establish. Here, we provide evidence that oscillations build temporal predictions about the duration of speech tokens that affect perception. Using magnetoencephalography (MEG), we studied neural dynamics during listening to sentences that changed in speech rate. We observed neural entrainment to preceding speech rhythms persisting for several cycles after the change in rate. The sustained entrainment was associated with changes in the perceived duration of the last word’s vowel, resulting in the perception of words with different meanings. These findings support oscillatory models of speech processing, suggesting that neural oscillations actively shape speech perception.
  • Lam, N. H. L., Hulten, A., Hagoort, P., & Schoffelen, J.-M. (2018). Robust neuronal oscillatory entrainment to speech displays individual variation in lateralisation. Language, Cognition and Neuroscience, 33(8), 943-954. doi:10.1080/23273798.2018.1437456.

    Abstract

    Neural oscillations may be instrumental for the tracking and segmentation of continuous speech. Earlier work has suggested that delta, theta and gamma oscillations entrain to the speech rhythm. We used magnetoencephalography and a large sample of 102 participants to investigate oscillatory entrainment to speech, and observed robust entrainment of delta and theta activity, and weak group-level gamma entrainment. We show that the peak frequency and the hemispheric lateralisation of the entrainment are subject to considerable individual variability. The first finding may support the involvement of intrinsic oscillations in entrainment, and the second finding suggests that there is no systematic default right-hemispheric bias for processing acoustic signals on a slow time scale. Although low frequency entrainment to speech is a robust phenomenon, the characteristics of entrainment vary across individuals, and this variation is important for understanding the underlying neural mechanisms of entrainment, as well as its functional significance.
  • Lewis, A. G., Schriefers, H., Bastiaansen, M., & Schoffelen, J.-M. (2018). Assessing the utility of frequency tagging for tracking memory-based reactivation of word representations. Scientific Reports, 8: 7897. doi:10.1038/s41598-018-26091-3.

    Abstract

    Reinstatement of memory-related neural activity measured with high temporal precision potentially provides a useful index for real-time monitoring of the timing of activation of memory content during cognitive processing. The utility of such an index extends to any situation where one is interested in the (relative) timing of activation of different sources of information in memory, a paradigm case of which is tracking lexical activation during language processing. Essential for this approach is that memory reinstatement effects are robust, so that their absence (in the average) definitively indicates that no lexical activation is present. We used electroencephalography to test the robustness of a reported subsequent memory finding involving reinstatement of frequency-specific entrained oscillatory brain activity during subsequent recognition. Participants learned lists of words presented on a background flickering at either 6 or 15 Hz to entrain a steady-state brain response. Target words subsequently presented on a non-flickering background that were correctly identified as previously seen exhibited reinstatement effects at both entrainment frequencies. Reliability of these statistical inferences was however critically dependent on the approach used for multiple comparisons correction. We conclude that effects are not robust enough to be used as a reliable index of lexical activation during language processing.

    Additional information

    Lewis_etal_2018sup.docx
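
    Illustrative sketch

    The abstract above stresses that the statistical conclusions hinged critically on the multiple-comparisons correction. As a toy illustration of that general point (not the authors' analysis; all data below are simulated and all numbers invented), the following Python sketch contrasts a Bonferroni threshold with a permutation max-statistic threshold over channels.

        # Toy demonstration of how inference can depend on the
        # multiple-comparisons correction: Bonferroni vs. permutation
        # max-statistic thresholds on simulated multi-channel data.
        import numpy as np
        from scipy.stats import t as t_dist

        rng = np.random.default_rng(1)
        n_subj, n_chan = 24, 60
        data = rng.normal(0, 1, (n_subj, n_chan))
        data[:, :5] += 0.6          # small true effect on 5 channels

        t_obs = data.mean(0) / (data.std(0, ddof=1) / np.sqrt(n_subj))

        # Permutation null: flip each subject's sign, keep the max |t|.
        max_null = []
        for _ in range(2000):
            flips = rng.choice([-1, 1], size=(n_subj, 1))
            perm = data * flips
            t_perm = perm.mean(0) / (perm.std(0, ddof=1) / np.sqrt(n_subj))
            max_null.append(np.max(np.abs(t_perm)))
        thresh_maxstat = np.quantile(max_null, 0.95)

        # Bonferroni threshold (two-sided, alpha = .05) for comparison.
        thresh_bonf = t_dist.ppf(1 - 0.025 / n_chan, df=n_subj - 1)

        print("significant (max-stat):  ", np.sum(np.abs(t_obs) > thresh_maxstat))
        print("significant (Bonferroni):", np.sum(np.abs(t_obs) > thresh_bonf))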
  • Lopopolo, A., Frank, S. L., Van den Bosch, A., Nijhof, A., & Willems, R. M. (2018). The Narrative Brain Dataset (NBD), an fMRI dataset for the study of natural language processing in the brain. In B. Devereux, E. Shutova, & C.-R. Huang (Eds.), Proceedings of LREC 2018 Workshop "Linguistic and Neuro-Cognitive Resources (LiNCR) (pp. 8-11). Paris: LREC.

    Abstract

We present the Narrative Brain Dataset, an fMRI dataset that was collected during spoken presentation of short excerpts of three stories in Dutch. Together with the brain imaging data, the dataset contains the written versions of the stimulation texts. The texts are accompanied with stochastic (perplexity and entropy) and semantic computational linguistic measures. The richness and unconstrained nature of the data allows the study of language processing in the brain in a more naturalistic setting than is common for fMRI studies. We hope that by making NBD available we serve the double purpose of providing useful neural data to researchers interested in natural language processing in the brain and to further stimulate data sharing in the field of neuroscience of language.
  • Manahova, M. E., Mostert, P., Kok, P., Schoffelen, J.-M., & De Lange, F. P. (2018). Stimulus familiarity and expectation jointly modulate neural activity in the visual ventral stream. Journal of Cognitive Neuroscience, 30(9), 1366-1377. doi:10.1162/jocn_a_01281.

    Abstract

    Prior knowledge about the visual world can change how a visual stimulus is processed. Two forms of prior knowledge are often distinguished: stimulus familiarity (i.e., whether a stimulus has been seen before) and stimulus expectation (i.e., whether a stimulus is expected to occur, based on the context). Neurophysiological studies in monkeys have shown suppression of spiking activity both for expected and for familiar items in object-selective inferotemporal cortex. It is an open question, however, if and how these types of knowledge interact in their modulatory effects on the sensory response. To address this issue and to examine whether previous findings generalize to noninvasively measured neural activity in humans, we separately manipulated stimulus familiarity and expectation while noninvasively recording human brain activity using magnetoencephalography. We observed independent suppression of neural activity by familiarity and expectation, specifically in the lateral occipital complex, the putative human homologue of monkey inferotemporal cortex. Familiarity also led to sharpened response dynamics, which was predominantly observed in early visual cortex. Together, these results show that distinct types of sensory knowledge jointly determine the amount of neural resources dedicated to object processing in the visual ventral stream.
  • Meyer, A. S., Alday, P. M., Decuyper, C., & Knudsen, B. (2018). Working together: Contributions of corpus analyses and experimental psycholinguistics to understanding conversation. Frontiers in Psychology, 9: 525. doi:10.3389/fpsyg.2018.00525.

    Abstract

    As conversation is the most important way of using language, linguists and psychologists should combine forces to investigate how interlocutors deal with the cognitive demands arising during conversation. Linguistic analyses of corpora of conversation are needed to understand the structure of conversations, and experimental work is indispensable for understanding the underlying cognitive processes. We argue that joint consideration of corpus and experimental data is most informative when the utterances elicited in a lab experiment match those extracted from a corpus in relevant ways. This requirement to compare like with like seems obvious but is not trivial to achieve. To illustrate this approach, we report two experiments where responses to polar (yes/no) questions were elicited in the lab and the response latencies were compared to gaps between polar questions and answers in a corpus of conversational speech. We found, as expected, that responses were given faster when they were easy to plan and planning could be initiated earlier than when they were harder to plan and planning was initiated later. Overall, in all but one condition, the latencies were longer than one would expect based on the analyses of corpus data. We discuss the implication of this partial match between the data sets and more generally how corpus and experimental data can best be combined in studies of conversation.

    Additional information

    Data_Sheet_1.pdf
  • Nieuwland, M. S., Politzer-Ahles, S., Heyselaar, E., Segaert, K., Darley, E., Kazanina, N., Von Grebmer Zu Wolfsthurn, S., Bartolozzi, F., Kogan, V., Ito, A., Mézière, D., Barr, D. J., Rousselet, G., Ferguson, H. J., Busch-Moreno, S., Fu, X., Tuomainen, J., Kulakova, E., Husband, E. M., Donaldson, D. I., Kohút, Z., Rueschemeyer, S.-A., & Huettig, F. (2018). Large-scale replication study reveals a limit on probabilistic prediction in language comprehension. eLife, 7: e33468. doi:10.7554/eLife.33468.

    Abstract

    Do people routinely pre-activate the meaning and even the phonological form of upcoming words? The most acclaimed evidence for phonological prediction comes from a 2005 Nature Neuroscience publication by DeLong, Urbach and Kutas, who observed a graded modulation of electrical brain potentials (N400) to nouns and preceding articles by the probability that people use a word to continue the sentence fragment (‘cloze’). In our direct replication study spanning 9 laboratories (N=334), pre-registered replication-analyses and exploratory Bayes factor analyses successfully replicated the noun-results but, crucially, not the article-results. Pre-registered single-trial analyses also yielded a statistically significant effect for the nouns but not the articles. Exploratory Bayesian single-trial analyses showed that the article-effect may be non-zero but is likely far smaller than originally reported and too small to observe without very large sample sizes. Our results do not support the view that readers routinely pre-activate the phonological form of predictable words.

    Additional information

    Data sets
  • Niso, G., Gorgolewski, K. J., Bock, E., Brooks, T. L., Flandin, G., Gramfort, A., Henson, R. N., Jas, M., Litvak, V., Moreau, J. T., Oostenveld, R., Schoffelen, J.-M., Tadel, F., Wexler, J., & Baillet, S. (2018). MEG-BIDS, the brain imaging data structure extended to magnetoencephalography. Scientific Data, 5: 180110. doi:10.1038/sdata.2018.110.

    Abstract

We present a significant extension of the Brain Imaging Data Structure (BIDS) to support the specific aspects of magnetoencephalography (MEG) data. MEG measures brain activity with millisecond temporal resolution and unique source imaging capabilities. So far, BIDS was a solution to organise magnetic resonance imaging (MRI) data. The nature and acquisition parameters of MRI and MEG data are strongly dissimilar. Although there is no standard data format for MEG, we propose MEG-BIDS as a principled solution to store, organise, process and share the multidimensional data volumes produced by the modality. The standard also includes well-defined metadata, to facilitate future data harmonisation and sharing efforts. This responds to unmet needs from the multimodal neuroimaging community and paves the way to further integration of other techniques in electrophysiology. MEG-BIDS builds on MRI-BIDS, extending BIDS to a multimodal data structure. We feature several data-analytics software that have adopted MEG-BIDS, and a diverse sample of open MEG-BIDS data resources available to everyone.
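
    Illustrative sketch

    As a concrete, hypothetical illustration of the kind of layout and sidecar metadata MEG-BIDS prescribes, the Python sketch below writes a one-subject skeleton. The field names follow our reading of the specification and the dataset is invented; consult the BIDS documentation for the authoritative, complete list.

        # Minimal sketch of an MEG-BIDS-style layout: one subject, one task.
        import json
        from pathlib import Path

        root = Path("my_dataset")               # hypothetical dataset root
        meg_dir = root / "sub-01" / "meg"
        meg_dir.mkdir(parents=True, exist_ok=True)

        # Dataset-level metadata.
        (root / "dataset_description.json").write_text(
            json.dumps({"Name": "ASSR pilot", "BIDSVersion": "1.1.0"}, indent=2))

        # Recording sidecar accompanying sub-01_task-rest_meg.<ext>.
        sidecar = {
            "TaskName": "rest",
            "SamplingFrequency": 1200,     # Hz
            "PowerLineFrequency": 50,      # Hz (60 in North America)
            "DewarPosition": "upright",
            "SoftwareFilters": "n/a",
            "DigitizedLandmarks": True,
            "DigitizedHeadPoints": True,
        }
        (meg_dir / "sub-01_task-rest_meg.json").write_text(
            json.dumps(sidecar, indent=2))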
  • Palva, J. M., Wang, S. H., Palva, S., Zhigalov, A., Monto, S., Brookes, M. J., & Schoffelen, J.-M. (2018). Ghost interactions in MEG/EEG source space: A note of caution on inter-areal coupling measures. NeuroImage, 173, 632-643. doi:10.1016/j.neuroimage.2018.02.032.

    Abstract

When combined with source modeling, magneto- (MEG) and electroencephalography (EEG) can be used to study long-range interactions among cortical processes non-invasively. Estimation of such inter-areal connectivity is nevertheless hindered by instantaneous field spread and volume conduction, which artificially introduce linear correlations and impair source separability in cortical current estimates. To overcome the inflating effects of linear source mixing inherent to standard interaction measures, alternative phase- and amplitude-correlation based connectivity measures, such as imaginary coherence and orthogonalized amplitude correlation have been proposed. Being by definition insensitive to zero-lag correlations, these techniques have become increasingly popular in the identification of correlations that cannot be attributed to field spread or volume conduction. We show here, however, that while these measures are immune to the direct effects of linear mixing, they may still reveal large numbers of spurious false positive connections through field spread in the vicinity of true interactions. This fundamental problem affects both region-of-interest-based analyses and all-to-all connectome mappings. Most importantly, beyond defining and illustrating the problem of spurious, or “ghost” interactions, we provide a rigorous quantification of this effect through extensive simulations. Additionally, we further show that signal mixing also significantly limits the separability of neuronal phase and amplitude correlations. We conclude that spurious correlations must be carefully considered in connectivity analyses in MEG/EEG source space even when using measures that are immune to zero-lag correlations.
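
    Illustrative sketch

    For readers unfamiliar with the measures under discussion, the Python sketch below computes one of them, the imaginary part of coherency, for two toy signals that share an instantaneous (zero-lag) source plus a lagged coupling. It illustrates the measure itself, not the paper's ghost-interaction simulations; all signal parameters are invented.

        # Imaginary coherency: insensitive to zero-lag (instantaneous) mixing
        # but sensitive to lagged coupling between two signals.
        import numpy as np
        from scipy.signal import csd, welch

        rng = np.random.default_rng(2)
        fs, n = 250, 250 * 60
        s = rng.normal(size=n)                         # common source
        x = s + 0.5 * rng.normal(size=n)               # zero-lag mixture of s
        y = np.roll(s, 10) + 0.5 * rng.normal(size=n)  # lagged copy of s

        f, Sxy = csd(x, y, fs=fs, nperseg=512)
        _, Sxx = welch(x, fs=fs, nperseg=512)
        _, Syy = welch(y, fs=fs, nperseg=512)

        coherency = Sxy / np.sqrt(Sxx * Syy)
        icoh = np.abs(np.imag(coherency))   # discards zero-lag correlations
        print("peak |imag(coherency)|:", icoh.max().round(2))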
  • Pascucci, D., Hervais-Adelman, A., & Plomp, G. (2018). Gating by induced α–γ asynchrony in selective attention. Human Brain Mapping, 39(10), 3854-3870. doi:10.1002/hbm.24216.

    Abstract

Visual selective attention operates through top–down mechanisms of signal enhancement and suppression, mediated by α-band oscillations. The effects of such top–down signals on local processing in primary visual cortex (V1) remain poorly understood. In this work, we characterize the interplay between large-scale interactions and local activity changes in V1 that orchestrates selective attention, using Granger-causality and phase-amplitude coupling (PAC) analysis of EEG source signals. The task required participants to either attend to or ignore oriented gratings. Results from time-varying, directed connectivity analysis revealed frequency-specific effects of attentional selection: bottom–up γ-band influences from visual areas increased rapidly in response to attended stimuli while distributed top–down α-band influences originated from parietal cortex in response to ignored stimuli. Importantly, the results revealed a critical interplay between top–down parietal signals and α–γ PAC in visual areas. Parietal α-band influences disrupted the α–γ coupling in visual cortex, which in turn reduced the amount of γ-band outflow from visual areas. Our results are a first demonstration of how directed interactions affect cross-frequency coupling in downstream areas depending on task demands. These findings suggest that parietal cortex realizes selective attention by disrupting cross-frequency coupling at target regions, which prevents them from propagating task-irrelevant information.
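
    Illustrative sketch

    The phase-amplitude coupling (PAC) quantity at issue can be illustrated in a few lines. The Python sketch below computes a Canolty-style mean-vector-length PAC between α phase and γ amplitude on simulated data; it is a generic illustration, not the authors' EEG source pipeline, and all filter settings and amplitudes are assumptions.

        # Mean-vector-length PAC between alpha phase and gamma amplitude.
        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert

        def bandpass(x, lo, hi, fs, order=4):
            b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
            return filtfilt(b, a, x)

        fs, n = 500, 500 * 20
        t = np.arange(n) / fs
        rng = np.random.default_rng(3)

        # Gamma bursts riding on the peaks of a 10 Hz alpha cycle.
        alpha = np.sin(2 * np.pi * 10 * t)
        gamma = (1 + alpha) * np.sin(2 * np.pi * 60 * t)
        x = alpha + 0.3 * gamma + 0.5 * rng.normal(size=n)

        phase = np.angle(hilbert(bandpass(x, 8, 12, fs)))
        amp = np.abs(hilbert(bandpass(x, 50, 70, fs)))
        pac = np.abs(np.mean(amp * np.exp(1j * phase)))
        print(f"alpha-gamma PAC (mean vector length): {pac:.3f}")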
  • Peeters, D. (2018). A standardized set of 3D-objects for virtual reality research and applications. Behavior Research Methods, 50(3), 1047-1054. doi:10.3758/s13428-017-0925-3.

    Abstract

    The use of immersive virtual reality as a research tool is rapidly increasing in numerous scientific disciplines. By combining ecological validity with strict experimental control, immersive virtual reality provides the potential to develop and test scientific theory in rich environments that closely resemble everyday settings. This article introduces the first standardized database of colored three-dimensional (3D) objects that can be used in virtual reality and augmented reality research and applications. The 147 objects have been normed for name agreement, image agreement, familiarity, visual complexity, and corresponding lexical characteristics of the modal object names. The availability of standardized 3D-objects for virtual reality research is important, as reaching valid theoretical conclusions critically hinges on the use of well controlled experimental stimuli. Sharing standardized 3D-objects across different virtual reality labs will allow for science to move forward more quickly.
  • Peeters, D., & Dijkstra, T. (2018). Sustained inhibition of the native language in bilingual language production: A virtual reality approach. Bilingualism: Language and Cognition, 21(5), 1035-1061. doi:10.1017/S1366728917000396.

    Abstract

    Bilinguals often switch languages as a function of the language background of their addressee. The control mechanisms supporting bilinguals' ability to select the contextually appropriate language are heavily debated. Here we present four experiments in which unbalanced bilinguals named pictures in their first language Dutch and their second language English in mixed and blocked contexts. Immersive virtual reality technology was used to increase the ecological validity of the cued language-switching paradigm. Behaviorally, we consistently observed symmetrical switch costs, reversed language dominance, and asymmetrical mixing costs. These findings indicate that unbalanced bilinguals apply sustained inhibition to their dominant L1 in mixed language settings. Consequent enhanced processing costs for the L1 in a mixed versus a blocked context were reflected by a sustained positive component in event-related potentials. Methodologically, the use of virtual reality opens up a wide range of possibilities to study language and communication in bilingual and other communicative settings.
  • Piai, V., Rommers, J., & Knight, R. T. (2018). Lesion evidence for a critical role of left posterior but not frontal areas in alpha–beta power decreases during context-driven word production. European Journal of Neuroscience, 48(7), 2622-2629. doi:10.1111/ejn.13695.

    Abstract

Different frequency bands in the electroencephalogram are postulated to support distinct language functions. Studies have suggested that alpha–beta power decreases may index word-retrieval processes. In context-driven word retrieval, participants hear lead-in sentences that either constrain the final word (‘He locked the door with the’) or not (‘She walked in here with the’). The last word is shown as a picture to be named. Previous studies have consistently found alpha–beta power decreases prior to picture onset for constrained relative to unconstrained sentences, localised to the left lateral-temporal and lateral-frontal lobes. However, the relative contribution of temporal versus frontal areas to alpha–beta power decreases is unknown. We recorded the electroencephalogram from patients with stroke lesions encompassing the left lateral-temporal and inferior-parietal regions or left lateral-frontal lobe and from matched controls. Individual participant analyses revealed a behavioural sentence context facilitation effect in all participants, except for in the two patients with extensive lesions to temporal and inferior parietal lobes. We replicated the alpha–beta power decreases prior to picture onset in all participants, except for in the same two patients with extensive posterior lesions. Thus, whereas posterior lesions eliminated the behavioural and oscillatory context effect, frontal lesions did not. Hierarchical clustering analyses of all patients’ lesion profiles, and behavioural and electrophysiological effects identified those two patients as having a unique combination of lesion distribution and context effects. These results indicate a critical role for the left lateral-temporal and inferior parietal lobes, but not frontal cortex, in generating the alpha–beta power decreases underlying context-driven word production.
  • Poletiek, F. H., Conway, C. M., Ellefson, M. R., Lai, J., Bocanegra, B. R., & Christiansen, M. H. (2018). Under what conditions can recursion be learned? Effects of starting small in artificial grammar learning of recursive structure. Cognitive Science, 42(8), 2855-2889. doi:10.1111/cogs.12685.

    Abstract

    It has been suggested that external and/or internal limitations paradoxically may lead to superior learning, that is, the concepts of starting small and less is more (Elman, 1993; Newport, 1990). In this paper, we explore the type of incremental ordering during training that might help learning, and what mechanism explains this facilitation. We report four artificial grammar learning experiments with human participants. In Experiments 1a and 1b we found a beneficial effect of starting small using two types of simple recursive grammars: right‐branching and center‐embedding, with recursive embedded clauses in fixed positions and fixed length. This effect was replicated in Experiment 2 (N = 100). In Experiment 3 and 4, we used a more complex center‐embedded grammar with recursive loops in variable positions, producing strings of variable length. When participants were presented an incremental ordering of training stimuli, as in natural language, they were better able to generalize their knowledge of simple units to more complex units when the training input “grew” according to structural complexity, compared to when it “grew” according to string length. Overall, the results suggest that starting small confers an advantage for learning complex center‐embedded structures when the input is organized according to structural complexity.
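
    Illustrative sketch

    To make the structural contrast concrete, the Python sketch below generates toy right-branching versus center-embedded strings with matched A-B dependencies at increasing depth, mimicking a structure-based "starting small" ordering. The vocabulary and depths are invented for illustration; they are not the authors' stimuli.

        # Toy generator contrasting the two recursive structures studied
        # above: right-branching vs. center-embedded dependency strings.
        import random

        PAIRS = [("a1", "b1"), ("a2", "b2"), ("a3", "b3")]  # dependent categories

        def right_branching(depth):
            # A1 B1 A2 B2 ... : each dependency closes immediately.
            items = [random.choice(PAIRS) for _ in range(depth)]
            return " ".join(f"{a} {b}" for a, b in items)

        def center_embedded(depth):
            # A1 A2 ... B2 B1 : dependencies close in reverse (nested) order.
            items = [random.choice(PAIRS) for _ in range(depth)]
            heads = [a for a, _ in items]
            tails = [b for _, b in reversed(items)]
            return " ".join(heads + tails)

        for d in (1, 2, 3):        # incremental, structure-based ordering
            print(d, "|", right_branching(d), "|", center_embedded(d))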
  • Popov, T., Jensen, O., & Schoffelen, J.-M. (2018). Dorsal and ventral cortices are coupled by cross-frequency interactions during working memory. NeuroImage, 178, 277-286. doi:10.1016/j.neuroimage.2018.05.054.

    Abstract

Oscillatory activity in the alpha and gamma bands is considered key in shaping functional brain architecture. Power increases in the high-frequency gamma band are typically reported in parallel to decreases in the low-frequency alpha band. However, their functional significance and in particular their interactions are not well understood. The present study shows that, in the context of an N-back working memory task, alpha power decreases in the dorsal visual stream are related to gamma power increases in early visual areas. Granger causality analysis revealed directed interregional interactions from dorsal to ventral stream areas, in accordance with task demands. Present results reveal a robust, behaviorally relevant, and architectonically decisive power-to-power relationship between alpha and gamma activity. This relationship suggests that anatomically distant power fluctuations in oscillatory activity can link cerebral network dynamics on a trial-by-trial basis during cognitive operations such as working memory.
  • Popov, T., Oostenveld, R., & Schoffelen, J.-M. (2018). FieldTrip made easy: An analysis protocol for group analysis of the auditory steady state brain response in time, frequency, and space. Frontiers in Neuroscience, 12: 711. doi:10.3389/fnins.2018.00711.

    Abstract

    The auditory steady state evoked response (ASSR) is a robust and frequently utilized phenomenon in psychophysiological research. It reflects the auditory cortical response to an amplitude-modulated constant carrier frequency signal. The present report provides a concrete example of a group analysis of the EEG data from 29 healthy human participants, recorded during an ASSR paradigm, using the FieldTrip toolbox. First, we demonstrate sensor-level analysis in the time domain, allowing for a description of the event-related potentials (ERPs), as well as their statistical evaluation. Second, frequency analysis is applied to describe the spectral characteristics of the ASSR, followed by group-level statistical analysis in the frequency domain. Third, we show how time- and frequency-domain analysis approaches can be combined in order to describe the temporal and spectral development of the ASSR. Finally, we demonstrate source reconstruction techniques to characterize the primary neural generators of the ASSR. Throughout, we pay special attention to explaining the design of the analysis pipeline for single subjects and for the group-level analysis. The pipeline presented here can be adjusted to accommodate other experimental paradigms and may serve as a template for similar analyses.
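    As a rough illustration of what an ASSR looks like in the frequency domain (not the published FieldTrip protocol, which is MATLAB-based), the sketch below simulates a noisy signal containing a steady-state component and recovers the assumed 40 Hz modulation frequency from its FFT spectrum; sampling rate, duration, and amplitudes are all made-up parameters.

    ```python
    # Minimal ASSR illustration: a steady-state component at the modulation
    # frequency shows up as a narrow spectral peak.
    import numpy as np

    fs, dur, f_mod = 1000, 10.0, 40.0          # sampling rate (Hz), duration (s)
    t = np.arange(0, dur, 1 / fs)

    # Synthetic "EEG": a small 40 Hz steady-state component buried in noise.
    rng = np.random.default_rng(1)
    signal = 0.5 * np.sin(2 * np.pi * f_mod * t) + rng.normal(0, 2.0, t.size)

    # Amplitude spectrum; the ASSR appears as a peak at f_mod.
    spectrum = np.abs(np.fft.rfft(signal)) / t.size
    freqs = np.fft.rfftfreq(t.size, 1 / fs)
    peak = freqs[np.argmax(spectrum[1:]) + 1]   # skip the DC bin
    print(f"largest spectral peak at {peak:.1f} Hz")
    ```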
  • Rommers, J., & Federmeier, K. D. (2018). Electrophysiological methods. In A. M. B. De Groot, & P. Hagoort (Eds.), Research methods in psycholinguistics and the neurobiology of language: A practical guide (pp. 247-265). Hoboken: Wiley.
  • Rommers, J., & Federmeier, K. D. (2018). Lingering expectations: A pseudo-repetition effect for words previously expected but not presented. NeuroImage, 183, 263-272. doi:10.1016/j.neuroimage.2018.08.023.

    Abstract

    Prediction can help support rapid language processing. However, it is unclear whether prediction has downstream consequences, beyond processing in the moment. In particular, when a prediction is disconfirmed, does it linger, or is it suppressed? This study manipulated whether words were actually seen or were only expected, and probed their fate in memory by presenting the words (again) a few sentences later. If disconfirmed predictions linger, subsequent processing of the previously expected (but never presented) word should be similar to actual word repetition. At initial presentation, electrophysiological signatures of prediction disconfirmation demonstrated that participants had formed expectations. Further downstream, relative to unseen words, repeated words elicited a strong N400 decrease, an enhanced late positive complex (LPC), and late alpha band power decreases. Critically, like repeated words, words previously expected but not presented also attenuated the N400. This “pseudo-repetition effect” suggests that disconfirmed predictions can linger at some stages of processing, and demonstrates that prediction has downstream consequences beyond rapid on-line processing.
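    The N400 comparison described above rests on a standard quantification: the mean ERP amplitude in a window around 300-500 ms. A hedged sketch of that computation on synthetic single-channel epochs follows; the condition amplitudes, sampling rate, and window are assumptions chosen only to mimic the reported ordering (unseen words most negative, repeated words least negative).

    ```python
    # Hedged sketch of N400 quantification: mean amplitude in a 300-500 ms
    # window, averaged over synthetic trials, compared across conditions.
    import numpy as np

    fs = 250                                   # assumed sampling rate (Hz)
    times = np.arange(-0.2, 0.8, 1 / fs)       # epoch from -200 to 800 ms
    rng = np.random.default_rng(2)

    def mean_n400(epochs):
        """Mean amplitude in the 300-500 ms N400 window, over all trials."""
        win = (times >= 0.3) & (times <= 0.5)
        return epochs[:, win].mean()

    def make_epochs(n400_amp, n=80):
        # Synthetic epochs (trials x samples): a Gaussian N400 bump at 400 ms
        # with the given amplitude, plus trial-by-trial noise.
        erp = n400_amp * np.exp(-((times - 0.4) ** 2) / (2 * 0.05 ** 2))
        return erp + rng.normal(0, 1.0, (n, times.size))

    conditions = {"unseen": -4.0, "repeated": -1.5, "expected-only": -2.5}
    for name, amp in conditions.items():
        print(f"{name:>13}: N400 window mean = {mean_n400(make_epochs(amp)):+.2f} µV")
    ```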
  • Seeliger, K., Fritsche, M., Güçlü, U., Schoenmakers, S., Schoffelen, J.-M., Bosch, S. E., & Van Gerven, M. A. J. (2018). Convolutional neural network-based encoding and decoding of visual object recognition in space and time. NeuroImage, 180, 253-266. doi:10.1016/j.neuroimage.2017.07.018.

    Abstract

    Representations learned by deep convolutional neural networks (CNNs) for object recognition are a widely investigated model of the processing hierarchy in the human visual system. Using functional magnetic resonance imaging, CNN representations of visual stimuli have previously been shown to correspond to processing stages in the ventral and dorsal streams of the visual system. Whether this correspondence between models and brain signals also holds for activity acquired at high temporal resolution has been explored less exhaustively. Here, we addressed this question by combining CNN-based encoding models with magnetoencephalography (MEG). Human participants passively viewed 1,000 images of objects while MEG signals were acquired. We modelled their high temporal resolution source-reconstructed cortical activity with CNNs, and observed a feed-forward sweep across the visual hierarchy between 75 and 200 ms after stimulus onset. This spatiotemporal cascade was captured by the network layer representations, where the increasingly abstract stimulus representation in the hierarchical network model was reflected in different parts of the visual cortex, following the visual ventral stream. We further validated the accuracy of our encoding model by decoding stimulus identity in a left-out validation set of viewed objects, achieving state-of-the-art decoding accuracy.
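    Encoding models of this kind typically fit a regularized linear mapping from network-layer activations to measured brain responses and evaluate it on held-out stimuli. The sketch below shows that general recipe with ridge regression on random placeholder data; the feature dimensionality, regularization strength, and the synthetic "MEG" response are assumptions, not the study's actual pipeline.

    ```python
    # Illustrative encoding-model sketch: ridge regression from CNN layer
    # activations to a cortical response, scored on held-out stimuli.
    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(3)
    n_images, n_features = 1000, 512           # stimuli x CNN layer units

    X = rng.normal(size=(n_images, n_features))            # layer activations
    w_true = rng.normal(size=n_features) * (rng.random(n_features) < 0.05)
    y = X @ w_true + rng.normal(0, 1.0, n_images)          # synthetic response

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                              random_state=0)
    model = Ridge(alpha=10.0).fit(X_tr, y_tr)
    r = np.corrcoef(model.predict(X_te), y_te)[0, 1]
    print(f"held-out encoding accuracy: r = {r:.2f}")
    ```

    Scoring by correlation on a left-out set, as here, is the usual safeguard against the regression simply memorizing the training stimuli.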
  • Segaert, K., Mazaheri, A., & Hagoort, P. (2018). Binding language: Structuring sentences through precisely timed oscillatory mechanisms. European Journal of Neuroscience, 48(7), 2651-2662. doi:10.1111/ejn.13816.

    Abstract

    Syntactic binding refers to combining words into larger structures. Using EEG, we investigated the neural processes involved in syntactic binding. Participants were auditorily presented two-word sentences (i.e. pronoun and pseudoverb such as ‘I grush’, ‘she grushes’, for which syntactic binding can take place) and wordlists (i.e. two pseudoverbs such as ‘pob grush’, ‘pob grushes’, for which no binding occurs). Comparing these two conditions, we targeted syntactic binding while minimizing contributions of semantic binding and of other cognitive processes such as working memory. We found a converging pattern of results using two distinct analysis approaches: one approach using frequency bands as defined in previous literature, and one data-driven approach in which we looked at the entire range of frequencies between 3-30 Hz without the constraints of pre-defined frequency bands. In the syntactic binding (relative to the wordlist) condition, a power increase was observed in the alpha and beta frequency range shortly preceding the presentation of the target word that requires binding, which was maximal over frontal-central electrodes. Our interpretation is that these signatures reflect that language comprehenders expect the need for binding to occur. Following the presentation of the target word in a syntactic binding context (relative to the wordlist condition), an increase in alpha power maximal over a left lateralized cluster of frontal-temporal electrodes was observed. We suggest that this alpha increase relates to syntactic binding taking place. Taken together, our findings suggest that increases in alpha and beta power are reflections of the distinct neural processes underlying syntactic binding.
  • Silva, S., Folia, V., Inácio, F., Castro, S. L., & Petersson, K. M. (2018). Modality effects in implicit artificial grammar learning: An EEG study. Brain Research, 1687, 50-59. doi:10.1016/j.brainres.2018.02.020.

    Abstract

    Recently, it has been proposed that sequence learning engages a combination of modality-specific operating networks and modality-independent computational principles. In the present study, we compared the behavioural and EEG outcomes of implicit artificial grammar learning in the visual vs. auditory modality. We controlled for the influence of surface characteristics of sequences (Associative Chunk Strength), thus focusing on the strictly structural aspects of sequence learning, and we adapted the paradigms to compensate for known frailties of the visual modality compared to audition (temporal presentation, fast presentation rate). The behavioural outcomes were similar across modalities. Favouring the idea of modality-specificity, ERPs in response to grammar violations differed in topography and latency (earlier and more anterior component in the visual modality), and ERPs in response to surface features emerged only in the auditory modality. In favour of modality-independence, we observed three common functional properties in the late ERPs of the two grammars: both were free of interactions between structural and surface influences, both were more extended in a grammaticality classification test than in a preference classification test, and both correlated positively and strongly with theta event-related-synchronization during baseline testing. Our findings support the idea of modality-specificity combined with modality-independence, and suggest that memory for visual vs. auditory sequences may largely contribute to cross-modal differences.
  • Sjerps, M. J., Zhang, C., & Peng, G. (2018). Lexical Tone is Perceived Relative to Locally Surrounding Context, Vowel Quality to Preceding Context. Journal of Experimental Psychology: Human Perception and Performance, 44(6), 914-924. doi:10.1037/xhp0000504.

    Abstract

    Important speech cues such as lexical tone and vowel quality are perceptually contrasted to the distribution of those same cues in surrounding contexts. However, it is unclear whether preceding and following contexts have similar influences, and to what extent those influences are modulated by the auditory history of previous trials. To investigate this, Cantonese participants labeled sounds from (a) a tone continuum (mid- to high-level), presented with a context that had raised or lowered F0 values and (b) a vowel quality continuum (/u/ to /o/), where the context had raised or lowered F1 values. Contexts with high or low F0/F1 were presented in separate blocks or intermixed in 1 block. Contexts were presented following (Experiment 1) or preceding the target continuum (Experiment 2). Contrastive effects were found for both tone and vowel quality (e.g., decreased F0 values in contexts lead to more high tone target judgments and vice versa). Importantly, however, lexical tone was only influenced by F0 in immediately preceding and following contexts. Vowel quality was only influenced by the F1 in preceding contexts, but this extended to contexts from preceding trials. Contextual influences on tone and vowel quality are qualitatively different, which has important implications for understanding the mechanism of context effects in speech perception.
  • Stolk, A., Griffin, S., Van der Meij, R., Dewar, C., Saez, I., Lin, J. J., Piantoni, G., Schoffelen, J.-M., Knight, R. T., & Oostenveld, R. (2018). Integrated analysis of anatomical and electrophysiological human intracranial data. Nature Protocols, 13, 1699-1723. doi:10.1038/s41596-018-0009-6.

    Abstract

    Human intracranial electroencephalography (iEEG) recordings provide data with much greater spatiotemporal precision than is possible from data obtained using scalp EEG, magnetoencephalography (MEG), or functional MRI. Until recently, the fusion of anatomical data (MRI and computed tomography (CT) images) with electrophysiological data and their subsequent analysis have required the use of technologically and conceptually challenging combinations of software. Here, we describe a comprehensive protocol that enables complex raw human iEEG data to be converted into more readily comprehensible illustrative representations. The protocol uses an open-source toolbox for electrophysiological data analysis (FieldTrip). This allows iEEG researchers to build on a continuously growing body of scriptable and reproducible analysis methods that, over the past decade, have been developed and used by a large research community. In this protocol, we describe how to analyze complex iEEG datasets by providing an intuitive and rapid approach that can handle both neuroanatomical information and large electrophysiological datasets. We provide a worked example using an example dataset. We also explain how to automate the protocol and adjust the settings to enable analysis of iEEG datasets with other characteristics. The protocol can be implemented by a graduate student or postdoctoral fellow with minimal MATLAB experience and takes approximately an hour to execute, excluding the automated cortical surface extraction.
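    One concrete step in any such anatomical-electrophysiological fusion is mapping electrode positions between coordinate systems. The toy example below applies a 4x4 affine to hypothetical electrode coordinates in homogeneous coordinates; the matrix and positions are invented for illustration (real affines come out of CT-MRI coregistration, e.g., as performed in FieldTrip).

    ```python
    # Toy illustration: transform electrode coordinates with a 4x4 affine.
    import numpy as np

    # Hypothetical electrode positions in CT voxel space (n x 3).
    elec_ct = np.array([[120.0, 84.0, 66.0],
                        [118.0, 90.0, 66.0],
                        [116.0, 96.0, 66.0]])

    # Assumed CT-to-MRI affine: scale voxels to mm and shift the origin.
    affine = np.array([[0.5, 0.0, 0.0, -60.0],
                       [0.0, 0.5, 0.0, -45.0],
                       [0.0, 0.0, 0.5, -30.0],
                       [0.0, 0.0, 0.0,   1.0]])

    # Apply in homogeneous coordinates and drop the homogeneous column.
    homog = np.c_[elec_ct, np.ones(len(elec_ct))]
    elec_mri = (affine @ homog.T).T[:, :3]
    print(elec_mri)
    ```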
  • Tan, Y., & Martin, R. C. (2018). Verbal short-term memory capacities and executive function in semantic and syntactic interference resolution during sentence comprehension: Evidence from aphasia. Neuropsychologia, 113, 111-125. doi:10.1016/j.neuropsychologia.2018.03.001.

    Abstract

    This study examined the role of verbal short-term memory (STM) and executive function (EF) underlying semantic and syntactic interference resolution during sentence comprehension for persons with aphasia (PWA) with varying degrees of STM and EF deficits. Semantic interference was manipulated by varying the semantic plausibility of the intervening NP as subject of the verb and syntactic interference was manipulated by varying whether the NP was another subject or an object. Nine PWA were assessed on sentence reading times and on comprehension question performance. PWA showed exaggerated semantic and syntactic interference effects relative to healthy age-matched control subjects. Importantly, correlational analyses showed that while answering comprehension questions, PWA’s semantic STM capacity related to their ability to resolve semantic but not syntactic interference. In contrast, PWA’s EF abilities related to their ability to resolve syntactic but not semantic interference. Phonological STM deficits were not related to the ability to resolve either type of interference. The results for semantic STM are consistent with prior findings indicating a role for semantic but not phonological STM in sentence comprehension, specifically with regard to maintaining semantic information prior to integration. The results for syntactic interference are consistent with the recent findings suggesting that EF is critical for syntactic processing.
  • Tromp, J., Peeters, D., Meyer, A. S., & Hagoort, P. (2018). The combined use of Virtual Reality and EEG to study language processing in naturalistic environments. Behavior Research Methods, 50(2), 862-869. doi:10.3758/s13428-017-0911-9.

    Abstract

    When we comprehend language, we often do this in rich settings in which we can use many cues to understand what someone is saying. However, it has traditionally been difficult to design experiments with rich three-dimensional contexts that resemble our everyday environments, while maintaining control over the linguistic and non-linguistic information that is available. Here we test the validity of combining electroencephalography (EEG) and Virtual Reality (VR) to overcome this problem. We recorded electrophysiological brain activity during language processing in a well-controlled three-dimensional virtual audiovisual environment. Participants were immersed in a virtual restaurant, while wearing EEG equipment. In the restaurant participants encountered virtual restaurant guests. Each guest was seated at a separate table with an object on it (e.g. a plate with salmon). The restaurant guest would then produce a sentence (e.g. “I just ordered this salmon.”). The noun in the spoken sentence could either match (“salmon”) or mismatch (“pasta”) with the object on the table, creating a situation in which the auditory information was either appropriate or inappropriate in the visual context. We observed a reliable N400 effect as a consequence of the mismatch. This finding validates the combined use of VR and EEG as a tool to study the neurophysiological mechanisms of everyday language comprehension in rich, ecologically valid settings.
  • Udden, J., & Männel, C. (2018). Artificial grammar learning and its neurobiology in relation to language processing and development. In S.-A. Rueschemeyer, & M. G. Gaskell (Eds.), The Oxford Handbook of Psycholinguistics (2nd ed., pp. 755-783). Oxford: Oxford University Press.

    Abstract

    The artificial grammar learning (AGL) paradigm enables systematic investigation of the acquisition of linguistically relevant structures. It is a paradigm of interest for language processing research, interfacing with theoretical linguistics, and for comparative research on language acquisition and evolution. This chapter presents a key for understanding major variants of the paradigm. An unbiased summary of neuroimaging findings of AGL is presented, using meta-analytic methods, pointing to the crucial involvement of the bilateral frontal operculum and regions in the right lateral hemisphere. Against a background of robust posterior temporal cortex involvement in processing complex syntax, the evidence for involvement of the posterior temporal cortex in AGL is reviewed. Infant AGL studies testing for neural substrates are reviewed, covering the acquisition of adjacent and non-adjacent dependencies as well as algebraic rules. The language acquisition data suggest that comparisons of learnability of complex grammars performed with adults may now also be possible with children.
  • Van den Broek, G., Takashima, A., Segers, E., & Verhoeven, L. (2018). Contextual Richness and Word Learning: Context Enhances Comprehension but Retrieval Enhances Retention. Language Learning, 68(2), 546-585. doi:10.1111/lang.12285.

    Abstract

    Learning new vocabulary from context typically requires multiple encounters during which word meaning can be retrieved from memory or inferred from context. We compared the effect of memory retrieval and context inferences on short‐ and long‐term retention in three experiments. Participants studied novel words and then practiced the words either in an uninformative context that required the retrieval of word meaning from memory (“I need the funguo”) or in an informative context from which word meaning could be inferred (“I want to unlock the door: I need the funguo”). The informative context facilitated word comprehension during practice. However, later recall of word form and meaning and word recognition in a new context were better after successful retrieval practice and retrieval practice with feedback than after context‐inference practice. These findings suggest benefits of retrieval during contextualized vocabulary learning whereby the uninformative context enhanced word retention by triggering memory retrieval.
  • Van Bergen, G., & Bosker, H. R. (2018). Linguistic expectation management in online discourse processing: An investigation of Dutch inderdaad 'indeed' and eigenlijk 'actually'. Journal of Memory and Language, 103, 191-209. doi:10.1016/j.jml.2018.08.004.

    Abstract

    Interpersonal discourse particles (DPs), such as Dutch inderdaad (≈‘indeed’) and eigenlijk (≈‘actually’) are highly frequent in everyday conversational interaction. Despite extensive theoretical descriptions of their polyfunctionality, little is known about how they are used by language comprehenders. In two visual world eye-tracking experiments involving an online dialogue completion task, we asked to what extent inderdaad, confirming an inferred expectation, and eigenlijk, contrasting with an inferred expectation, influence real-time understanding of dialogues. Answers in the dialogues contained a DP or a control adverb, and a critical discourse referent was replaced by a beep; participants chose the most likely dialogue completion by clicking on one of four referents in a display. Results show that listeners make rapid and fine-grained situation-specific inferences about the use of DPs, modulating their expectations about how the dialogue will unfold. Findings further specify and constrain theories about the conversation-managing function and polyfunctionality of DPs.
  • Van Campen, A. D., Kunert, R., Van den Wildenberg, W. P. M., & Ridderinkhof, K. R. (2018). Repetitive transcranial magnetic stimulation over inferior frontal cortex impairs the suppression (but not expression) of action impulses during action conflict. Psychophysiology, 55(3): e13003. doi:10.1111/psyp.13003.

    Abstract

    In the recent literature, the effects of noninvasive neurostimulation on cognitive functioning appear to lack consistency and replicability. We propose that such effects may be concealed unless dedicated, sensitive, and process-specific dependent measures are used. The expression and subsequent suppression of response capture are often studied using conflict tasks. Response-time distribution analyses have been argued to provide specific measures of the susceptibility to make fast impulsive response errors, as well as the proficiency of the selective suppression of these impulses. These measures of response capture and response inhibition are particularly sensitive to experimental manipulations and clinical deficiencies that are typically obfuscated in commonly used overall performance analyses. Recent work using structural and functional imaging techniques links these behavioral outcome measures to the integrity of frontostriatal networks. These studies suggest that the presupplementary motor area (pre-SMA) is linked to the susceptibility to response capture whereas the right inferior frontal cortex (rIFC) is associated with the selective suppression of action impulses. Here, we used repetitive transcranial magnetic stimulation (rTMS) to test the causal involvement of these two cortical areas in response capture and inhibition in the Simon task. Disruption of rIFC function specifically impaired selective suppression of conflicting action tendencies, whereas the anticipated increase of fast impulsive errors after perturbing pre-SMA function was not confirmed. These results provide a proof of principle of the notion that the selection of appropriate dependent measures is perhaps crucial to establish the effects of neurostimulation on specific cognitive functions.
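    The response-time distribution analyses referred to above typically take the form of a conditional accuracy function: accuracy computed per RT quantile bin, with low accuracy in the fastest bin taken as an index of response capture. The sketch below illustrates that computation under purely synthetic-data assumptions (the RT distribution and error model are made up to show the expected fast-error pattern).

    ```python
    # Hedged sketch of a conditional accuracy function (accuracy per RT bin).
    import numpy as np

    rng = np.random.default_rng(4)
    n = 1000
    rt = rng.gamma(shape=4.0, scale=80.0, size=n) + 200.0    # RTs in ms

    # Synthetic conflict-task pattern: fast responses are more error-prone.
    p_err = np.clip(0.45 - 0.0008 * (rt - 200.0), 0.02, 0.45)
    correct = rng.random(n) > p_err

    edges = np.quantile(rt, np.linspace(0, 1, 6))            # 5 RT bins
    for i in range(5):
        mask = (rt >= edges[i]) & (rt <= edges[i + 1])
        print(f"bin {i + 1}: mean RT {rt[mask].mean():5.0f} ms, "
              f"accuracy {correct[mask].mean():.2f}")
    ```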
  • Vanlangendonck, F., Takashima, A., Willems, R. M., & Hagoort, P. (2018). Distinguishable memory retrieval networks for collaboratively and non-collaboratively learned information. Neuropsychologia, 111, 123-132. doi:10.1016/j.neuropsychologia.2017.12.008.

    Abstract

    Learning often occurs in communicative and collaborative settings, yet almost all research into the neural basis of memory relies on participants encoding and retrieving information on their own. We investigated whether learning linguistic labels in a collaborative context at least partly relies on cognitively and neurally distinct representations, as compared to learning in an individual context. Healthy human participants learned labels for sets of abstract shapes in three different tasks. They came up with labels with another person in a collaborative communication task (collaborative condition), by themselves (individual condition), or were given pre-determined unrelated labels to learn by themselves (arbitrary condition). Immediately after learning, participants retrieved and produced the labels aloud during a communicative task in the MRI scanner. The fMRI results show that the retrieval of collaboratively generated labels as compared to individually learned labels engages brain regions involved in understanding others (mentalizing or theory of mind) and autobiographical memory, including the medial prefrontal cortex, the right temporoparietal junction and the precuneus. This study is the first to show that collaboration during encoding affects the neural networks involved in retrieval.
  • Vanlangendonck, F., Willems, R. M., & Hagoort, P. (2018). Taking common ground into account: Specifying the role of the mentalizing network in communicative language production. PLoS One, 13(10): e0202943. doi:10.1371/journal.pone.0202943.
  • Varma, S., Daselaar, S. M., Kessels, R. P. C., & Takashima, A. (2018). Promotion and suppression of autobiographical thinking differentially affect episodic memory consolidation. PLoS One, 13(8): e0201780. doi:10.1371/journal.pone.0201780.

    Abstract

    During a post-encoding delay period, the ongoing consolidation of recently acquired memories can suffer interference if the delay period involves encoding of new memories, or sensory stimulation tasks. Interestingly, two recent independent studies suggest that (i) autobiographical thinking also interferes markedly with ongoing consolidation of recently learned wordlist material, while (ii) a 2-Back task might not interfere with ongoing consolidation, possibly due to the suppression of autobiographical thinking. In this study, we directly compare these conditions against a quiet wakeful rest baseline to test whether the promotion (via familiar sound-cues) or suppression (via a 2-Back task) of autobiographical thinking during the post-encoding delay period can affect consolidation of studied wordlists in a negative or a positive way, respectively. Our results successfully replicate previous studies and show a significant interference effect (as compared to the rest condition) when learning is followed by familiar sound-cues that promote autobiographical thinking, whereas no interference effect is observed when learning is followed by the 2-Back task. Results from a post-experimental experience-sampling questionnaire further show significant differences in the degree of autobiographical thinking reported during the three post-encoding periods: highest in the presence of sound-cues and lowest during the 2-Back task. In conclusion, our results suggest that varying levels of autobiographical thought during the post-encoding period may modulate episodic memory consolidation.
  • Wang, L., Hagoort, P., & Jensen, O. (2018). Language prediction is reflected by coupling between frontal gamma and posterior alpha oscillations. Journal of Cognitive Neuroscience, 30(3), 432-447. doi:10.1162/jocn_a_01190.

    Abstract

    Readers and listeners actively predict upcoming words during language processing. These predictions might serve to support the unification of incoming words into sentence context and thus rely on interactions between areas in the language network. In the current magnetoencephalography study, participants read sentences that varied in contextual constraints so that the predictability of the sentence-final words was either high or low. Before the sentence-final words, we observed stronger alpha power suppression for the highly compared with low constraining sentences in the left inferior frontal cortex, left posterior temporal region, and visual word form area. Importantly, the temporal and visual word form area alpha power correlated negatively with left frontal gamma power for the highly constraining sentences. We suggest that the correlation between alpha power decrease in temporal language areas and left prefrontal gamma power reflects the initiation of an anticipatory unification process in the language network.
  • Wang, L., Hagoort, P., & Jensen, O. (2018). Gamma oscillatory activity related to language prediction. Journal of Cognitive Neuroscience, 30(8), 1075-1085. doi:10.1162/jocn_a_01275.

    Abstract

    Using magnetoencephalography, the current study examined gamma activity associated with language prediction. Participants read high- and low-constraining sentences in which the final word of the sentence was either expected or unexpected. Although no consistent gamma power difference induced by the sentence-final words was found between the expected and unexpected conditions, the correlation of gamma power during the prediction and activation intervals of the sentence-final words was larger when the presented words matched with the prediction compared with when the prediction was violated or when no prediction was available. This suggests that gamma magnitude relates to the match between predicted and perceived words. Moreover, the expected words induced activity with a slower gamma frequency compared with that induced by unexpected words. Overall, the current study establishes that prediction is related to gamma power correlations and a slowing of the gamma frequency.
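    The correlation logic here is straightforward to sketch: per-trial gamma power in a pre-word "prediction" interval is correlated with power in a post-word "activation" interval, separately for matching and mismatching words. The example below uses synthetic power values with the match/mismatch difference built in by construction; it illustrates the shape of the analysis, not the study's data.

    ```python
    # Minimal sketch: across-trial correlation of gamma power between a
    # pre-word and a post-word interval, per condition (synthetic data).
    import numpy as np

    rng = np.random.default_rng(5)
    n_trials = 150
    pre = rng.lognormal(0.0, 0.3, n_trials)      # gamma power before the word

    # Matching words: post-word power tracks pre-word power; mismatching
    # words: the two are unrelated (both by construction in this toy data).
    post_match = pre * np.exp(rng.normal(0.0, 0.15, n_trials))
    post_mismatch = rng.lognormal(0.0, 0.3, n_trials)

    for label, post in [("match", post_match), ("mismatch", post_mismatch)]:
        r = np.corrcoef(np.log(pre), np.log(post))[0, 1]
        print(f"{label:>8}: pre/post gamma power correlation r = {r:.2f}")
    ```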
  • Willems, R. M., & Cristia, A. (2018). Hemodynamic methods: fMRI and fNIRS. In A. M. B. De Groot, & P. Hagoort (Eds.), Research methods in psycholinguistics and the neurobiology of language: A practical guide (pp. 266-287). Hoboken: Wiley.
  • Willems, R. M., & Van Gerven, M. (2018). New fMRI methods for the study of language. In S.-A. Rueschemeyer, & M. G. Gaskell (Eds.), The Oxford Handbook of Psycholinguistics (2nd ed., pp. 975-991). Oxford: Oxford University Press.
  • Acheson, D. J. (2013). Signatures of response conflict monitoring in language production. Procedia - Social and Behavioral Sciences, 94, 214-215. doi:10.1016/j.sbspro.2013.09.106.
  • Acheson, D. J., & Hagoort, P. (2013). Stimulating the brain's language network: Syntactic ambiguity resolution after TMS to the IFG and MTG. Journal of Cognitive Neuroscience, 25(10), 1664-1677. doi:10.1162/jocn_a_00430.

    Abstract

    The posterior middle temporal gyrus (MTG) and inferior frontal gyrus (IFG) are two critical nodes of the brain's language network. Previous neuroimaging evidence has supported a dissociation in language comprehension in which parts of the MTG are involved in the retrieval of lexical syntactic information and the IFG is involved in unification operations that maintain, select, and integrate multiple sources of information over time. In the present investigation, we tested for causal evidence of this dissociation by modulating activity in IFG and MTG using an offline TMS procedure: continuous theta-burst stimulation. Lexical–syntactic retrieval was manipulated by using sentences with and without a temporary word-class (noun/verb) ambiguity (e.g., run). In one group of participants, TMS was applied to the IFG and MTG, and in a control group, no TMS was applied. Eye movements were recorded and quantified at two critical sentence regions: a temporarily ambiguous region and a disambiguating region. Results show that stimulation of the IFG led to a modulation of the ambiguity effect (ambiguous–unambiguous) at the disambiguating sentence region in three measures: first fixation durations, total reading times, and regressive eye movements into the region. Both IFG and MTG stimulation modulated the ambiguity effect for total reading times in the temporarily ambiguous sentence region relative to a control group. The current results demonstrate that an offline repetitive TMS protocol can have influences at a different point in time during online processing and provide causal evidence for IFG involvement in unification operations during sentence comprehension.
  • Andics, A., McQueen, J. M., & Petersson, K. M. (2013). Mean-based neural coding of voices. NeuroImage, 79, 351-360. doi:10.1016/j.neuroimage.2013.05.002.

    Abstract

    The social significance of recognizing the person who talks to us is obvious, but the neural mechanisms that mediate talker identification are unclear. Regions along the bilateral superior temporal sulcus (STS) and the inferior frontal cortex (IFC) of the human brain are selective for voices, and they are sensitive to rapid voice changes. Although it has been proposed that voice recognition is supported by prototype-centered voice representations, the involvement of these category-selective cortical regions in the neural coding of such "mean voices" has not previously been demonstrated. Using fMRI in combination with a voice identity learning paradigm, we show that voice-selective regions are involved in the mean-based coding of voice identities. Voice typicality is encoded on a supra-individual level in the right STS along a stimulus-dependent, identity-independent (i.e., voice-acoustic) dimension, and on an intra-individual level in the right IFC along a stimulus-independent, identity-dependent (i.e., voice identity) dimension. Voice recognition therefore entails at least two anatomically separable stages, each characterized by neural mechanisms that reference the central tendencies of voice categories.
  • Asaridou, S. S., & McQueen, J. M. (2013). Speech and music shape the listening brain: Evidence for shared domain-general mechanisms. Frontiers in Psychology, 4: 321. doi:10.3389/fpsyg.2013.00321.

    Abstract

    Are there bi-directional influences between speech perception and music perception? An answer to this question is essential for understanding the extent to which the speech and music that we hear are processed by domain-general auditory processes and/or by distinct neural auditory mechanisms. This review summarizes a large body of behavioral and neuroscientific findings which suggest that the musical experience of trained musicians does modulate speech processing, and a sparser set of data, largely on pitch processing, which suggest in addition that linguistic experience, in particular learning a tone language, modulates music processing. Although research has focused mostly on effects of music on speech, we argue that both directions of influence need to be studied, and conclude that the picture which thus emerges is one of mutual interaction across domains. In particular, it is not simply that experience with spoken language has some effects on music perception, and vice versa, but that because of shared domain-general subcortical and cortical networks, experiences in both domains influence behavior in both domains.
  • De Boer, M., Toni, I., & Willems, R. M. (2013). What drives successful verbal communication? Frontiers in Human Neuroscience, 7: 622. doi:10.3389/fnhum.2013.00622.

    Abstract

    There is a vast amount of potential mappings between behaviors and intentions in communication: a behavior can indicate a multitude of different intentions, and the same intention can be communicated with a variety of behaviors. Humans routinely solve these many-to-many referential problems when producing utterances for an Addressee. This ability might rely on social cognitive skills, for instance, the ability to manipulate unobservable summary variables to disambiguate ambiguous behavior of other agents (“mentalizing”) and the drive to invest resources into changing and understanding the mental state of other agents (“communicative motivation”). Alternatively, the ambiguities of verbal communicative interactions might be solved by general-purpose cognitive abilities that process cues that are incidentally associated with the communicative interaction. In this study, we assess these possibilities by testing which cognitive traits account for communicative success during a verbal referential task. Cognitive traits were assessed with psychometric scores quantifying motivation, mentalizing abilities, and general-purpose cognitive abilities, taxing abstract visuo-spatial abilities. Communicative abilities of participants were assessed by using an on-line interactive task that required a speaker to verbally convey a concept to an Addressee. The communicative success of the utterances was quantified by measuring how frequently a number of Evaluators would infer the correct concept. Speakers with high motivational and general-purpose cognitive abilities generated utterances that were more easily interpreted. These findings extend to the domain of verbal communication the notion that motivational and cognitive factors influence the human ability to rapidly converge on shared communicative innovations.
  • Campisi, E., & Ozyurek, A. (2013). Iconicity as a communicative strategy: Recipient design in multimodal demonstrations for adults and children. Journal of Pragmatics, 47, 14-27. doi:10.1016/j.pragma.2012.12.007.

    Abstract

    Humans are the only species that uses communication to teach new knowledge to novices, usually to children (Tomasello, 1999 and Csibra and Gergely, 2006). This context of communication can employ “demonstrations” and it takes place with or without the help of objects (Clark, 1996). Previous research has focused on understanding the nature of demonstrations for very young children and with objects involved. However, little is known about the strategies used in demonstrating an action to an older child in comparison to another adult and without the use of objects, i.e., with gestures only. We tested if during demonstration of an action speakers use different degrees of iconicity in gestures for a child compared to an adult. 18 Italian subjects described to a camera how to make coffee imagining the listener as a 12-year-old child, a novice or an expert adult. While speech was found more informative both for the novice adult and for the child compared to the expert adult, the rate of iconic gestures increased and they were more informative and bigger only for the child compared to both of the adult conditions. Iconicity in gestures can be a powerful communicative strategy in teaching new knowledge to children in demonstrations and this is in line with claims that it can be used as a scaffolding device in grounding knowledge in experience (Perniss et al., 2010).
  • Cappuccio, M. L., Chu, M., & Kita, S. (2013). Pointing as an instrumental gesture: Gaze representation through indication. Humana.Mente: Journal of Philosophical Studies, 24, 125-149.

    Abstract

    We call those gestures “instrumental” that can enhance certain thinking processes of an agent by offering him representational models of his actions in a virtual space of imaginary performative possibilities. We argue that pointing is an instrumental gesture in that it represents geometrical information on one’s own gaze direction (i.e., a spatial model for attentional/ocular fixation/orientation), and provides a ritualized template for initiating gaze coordination and joint attention. We counter two possible objections, asserting respectively that the representational content of pointing is not constitutive, but derived from language, and that pointing directly solicits gaze coordination, without representing it. We consider two studies suggesting that attention and spatial perception are actively modified by one’s own pointing activity: the first study shows that pointing gestures help children link sets of objects to their corresponding number words; the second, that adults are faster and more accurate in counting when they point.
  • Cristia, A., Dupoux, E., Hakuno, Y., Lloyd-Fox, S., Schuetze, M., Kivits, J., Bergvelt, T., Van Gelder, M., Filippin, L., Charron, S., & Minagawa-Kawai, Y. (2013). An online database of infant functional Near InfraRed Spectroscopy studies: A community-augmented systematic review. PLoS One, 8(3): e58906. doi:10.1371/journal.pone.0058906.

    Abstract

    Until recently, imaging the infant brain was very challenging. Functional Near InfraRed Spectroscopy (fNIRS) is a promising, relatively novel technique, whose use is rapidly expanding. As an emergent field, it is particularly important to share methodological knowledge to ensure replicable and robust results. In this paper, we present a community-augmented database which will facilitate precisely this exchange. We tabulated articles and theses reporting empirical fNIRS research carried out on infants below three years of age along several methodological variables. The resulting spreadsheet has been uploaded in a format allowing individuals to continue adding new results, and download the most recent version of the table. Thus, this database is ideal to carry out systematic reviews. We illustrate its academic utility by focusing on the factors affecting three key variables: infant attrition, the reliability of oxygenated and deoxygenated responses, and signal-to-noise ratios. We then discuss strengths and weaknesses of the DBIfNIRS, and conclude by suggesting a set of simple guidelines aimed to facilitate methodological convergence through the standardization of reports.
  • Cristia, A. (2013). Input to language: The phonetics of infant-directed speech. Language and Linguistics Compass, 7, 157-170. doi:10.1111/lnc3.12015.

    Abstract

    Over the first year of life, infant perception changes radically as the child learns the phonology of the ambient language from the speech she is exposed to. Since infant-directed speech attracts the child's attention more than other registers, it is necessary to describe that input in order to understand language development, and to address questions of learnability. In this review, evidence from corpora analyses, experimental studies, and observational paradigms is brought together to outline the first comprehensive empirical picture of infant-directed speech and its effects on language acquisition. The ensuing landscape suggests that infant-directed speech provides an emotionally and linguistically rich input to language acquisition.

    Additional information

    Cristia_Suppl_Material.xls
  • Cristia, A., Mielke, J., Daland, R., & Peperkamp, S. (2013). Similarity in the generalization of implicitly learned sound patterns. Journal of Laboratory Phonology, 4(2), 259-285.

    Abstract

    A core property of language is the ability to generalize beyond observed examples. In two experiments, we explore how listeners generalize implicitly learned sound patterns to new nonwords and to new sounds, with the goal of shedding light on how similarity affects treatment of potential generalization targets. During the exposure phase, listeners heard nonwords whose onset consonant was restricted to a subset of a natural class (e.g., /d g v z Z/). During the test phase, listeners were presented with new nonwords and asked to judge how frequently they had been presented before; some of the test items began with a consonant from the exposure set (e.g., /d/), and some began with novel consonants with varying relations to the exposure set (e.g., /b/, which is highly similar to all onsets in the training set; /t/, which is highly similar to one of the training onsets; and /p/, which is less similar than the other two). The exposure onset was rated most frequent, indicating that participants encoded onset attestation in the exposure set, and generalized it to new nonwords. Participants also rated novel consonants as somewhat frequent, indicating generalization to onsets that did not occur in the exposure phase. While generalization could be accounted for in terms of featural distance, it was insensitive to natural class structure. Generalization to new sounds was predicted better by models requiring prior linguistic knowledge (either traditional distinctive features or articulatory phonetic information) than by a model based on a linguistically naïve measure of acoustic similarity.
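    A featural-distance account like the one tested here can be mocked up in a few lines: score each novel onset by its average distinctive-feature distance to the onsets heard during exposure. The binary feature set below ([voiced, continuant, labial, coronal]) is a deliberately simplified assumption, but it reproduces the similarity ordering the abstract describes (/b/ closest to the exposure set, /p/ furthest).

    ```python
    # Toy featural-distance model of generalization (an assumption-laden
    # sketch, not the authors' model).
    import numpy as np

    # Hypothetical binary features: [voiced, continuant, labial, coronal].
    features = {
        "d": (1, 0, 0, 1), "g": (1, 0, 0, 0), "v": (1, 1, 1, 0),
        "z": (1, 1, 0, 1), "b": (1, 0, 1, 0), "t": (0, 0, 0, 1),
        "p": (0, 0, 1, 0),
    }
    exposure = ["d", "g", "v", "z"]

    def distance(a, b):
        """Hamming distance between two feature vectors."""
        return sum(x != y for x, y in zip(features[a], features[b]))

    # Predicted familiarity: inverse of mean distance to the exposure onsets.
    for novel in ["b", "t", "p"]:
        d = np.mean([distance(novel, e) for e in exposure])
        print(f"/{novel}/: mean feature distance {d:.2f} -> "
              f"predicted familiarity {1 / (1 + d):.2f}")
    ```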
  • Debreslioska, S., Ozyurek, A., Gullberg, M., & Perniss, P. M. (2013). Gestural viewpoint signals referent accessibility. Discourse Processes, 50(7), 431-456. doi:10.1080/0163853x.2013.824286.

    Abstract

    The tracking of entities in discourse is known to be a bimodal phenomenon. Speakers achieve cohesion in speech by alternating between full lexical forms, pronouns, and zero anaphora as they track referents. They also track referents in co-speech gestures. In this study, we explored how viewpoint is deployed in reference tracking, focusing on representations of animate entities in German narrative discourse. We found that gestural viewpoint systematically varies depending on discourse context. Speakers predominantly use character viewpoint in maintained contexts and observer viewpoint in reintroduced contexts. Thus, gestural viewpoint seems to function as a cohesive device in narrative discourse. The findings expand on and provide further evidence for the coordination between speech and gesture on the discourse level that is crucial to understanding the tight link between the two modalities.
  • Dolscheid, S. (2013). High pitches and thick voices: The role of language in space-pitch associations. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Dolscheid, S., Graver, C., & Casasanto, D. (2013). Spatial congruity effects reveal metaphors, not markedness. In M. Knauff, M. Pauen, N. Sebanz, & I. Wachsmuth (Eds.), Proceedings of the 35th Annual Meeting of the Cognitive Science Society (CogSci 2013) (pp. 2213-2218). Austin, TX: Cognitive Science Society. Retrieved from http://mindmodeling.org/cogsci2013/papers/0405/index.html.

    Abstract

    Spatial congruity effects have often been interpreted as evidence for metaphorical thinking, but an alternative markedness-based account challenges this view. In two experiments, we directly compared metaphor and markedness explanations for spatial congruity effects, using musical pitch as a testbed. English speakers who talk about pitch in terms of spatial height were tested in speeded space-pitch compatibility tasks. To determine whether space-pitch congruency effects could be elicited by any marked spatial continuum, participants were asked to classify high- and low-frequency pitches as 'high' and 'low' or as 'front' and 'back' (both pairs of terms constitute cases of marked continuums). We found congruency effects in high/low conditions but not in front/back conditions, indicating that markedness is not sufficient to account for congruity effects (Experiment 1). A second experiment showed that congruency effects were specific to spatial words that cued a vertical schema (tall/short), and that congruity effects were not an artifact of polysemy (e.g., 'high' referring both to space and pitch). Together, these results suggest that congruency effects reveal metaphorical uses of spatial schemas, not markedness effects.
  • Dolscheid, S., Shayan, S., Majid, A., & Casasanto, D. (2013). The thickness of musical pitch: Psychophysical evidence for linguistic relativity. Psychological Science, 24, 613-621. doi:10.1177/0956797612457374.

    Abstract

    Do people who speak different languages think differently, even when they are not using language? To find out, we used nonlinguistic psychophysical tasks to compare mental representations of musical pitch in native speakers of Dutch and Farsi. Dutch speakers describe pitches as high (hoog) or low (laag), whereas Farsi speakers describe pitches as thin (na-zok) or thick (koloft). Differences in language were reflected in differences in performance on two pitch-reproduction tasks, even though the tasks used simple, nonlinguistic stimuli and responses. To test whether experience using language influences mental representations of pitch, we trained native Dutch speakers to describe pitch in terms of thickness, as Farsi speakers do. After the training, Dutch speakers’ performance on a nonlinguistic psychophysical task resembled the performance of native Farsi speakers. People who use different linguistic space-pitch metaphors also think about pitch differently. Language can play a causal role in shaping nonlinguistic representations of musical pitch.

    Additional information

    DS_10.1177_0956797612457374.pdf
  • Eisner, F., Melinger, A., & Weber, A. (2013). Constraints on the transfer of perceptual learning in accented speech. Frontiers in Psychology, 4: 148. doi:10.3389/fpsyg.2013.00148.

    Abstract

    The perception of speech sounds can be re-tuned rapidly through a mechanism of lexically-driven learning (Norris et al., 2003, Cogn. Psych., 47). Here we investigated this type of learning for English voiced stop consonants, which are commonly de-voiced in word-final position by Dutch learners of English. Specifically, this study asked under which conditions the change in pre-lexical representation encodes phonological information about the position of the critical sound within a word. After exposure to a Dutch learner’s productions of de-voiced stops in word-final position (but not in any other positions), British English listeners showed evidence of perceptual learning in a subsequent cross-modal priming task, where auditory primes with voiceless final stops (e.g., ‘seat’), facilitated recognition of visual targets with voiced final stops (e.g., SEED). This learning generalized to test pairs where the critical contrast was in word-initial position, e.g. auditory primes such as ‘town’ facilitated recognition of visual targets like DOWN (Experiment 1). Control listeners, who had not heard any stops by the speaker during exposure, showed no learning effects. The generalization to word-initial position did not occur when participants had also heard correctly voiced, word-initial stops during exposure (Experiment 2), and when the speaker was a native BE speaker who mimicked the word-final devoicing (Experiment 3). These results suggest that word position can be encoded in the pre-lexical adjustment to the accented phoneme contrast. Lexically-guided feedback, distributional properties of the input, and long-term representations of accents all appear to modulate the pre-lexical re-tuning of phoneme categories.
  • Erb, J., Henry, M. J., Eisner, F., & Obleser, J. (2013). The brain dynamics of rapid perceptual adaptation to adverse listening conditions. The Journal of Neuroscience, 33, 10688-10697. doi:10.1523/JNEUROSCI.4596-12.2013.

    Abstract

    Listeners show a remarkable ability to quickly adjust to degraded speech input. Here, we aimed to identify the neural mechanisms of such short-term perceptual adaptation. In a sparse-sampling, cardiac-gated functional magnetic resonance imaging (fMRI) acquisition, human listeners heard and repeated back 4-band-vocoded sentences (in which the temporal envelope of the acoustic signal is preserved, while spectral information is highly degraded). Clear-speech trials were included as baseline. An additional fMRI experiment on amplitude modulation rate discrimination quantified the convergence of neural mechanisms that subserve coping with challenging listening conditions for speech and non-speech. First, the degraded speech task revealed an “executive” network (comprising the anterior insula and anterior cingulate cortex), parts of which were also activated in the non-speech discrimination task. Second, trial-by-trial fluctuations in successful comprehension of degraded speech drove hemodynamic signal change in classic “language” areas (bilateral temporal cortices). Third, as listeners perceptually adapted to degraded speech, downregulation in a cortico-striato-thalamo-cortical circuit was observable. The present data highlight differential upregulation and downregulation in auditory–language and executive networks, respectively, with important subcortical contributions when successfully adapting to a challenging listening situation.
  • Gentner, D., Ozyurek, A., Gurcanli, O., & Goldin-Meadow, S. (2013). Spatial language facilitates spatial cognition: Evidence from children who lack language input. Cognition, 127, 318-330. doi:10.1016/j.cognition.2013.01.003.

    Abstract

    Does spatial language influence how people think about space? To address this question, we observed children who did not know a conventional language, and tested their performance on nonlinguistic spatial tasks. We studied deaf children living in Istanbul whose hearing losses prevented them from acquiring speech and whose hearing parents had not exposed them to sign. Lacking a conventional language, the children used gestures, called homesigns, to communicate. In Study 1, we asked whether homesigners used gesture to convey spatial relations, and found that they did not. In Study 2, we tested a new group of homesigners on a Spatial Mapping Task, and found that they performed significantly worse than hearing Turkish children who were matched to the deaf children on another cognitive task. The absence of spatial language thus went hand-in-hand with poor performance on the nonlinguistic spatial task, pointing to the importance of spatial language in thinking about space.
  • Gijssels, T., Bottini, R., Rueschemeyer, S.-A., & Casasanto, D. (2013). Space and time in the parietal cortex: fMRI evidence for a neural asymmetry. In M. Knauff, M. Pauen, N. Sebanz, & I. Wachsmuth (Eds.), Proceedings of the 35th Annual Meeting of the Cognitive Science Society (CogSci 2013) (pp. 495-500). Austin, TX: Cognitive Science Society. Retrieved from http://mindmodeling.org/cogsci2013/papers/0113/index.html.

    Abstract

    How are space and time related in the brain? This study contrasts two proposals that make different predictions about the interaction between spatial and temporal magnitudes. Whereas ATOM implies that space and time are symmetrically related, Metaphor Theory claims they are asymmetrically related. Here we investigated whether space and time activate the same neural structures in the inferior parietal cortex (IPC) and whether the activation is symmetric or asymmetric across domains. We measured participants’ neural activity while they made temporal and spatial judgments on the same visual stimuli. The behavioral results replicated earlier observations of a space-time asymmetry: Temporal judgments were more strongly influenced by irrelevant spatial information than vice versa. The BOLD fMRI data indicated that space and time activated overlapping clusters in the IPC and that, consistent with Metaphor Theory, this activation was asymmetric: The shared region of IPC was activated more strongly during temporal judgments than during spatial judgments. We consider three possible interpretations of this neural asymmetry, based on 3 possible functions of IPC.
  • Gross, J., Baillet, S., Barnes, G. R., Henson, R. N., Hillebrand, A., Jensen, O., Jerbi, K., Litvak, V., Maess, B., Oostenveld, R., Parkkonen, L., Taylor, J. R., Van Wassenhove, V., Wibral, M., & Schoffelen, J.-M. (2013). Good practice for conducting and reporting MEG research. NeuroImage, 65, 349-363. doi:10.1016/j.neuroimage.2012.10.001.

    Abstract

    Magnetoencephalographic (MEG) recordings are a rich source of information about the neural dynamics underlying cognitive processes in the brain, with excellent temporal and good spatial resolution. In recent years there have been considerable advances in MEG hardware developments as well as methodological developments. Sophisticated analysis techniques are now routinely applied and continuously improved, leading to fascinating insights into the intricate dynamics of neural processes. However, the rapidly increasing level of complexity of the different steps in a MEG study makes it difficult for novices, and sometimes even for experts, to stay aware of possible limitations and caveats. Furthermore, the complexity of MEG data acquisition and data analysis requires special attention when describing MEG studies in publications, in order to facilitate interpretation and reproduction of the results. This manuscript aims at making recommendations for a number of important data acquisition and data analysis steps and suggests details that should be specified in manuscripts reporting MEG studies. These recommendations will hopefully serve as guidelines that help to strengthen the position of the MEG research community within the field of neuroscience, and may foster discussion within the community in order to further enhance the quality and impact of MEG research.
  • Hagoort, P. (2013). MUC (Memory, Unification, Control) and beyond. Frontiers in Psychology, 4: 416. doi:10.3389/fpsyg.2013.00416.

    Abstract

    A neurobiological model of language is discussed that overcomes the shortcomings of the classical Wernicke-Lichtheim-Geschwind model. It is based on a subdivision of language processing into three components: Memory, Unification, and Control. The functional components as well as the neurobiological underpinnings of the model are discussed. In addition, the need for extension of the model beyond the classical core regions for language is shown. Attentional networks as well as networks for inferential processing are crucial to realize language comprehension beyond single word processing and beyond decoding propositional content. It is shown that this requires the dynamic interaction between multiple brain regions.
  • Hagoort, P., & Poeppel, D. (2013). The infrastructure of the language-ready brain. In M. A. Arbib (Ed.), Language, music, and the brain: A mysterious relationship (pp. 233-255). Cambridge, MA: MIT Press.

    Abstract

    This chapter sketches in very general terms the cognitive architecture of both language comprehension and production, as well as the neurobiological infrastructure that makes the human brain ready for language. Focus is on spoken language, since that compares most directly to processing music. It is worth bearing in mind that humans can also interface with language as a cognitive system using sign and text (visual) as well as Braille (tactile); that is to say, the system can connect with input/output processes in any sensory modality. Language processing consists of a complex and nested set of subroutines to get from sound to meaning (in comprehension) or meaning to sound (in production), with remarkable speed and accuracy. The first section outlines a selection of the major constituent operations, from fractionating the input into manageable units to combining and unifying information in the construction of meaning. The next section addresses the neurobiological infrastructure hypothesized to form the basis for language processing. Principal insights are summarized by building on the notion of “brain networks” for speech–sound processing, syntactic processing, and the construction of meaning, bearing in mind that such a neat three-way subdivision overlooks important overlap and shared mechanisms in the neural architecture subserving language processing. Finally, in keeping with the spirit of the volume, some possible relations are highlighted between language and music that arise from the infrastructure developed here. Our characterization of language and its neurobiological foundations is necessarily selective and brief. Our aim is to identify for the reader critical questions that require an answer to have a plausible cognitive neuroscience of language processing.
  • Hagoort, P., & Meyer, A. S. (2013). What belongs together goes together: the speaker-hearer perspective. A commentary on MacDonald's PDC account. Frontiers in Psychology, 4: 228. doi:10.3389/fpsyg.2013.00228.

    Abstract

    First paragraph:
    MacDonald (2013) proposes that distributional properties of language and processing biases in language comprehension can to a large extent be attributed to consequences of the language production process. In essence, the account is derived from the principle of least effort that was formulated by Zipf, among others (Zipf, 1949; Levelt, 2013). However, in Zipf's view the outcome of the least effort principle was a compromise between least effort for the speaker and least effort for the listener, whereas MacDonald puts most of the burden on the production process.
  • Holler, J., Schubotz, L., Kelly, S., Schuetze, M., Hagoort, P., & Ozyurek, A. (2013). Here's not looking at you, kid! Unaddressed recipients benefit from co-speech gestures when speech processing suffers. In M. Knauff, M. Pauen, I. Sebanz, & I. Wachsmuth (Eds.), Proceedings of the 35th Annual Meeting of the Cognitive Science Society (CogSci 2013) (pp. 2560-2565). Austin, TX: Cognitive Science Society. Retrieved from http://mindmodeling.org/cogsci2013/papers/0463/index.html.

    Abstract

    In human face-to-face communication, language comprehension is a multi-modal, situated activity. However, little is known about how we combine information from these different modalities, and how perceived communicative intentions, often signaled through visual signals, such as eye gaze, may influence this processing. We address this question by simulating a triadic communication context in which a speaker alternated her gaze between two different recipients. Participants thus viewed speech-only or speech+gesture object-related utterances when being addressed (direct gaze) or unaddressed (averted gaze). Two object images followed each message and participants’ task was to choose the object that matched the message. Unaddressed recipients responded significantly slower than addressees for speech-only utterances. However, perceiving the same speech accompanied by gestures sped them up to a level identical to that of addressees. That is, when speech processing suffers due to not being addressed, gesture processing remains intact and enhances the comprehension of a speaker’s message.
  • Holler, J., Turner, K., & Varcianna, T. (2013). It's on the tip of my fingers: Co-speech gestures during lexical retrieval in different social contexts. Language and Cognitive Processes, 28(10), 1509-1518. doi:10.1080/01690965.2012.698289.

    Abstract

    The Lexical Retrieval Hypothesis proposes that gestures function at the level of speech production, aiding in the retrieval of lexical items from the mental lexicon. However, empirical evidence for this account is mixed, and some critics argue that a more likely function of gestures during lexical retrieval is a communicative one. The present study was designed to test these predictions against each other by keeping lexical retrieval difficulty constant while varying social context. Participants' gestures were analysed during tip of the tongue experiences when communicating with a partner face-to-face (FTF), while being separated by a screen, or on their own by speaking into a voice recorder. The results show that participants in the FTF context produced significantly more representational gestures than participants in the solitary condition. This suggests that, even in the specific context of lexical retrieval difficulties, representational gestures appear to play predominantly a communicative role.

  • Kaltwasser, L., Ries, S., Sommer, W., Knight, R., & Willems, R. M. (2013). Independence of valence and reward in emotional word processing: Electrophysiological evidence. Frontiers in Psychology, 4: 168. doi:10.3389/fpsyg.2013.00168.

    Abstract

    Both emotion and reward are primary modulators of cognition: Emotional word content enhances word processing, and reward expectancy similarly amplifies cognitive processing from the perceptual up to the executive control level. Here, we investigate how these primary regulators of cognition interact. We studied how the anticipation of gain or loss modulates the neural time course (event-related potentials, ERPs) related to processing of emotional words. Participants performed a semantic categorization task on emotional and neutral words, which were preceded by a cue indicating that performance could lead to monetary gain or loss. Emotion-related and reward-related effects occurred in different time windows, did not interact statistically, and showed different topographies. This speaks for the independence of reward expectancy and the processing of emotional word content. Therefore, privileged processing given to emotionally valenced words seems immune to short-term modulation of reward. Models of language comprehension should be able to incorporate effects of reward and emotion on language processing, and the current study argues for an architecture in which reward and emotion do not share a common neurobiological mechanism.
  • Kominsky, J. F., & Casasanto, D. (2013). Specific to whose body? Perspective taking and the spatial mapping of valence. Frontiers in Psychology, 4: 266. doi:10.3389/fpsyg.2013.00266.

    Abstract

    People tend to associate the abstract concepts of “good” and “bad” with their fluent and disfluent sides of space, as determined by their natural handedness or by experimental manipulation (Casasanto, 2011). Here we investigated influences of spatial perspective taking on the spatialization of “good” and “bad.” In the first experiment, participants indicated where a schematically drawn cartoon character would locate “good” and “bad” stimuli. Right-handers tended to assign “good” to the right and “bad” to the left side of egocentric space when the character shared their spatial perspective, but when the character was rotated 180° this spatial mapping was reversed: good was assigned to the character’s right side, not the participant’s. The tendency to spatialize valence from the character’s perspective was stronger in the second experiment, when participants were shown a full-featured photograph of the character. In a third experiment, most participants not only spatialized “good” and “bad” from the character’s perspective, they also based their judgments on a salient attribute of the character’s body (an injured hand) rather than their own body. Taking another’s spatial perspective encourages people to compute space-valence mappings using an allocentric frame of reference, based on the fluency with which the other person could perform motor actions with their right or left hand. When people reason from their own spatial perspective, their judgments depend, in part, on the specifics of their bodies; when people reason from someone else’s perspective, their judgments may depend on the specifics of the other person’s body, instead.
  • Kooijman, V., Junge, C., Johnson, E. K., Hagoort, P., & Cutler, A. (2013). Predictive brain signals of linguistic development. Frontiers in Psychology, 4: 25. doi:10.3389/fpsyg.2013.00025.

    Abstract

    The ability to extract word forms from continuous speech is a prerequisite for constructing a vocabulary and emerges in the first year of life. Electrophysiological (ERP) studies of speech segmentation by 9- to 12-month-old listeners in several languages have found a left-localized negativity linked to word onset as a marker of word detection. We report an ERP study showing significant evidence of speech segmentation in Dutch-learning 7-month-olds. In contrast to the left-localized negative effect reported with older infants, the observed overall mean effect had a positive polarity. Inspection of individual results revealed two participant sub-groups: a majority showing a positive-going response, and a minority showing the left negativity observed in older age groups. We retested participants at age three, on vocabulary comprehension and word and sentence production. On every test, children who at 7 months had shown the negativity associated with segmentation of words from speech outperformed those who had produced positive-going brain responses to the same input. The earlier that infants show the left-localized brain responses typically indicating detection of words in speech, the better their early childhood language skills.
  • Kristensen, L. B., Wang, L., Petersson, K. M., & Hagoort, P. (2013). The interface between language and attention: Prosodic focus marking recruits a general attention network in spoken language comprehension. Cerebral Cortex, 23, 1836-1848. doi:10.1093/cercor/bhs164.

    Abstract

    In spoken language, pitch accent can mark certain information as focus, whereby more attentional resources are allocated to the focused information. Using functional magnetic resonance imaging, this study examined whether pitch accent, used for marking focus, recruited general attention networks during sentence comprehension. In a language task, we independently manipulated the prosody and semantic/pragmatic congruence of sentences. We found that semantic/pragmatic processing affected bilateral inferior and middle frontal gyrus. The prosody manipulation showed bilateral involvement of the superior/inferior parietal cortex, superior and middle temporal cortex, as well as inferior, middle, and posterior parts of the frontal cortex. We compared these regions with attention networks localized in an auditory spatial attention task. Both tasks activated bilateral superior/inferior parietal cortex, superior temporal cortex, and left precentral cortex. Furthermore, an interaction between prosody and congruence was observed in bilateral inferior parietal regions: for incongruent sentences, but not for congruent ones, there was a larger activation if the incongruent word carried a pitch accent, than if it did not. The common activations between the language task and the spatial attention task demonstrate that pitch accent activates a domain general attention network, which is sensitive to semantic/pragmatic aspects of language. Therefore, attention and language comprehension are highly interactive.

    Additional information

    Kirstensen_Cer_Cor_Suppl_Mat.doc
  • Lai, V. T., & Curran, T. (2013). ERP evidence for conceptual mappings and comparison processes during the comprehension of conventional and novel metaphors. Brain and Language, 127(3), 484-496. doi:10.1016/j.bandl.2013.09.010.

    Abstract

    Cognitive linguists suggest that understanding metaphors requires activation of conceptual mappings between the involved concepts. Using event-related potentials (ERPs), we tested whether such mappings are indeed in use during metaphor comprehension, and what mapping involves as a cognitive process. Participants read literal, conventional metaphorical, novel metaphorical, and anomalous target sentences preceded by primes with related or unrelated mappings. Experiment 1 used sentence-primes to activate related mappings, and Experiment 2 used simile-primes to induce comparison thinking. In the unprimed conditions of both experiments, metaphors elicited N400s more negative than the literals. In Experiment 1, related sentence-primes reduced the metaphor-literal N400 difference in conventional, but not in novel metaphors. In Experiment 2, related simile-primes reduced the metaphor-literal N400 difference in novel, but not clearly in conventional metaphors. We suggest that mapping as a process occurs in metaphors, and the ways in which it can be facilitated by comparison differ between conventional and novel metaphors.

    Additional information

    Lai_2013_supp.docx
    Erratum figure 1-4
  • Lai, J., & Poletiek, F. H. (2013). How “small” is “starting small” for learning hierarchical centre-embedded structures? Journal of Cognitive Psychology, 25, 423-435. doi:10.1080/20445911.2013.779247.

    Abstract

    Hierarchical centre-embedded structures pose considerable difficulty for language learners due to their complexity. A recent artificial grammar learning study (Lai & Poletiek, 2011) demonstrated a starting-small (SS) effect: staged input and sufficient exposure to 0-level-of-embedding exemplars were the critical conditions for learning AnBn structures. The current study tested: (1) a more sophisticated type of SS (a gradually rather than discretely growing input), and (2) the role of the frequency distribution of the input. The results indicate that SS works optimally only under additional input conditions, such as a skewed frequency distribution in which simple stimuli are more numerous than complex ones.
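
    As an aside, the staged-input and skewed-frequency manipulations at stake here are easy to make concrete. The sketch below is a hypothetical reconstruction, not the authors' stimulus code: the word lists, the A-B dependency pairs, and the skew parameter are all our own assumptions.

```python
import random

# Hypothetical A->B dependency pairs; the study's actual categories differed.
PAIRS = {"jix": "dak", "pel": "tez", "rud": "vot"}

def anbn_string(n):
    """One centre-embedded exemplar A1 ... An Bn ... B1, where each Ai
    predicts its paired Bi in mirror order. n = 1 gives a simple
    0-level-of-embedding item (one A, one B)."""
    a_part = [random.choice(list(PAIRS)) for _ in range(n)]
    b_part = [PAIRS[a] for a in reversed(a_part)]
    return " ".join(a_part + b_part)

def staged_skewed_input(size=60, max_n=3, skew=2.0):
    """Draw complexity levels with weight n**(-skew), so that simple items
    outnumber complex ones, then sort so the input grows gradually."""
    levels = list(range(1, max_n + 1))
    weights = [n ** (-skew) for n in levels]
    ns = sorted(random.choices(levels, weights=weights, k=size))
    return [anbn_string(n) for n in ns]

print("\n".join(staged_skewed_input(size=12)))
```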
  • Lai, V. T., & Boroditsky, L. (2013). The immediate and chronic influence of spatio-temporal metaphors on the mental representations of time in English, Mandarin, and Mandarin-English speakers. Frontiers in Psychology, 4: 142. doi:10.3389/fpsyg.2013.00142.

    Abstract

    In this paper we examine whether experience with spatial metaphors for time has an influence on people’s representation of time. In particular we ask whether spatiotemporal metaphors can have both chronic and immediate effects on temporal thinking. In Study 1, we examine the prevalence of ego-moving representations for time in Mandarin speakers, English speakers, and Mandarin-English (ME) bilinguals. As predicted by observations in linguistic analyses, we find that Mandarin speakers are less likely to take an ego-moving perspective than are English speakers. Further, we find that ME bilinguals tested in English are less likely to take an ego-moving perspective than are English monolinguals (an effect of L1 on meaning-making in L2), and also that ME bilinguals tested in Mandarin are more likely to take an ego-moving perspective than are Mandarin monolinguals (an effect of L2 on meaning-making in L1). These findings demonstrate that habits of metaphor use in one language can influence temporal reasoning in another language, suggesting that metaphors can have a chronic effect on patterns in thought. In Study 2 we test Mandarin speakers using either horizontal or vertical metaphors in the immediate context of the task. We find that Mandarin speakers are more likely to construct front-back representations of time when understanding front-back metaphors, and more likely to construct up-down representations of time when understanding up-down metaphors. These findings demonstrate that spatiotemporal metaphors can also have an immediate influence on temporal reasoning. Taken together, these findings demonstrate that the metaphors we use to talk about time have both immediate and long-term consequences for how we conceptualize and reason about this fundamental domain of experience.
  • Larson-Prior, L., Oostenveld, R., Della Penna, S., Michalareas, G., Prior, F., Babajani-Feremi, A., Schoffelen, J.-M., Marzetti, L., de Pasquale, F., Pompeo, F. D., Stout, J., Woolrich, M., Luo, Q., Bucholz, R., Fries, P., Pizzella, V., Romani, G., Corbetta, M., & Snyder, A. (2013). Adding dynamics to the Human Connectome Project with MEG. NeuroImage, 80, 190-201. doi:10.1016/j.neuroimage.2013.05.056.

    Abstract

    The Human Connectome Project (HCP) seeks to map the structural and functional connections between network elements in the human brain. Magnetoencephalography (MEG) provides a temporally rich source of information on brain network dynamics and represents one source of functional connectivity data to be provided by the HCP. High quality MEG data will be collected from 50 twin pairs both in the resting state and during performance of motor, working memory and language tasks. These data will be available to the general community. Additionally, using the cortical parcellation scheme common to all imaging modalities, the HCP will provide processing pipelines for calculating connection matrices as a function of time and frequency. Together with structural and functional data generated using magnetic resonance imaging methods, these data represent a unique opportunity to investigate brain network connectivity in a large cohort of normal adult human subjects. The analysis pipeline software and the dynamic connectivity matrices that it generates will all be made freely available to the research community.
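
    As an illustration of what a connection matrix "as a function of time and frequency" involves, the sketch below estimates a channels-by-channels coherence matrix for one frequency band from one window of data; sliding the window along the recording yields the time-resolved version. This is a generic numpy sketch under our own assumptions, not the HCP pipeline code, and every name and parameter here is illustrative.

```python
import numpy as np

def coherence_matrix(data, fs, fmin, fmax, seg_len=256):
    """Channels x channels magnitude-squared coherence within [fmin, fmax] Hz,
    estimated by averaging FFT cross-spectra over non-overlapping segments.

    data: (n_channels, n_samples) array for one time window; calling this on
    successive windows gives connectivity as a function of time."""
    n_ch, n_samp = data.shape
    n_seg = n_samp // seg_len
    segs = data[:, :n_seg * seg_len].reshape(n_ch, n_seg, seg_len)
    spec = np.fft.rfft(segs * np.hanning(seg_len), axis=-1)     # (ch, seg, freq)
    freqs = np.fft.rfftfreq(seg_len, d=1.0 / fs)
    sel = (freqs >= fmin) & (freqs <= fmax)
    spec = spec[:, :, sel]
    csd = np.einsum("isf,jsf->ijf", spec, spec.conj()) / n_seg  # cross-spectra
    power = np.real(np.einsum("iif->if", csd))                  # auto-spectra
    coh = np.abs(csd) ** 2 / (power[:, None, :] * power[None, :, :])
    return coh.mean(axis=-1)  # one (ch, ch) matrix for this band and window

# Toy usage: alpha-band (8-12 Hz) matrix for 10 s of 5-channel noise at 250 Hz.
rng = np.random.default_rng(0)
print(coherence_matrix(rng.standard_normal((5, 2500)), fs=250, fmin=8, fmax=12))
```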
  • Lüttjohann, A., Schoffelen, J.-M., & Van Luijtelaar, G. (2013). Peri-ictal network dynamics of spike-wave discharges: Phase and spectral characteristics. Experimental Neurology, 239, 235-247. doi:10.1016/j.expneurol.2012.10.021.

    Abstract

    Purpose: The brain is a highly interconnected neuronal assembly in which network analyses can greatly enlarge our knowledge of seizure generation. The cortico-thalamo-cortical network is the brain network of interest in absence epilepsy. Here, network synchronization is assessed in a genetic absence model during 5-s-long pre-ictal-to-ictal transition periods. Method: 16 male WAG/Rij rats were equipped with multiple electrodes targeting layers 4 to 6 of the somatosensory cortex, rostral and caudal RTN, VPM, and the anterior (ATN) and posterior (Po) thalamic nuclei. Local field potentials measured during pre-ictal-to-ictal transitions and during control periods were subjected to time-frequency and pairwise phase consistency analysis. Results: Pre-ictally, all channels showed Spike-Wave Discharge (SWD) precursor activity (increases in spectral power), which was earliest and most pronounced in the somatosensory cortex. The caudal RTN decoupled from VPM, Po and cortical layer 4. Strong increases in synchrony were found between cortex and thalamus during SWD. Although increases between cortex and VPM were seen in SWD frequencies and their harmonics, broader spectral increases (6-48 Hz) were seen between cortex and Po. All thalamic nuclei showed increased phase synchronization with Po but not with VPM. Conclusion: Absence seizures are not sudden and unpredictable phenomena: the somatosensory cortex shows the highest and earliest precursor activity. The pre-ictal decoupling of the caudal RTN might be a prerequisite of SWD generation. The Po nucleus might be the primary thalamic counterpart to the somatosensory cortex in the generation of the cortico-thalamo-cortical oscillations referred to as SWD.
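
    Pairwise phase consistency, the synchrony measure used in this study (in the sense of Vinck et al., 2010), has a convenient closed form: it is the mean cosine of the angular distance between all pairs of observed relative phases, and it lacks the positive small-sample bias of the phase-locking value. A minimal sketch follows; the function name is our own, and this is an illustration rather than the authors' analysis code.

```python
import numpy as np

def pairwise_phase_consistency(phase_diffs):
    """Mean cosine of the angular distance between all pairs of relative
    phases (one phase difference per trial or segment, in radians).

    Uses the identity
        sum over pairs of cos(t_j - t_k) = (|sum_j exp(i*t_j)|**2 - n) / 2,
    so no explicit O(n**2) pairwise loop is needed."""
    z = np.exp(1j * np.asarray(phase_diffs))
    n = z.size
    return (np.abs(z.sum()) ** 2 - n) / (n * (n - 1))

# Toy usage: tightly clustered phases give a value near 1, uniform near 0.
print(pairwise_phase_consistency(np.random.normal(0.0, 0.1, size=200)))
print(pairwise_phase_consistency(np.random.uniform(-np.pi, np.pi, size=200)))
```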
  • Mazzone, M., & Campisi, E. (2013). Distributed intentionality: A model of intentional behavior in humans. Philosophical Psychology, 26, 267-290. doi:10.1080/09515089.2011.641743.

    Abstract

    Is human behavior, and more specifically linguistic behavior, intentional? Some scholars have proposed that action is driven in a top-down manner by one single intention, i.e., one single conscious goal. Others have argued that actions are mostly non-intentional, insofar as often the single goal driving an action is not consciously represented. We intend to claim that both alternatives are unsatisfactory; more specifically, we claim that actions are intentional, but intentionality is distributed across complex goal-directed representations of action, rather than concentrated in single intentions driving action in a top-down manner. These complex representations encompass a multiplicity of goals, together with other components which are not goals themselves, and are the result of a largely automatic dynamic of activation; such automatic processing, however, does not preclude the involvement of conscious attention, shifting from one component to the other of the overall goal-directed representation.

  • Meyer, A. S., & Hagoort, P. (2013). What does it mean to predict one's own utterances? [Commentary on Pickering & Garrod]. Behavioral and Brain Sciences, 36, 367-368. doi:10.1017/S0140525X12002786.

    Abstract

    Many authors have recently highlighted the importance of prediction for language comprehension. Pickering & Garrod (P&G) are the first to propose a central role for prediction in language production. This is an intriguing idea, but it is not clear what it means for speakers to predict their own utterances, and how prediction during production can be empirically distinguished from production proper.
  • Minagawa-Kawai, Y., Cristia, A., Long, B., Vendelin, I., Hakuno, Y., Dutat, M., Filippin, L., Cabrol, D., & Dupoux, E. (2013). Insights on NIRS sensitivity from a cross-linguistic study on the emergence of phonological grammar. Frontiers in Psychology, 4: 170. doi:10.3389/fpsyg.2013.00170.

    Abstract

    Each language has a unique set of phonemic categories and phonotactic rules which determine permissible sound sequences in that language. Behavioral research demonstrates that one’s native language shapes the perception of both sound categories and sound sequences in adults, and neuroimaging results further indicate that the processing of native phonemes and phonotactics involves a left-dominant perisylvian brain network. Recent work using a novel technique, functional Near InfraRed Spectroscopy (NIRS), has suggested that a left-dominant network becomes evident toward the end of the first year of life as infants process phonemic contrasts. The present research project attempted to assess whether the same pattern would be seen for native phonotactics. We measured brain responses in Japanese- and French-learning infants to two contrasts: Abuna vs. Abna (a phonotactic contrast that is native in French, but not in Japanese) and Abuna vs. Abuuna (a vowel length contrast that is native in Japanese, but not in French). Results did not show a significant response to either contrast in either group, unlike both previous behavioral research on phonotactic processing and NIRS work on phonemic processing. To understand these null results, we performed similar NIRS experiments with Japanese adult participants. These data suggest that the infant null results arise from an interaction of multiple factors, involving the suitability of the experimental paradigm for NIRS measurements and stimulus perceptibility. We discuss the challenges facing this novel technique, particularly focusing on the optimal stimulus presentation which could yield strong enough hemodynamic responses when using the change detection paradigm.
  • Nieuwenhuis, I. L., Folia, V., Forkstam, C., Jensen, O., & Petersson, K. M. (2013). Sleep promotes the extraction of grammatical rules. PLoS One, 8(6): e65046. doi:10.1371/journal.pone.0065046.

    Abstract

    Grammar acquisition is a high level cognitive function that requires the extraction of complex rules. While it has been proposed that offline time might benefit this type of rule extraction, this remains to be tested. Here, we addressed this question using an artificial grammar learning paradigm. During a short-term memory cover task, eighty-one human participants were exposed to letter sequences generated according to an unknown artificial grammar. Following a time delay of 15 min, 12 h (wake or sleep) or 24 h, participants classified novel test sequences as Grammatical or Non-Grammatical. Previous behavioral and functional neuroimaging work has shown that classification can be guided by two distinct underlying processes: (1) the holistic abstraction of the underlying grammar rules and (2) the detection of sequence chunks that appear at varying frequencies during exposure. Here, we show that classification performance improved after sleep. Moreover, this improvement was due to an enhancement of rule abstraction, while the effect of chunk frequency was unaltered by sleep. These findings suggest that sleep plays a critical role in extracting complex structure from separate but related items during integrative memory processing. Our findings stress the importance of alternating periods of learning with sleep in settings in which complex information must be acquired.
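
    The chunk-based route the authors contrast with rule abstraction can be quantified directly. One common operationalization (an assumption on our part, not necessarily the exact measure used in this study) is associative chunk strength: the mean training-set frequency of a test string's bigram and trigram chunks. A minimal sketch:

```python
from collections import Counter

def chunks(seq, sizes=(2, 3)):
    """All bigram and trigram substrings of a letter sequence."""
    return [seq[i:i + n] for n in sizes for i in range(len(seq) - n + 1)]

def chunk_strength(test_seq, training_set):
    """Associative chunk strength: mean frequency in the training set of the
    test sequence's chunks. A high value means an item can look familiar
    even if it violates the underlying grammar rules, which is why test items
    are typically matched on this measure across conditions."""
    freq = Counter(c for s in training_set for c in chunks(s))
    cs = chunks(test_seq)
    return sum(freq[c] for c in cs) / len(cs)

# Toy usage with arbitrary letter strings (not the study's actual grammar).
training = ["MSVX", "MSXV", "VXMS", "MVXS"]
print(chunk_strength("MSVX", training))
```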
  • Ortega, G., & Ozyurek, A. (2013). Gesture-sign interface in hearing non-signers' first exposure to sign. In Proceedings of the Tilburg Gesture Research Meeting [TiGeR 2013].

    Abstract

    Natural sign languages and gestures are complex communicative systems that allow the incorporation of features of a referent into their structure. They differ, however, in that signs are more conventionalised because they consist of meaningless phonological parameters. There is some evidence that, although non-signers find iconic signs more memorable, they can have more difficulty articulating their exact phonological components. In the present study, hearing non-signers took part in a sign repetition task in which they had to imitate as accurately as possible a set of iconic and arbitrary signs. Their renditions showed that iconic signs were articulated significantly less accurately than arbitrary signs. Participants were invited back six months later to take part in a sign generation task, in which they were shown the English translation of the iconic signs they had imitated six months prior. For each word, participants were asked to generate a sign (i.e., an iconic gesture). The handshapes produced in the sign repetition and sign generation tasks were compared to detect instances in which both renditions presented the same configuration. There was a significant correlation between articulation accuracy in the sign repetition task and handshape overlap. These results suggest some form of gestural interference in the production of iconic signs by hearing non-signers. We also suggest that in some instances non-signers may deploy their own conventionalised gestures when producing some iconic signs. These findings are interpreted as evidence that non-signers process iconic signs as gestures, and that in production they will be capable of producing the phonological components of signs accurately only when sign and gesture have overlapping features.
  • Peeters, D., Chu, M., Holler, J., Ozyurek, A., & Hagoort, P. (2013). Getting to the point: The influence of communicative intent on the kinematics of pointing gestures. In M. Knauff, M. Pauen, N. Sebanz, & I. Wachsmuth (Eds.), Proceedings of the 35th Annual Meeting of the Cognitive Science Society (CogSci 2013) (pp. 1127-1132). Austin, TX: Cognitive Science Society.

    Abstract

    In everyday communication, people not only use speech but also hand gestures to convey information. One intriguing question in gesture research has been why gestures take the specific form they do. Previous research has identified the speaker-gesturer’s communicative intent as one factor shaping the form of iconic gestures. Here we investigate whether communicative intent also shapes the form of pointing gestures. In an experimental setting, twenty-four participants produced pointing gestures identifying a referent for an addressee. The communicative intent of the speaker-gesturer was manipulated by varying the informativeness of the pointing gesture. A second independent variable was the presence or absence of concurrent speech. As a function of their communicative intent and irrespective of the presence of speech, participants varied the durations of the stroke and the post-stroke hold-phase of their gesture. These findings add to our understanding of how the communicative context influences the form that a gesture takes.
  • Peeters, D., Dijkstra, T., & Grainger, J. (2013). The representation and processing of identical cognates by late bilinguals: RT and ERP effects. Journal of Memory and Language, 68, 315-332. doi:10.1016/j.jml.2012.12.003.

    Abstract

    Across the languages of a bilingual, translation equivalents can have the same orthographic form and shared meaning (e.g., TABLE in French and English). How such words, called orthographically identical cognates, are processed and represented in the bilingual brain is not well understood. In the present study, late French–English bilinguals processed such identical cognates and control words in an English lexical decision task. Both behavioral and electrophysiological data were collected. Reaction times to identical cognates were shorter than for non-cognate controls and depended on both English and French frequency. Cognates with a low English frequency showed a larger cognate advantage than those with a high English frequency. In addition, N400 amplitude was found to be sensitive to cognate status and both the English and French frequency of the cognate words. Theoretical consequences for the processing and representation of identical cognates are discussed.
