Publications

  • Bakker-Marshall, I., Takashima, A., Schoffelen, J.-M., Van Hell, J. G., Janzen, G., & McQueen, J. M. (2018). Theta-band oscillations in the middle temporal gyrus reflect novel word consolidation. Journal of Cognitive Neuroscience, 30(5), 621-633. doi:10.1162/jocn_a_01240.

    Abstract

    Like many other types of memory formation, novel word learning benefits from an offline consolidation period after the initial encoding phase. A previous EEG study has shown that retrieval of novel words elicited more word-like-induced electrophysiological brain activity in the theta band after consolidation [Bakker, I., Takashima, A., van Hell, J. G., Janzen, G., & McQueen, J. M. Changes in theta and beta oscillations as signatures of novel word consolidation. Journal of Cognitive Neuroscience, 27, 1286–1297, 2015]. This suggests that theta-band oscillations play a role in lexicalization, but it has not been demonstrated that this effect is directly caused by the formation of lexical representations. This study used magnetoencephalography to localize the theta consolidation effect to the left posterior middle temporal gyrus (pMTG), a region known to be involved in lexical storage. Both untrained novel words and words learned immediately before test elicited lower theta power during retrieval than existing words in this region. After a 24-hr consolidation period, the difference between novel and existing words decreased significantly, most strongly in the left pMTG. The magnitude of the decrease after consolidation correlated with an increase in behavioral competition effects between novel words and existing words with similar spelling, reflecting functional integration into the mental lexicon. These results thus provide new evidence that consolidation aids the development of lexical representations mediated by the left pMTG. Theta synchronization may enable lexical access by facilitating the simultaneous activation of distributed semantic, phonological, and orthographic representations that are bound together in the pMTG.
  • Berkers, R. M. W. J., Ekman, M., van Dongen, E. V., Takashima, A., Barth, M., Paller, K. A., & Fernández, G. (2018). Cued reactivation during slow-wave sleep induces brain connectivity changes related to memory stabilization. Scientific Reports, 8: 16958. doi:10.1038/s41598-018-35287-6.

    Abstract

    Memory reprocessing following acquisition enhances memory consolidation. Specifically, neural activity during encoding is thought to be ‘replayed’ during subsequent slow-wave sleep. Such memory replay is thought to contribute to the functional reorganization of neural memory traces. In particular, memory replay may facilitate the exchange of information across brain regions by inducing a reconfiguration of connectivity across the brain. Memory reactivation can be induced by external cues through a procedure known as “targeted memory reactivation”. Here, we analysed data from a published study with auditory cues used to reactivate visual object-location memories during slow-wave sleep. We characterized effects of memory reactivation on brain network connectivity using graph-theory. We found that cue presentation during slow-wave sleep increased global network integration of occipital cortex, a visual region that was also active during retrieval of object locations. Although cueing did not have an overall beneficial effect on the retention of cued versus uncued associations, individual differences in overnight memory stabilization were related to enhanced network integration of occipital cortex. Furthermore, occipital cortex displayed enhanced connectivity with mnemonic regions, namely the hippocampus, parahippocampal gyrus, thalamus and medial prefrontal cortex during cue sound presentation. Together, these results suggest a neural mechanism where cue-induced replay during sleep increases integration of task-relevant perceptual regions with mnemonic regions. This cross-regional integration may be instrumental for the consolidation and long-term storage of enduring memories.

    Additional information

    41598_2018_35287_MOESM1_ESM.doc
  • Dai, B., Chen, C., Long, Y., Zheng, L., Zhao, H., Bai, X., Liu, W., Zhang, Y., Liu, L., Guo, T., Ding, G., & Lu, C. (2018). Neural mechanisms for selectively tuning into the target speaker in a naturalistic noisy situation. Nature Communications, 9: 2405. doi:10.1038/s41467-018-04819-z.

    Abstract

    The neural mechanism for selectively tuning in to a target speaker while tuning out the others in a multi-speaker situation (i.e., the cocktail-party effect) remains elusive. Here we addressed this issue by measuring brain activity simultaneously from a listener and from multiple speakers while they were involved in naturalistic conversations. Results consistently show selectively enhanced interpersonal neural synchronization (INS) between the listener and the attended speaker at left temporal–parietal junction, compared with that between the listener and the unattended speaker across different multi-speaker situations. Moreover, INS increases significantly prior to the occurrence of verbal responses, and even when the listener’s brain activity precedes that of the speaker. The INS increase is independent of brain-to-speech synchronization in both the anatomical location and frequency range. These findings suggest that INS underlies the selective process in a multi-speaker situation through neural predictions at the content level but not the sensory level of speech.

    Additional information

    Dai_etal_2018_sup.pdf
  • Degand, L., & Van Bergen, G. (2018). Discourse markers as turn-transition devices: Evidence from speech and instant messaging. Discourse Processes, 55, 47-71. doi:10.1080/0163853X.2016.1198136.

    Abstract

    In this article we investigate the relation between discourse markers and turn-transition strategies in face-to-face conversations and Instant Messaging (IM), that is, unplanned, real-time, text-based, computer-mediated communication. By means of a quantitative corpus study of utterances containing a discourse marker, we show that utterance-final discourse markers are used more often in IM than in face-to-face conversations. Moreover, utterance-final discourse markers are shown to occur more often at points of turn-transition compared with points of turn-maintenance in both types of conversation. From our results we conclude that the discourse markers in utterance-final position can function as a turn-transition mechanism, signaling that the turn is over and the floor is open to the hearer. We argue that this linguistic turn-taking strategy is essentially similar in face-to-face and IM communication. Our results add to the evidence that communication in IM is more like speech than like writing.
  • Duarte, R., Uhlmann, M., Van den Broek, D., Fitz, H., Petersson, K. M., & Morrison, A. (2018). Encoding symbolic sequences with spiking neural reservoirs. In Proceedings of the 2018 International Joint Conference on Neural Networks (IJCNN). doi:10.1109/IJCNN.2018.8489114.

    Abstract

    Biologically inspired spiking networks are an important tool to study the nature of computation and cognition in neural systems. In this work, we investigate the representational capacity of spiking networks engaged in an identity mapping task. We compare two schemes for encoding symbolic input, one in which input is injected as a direct current and one where input is delivered as a spatio-temporal spike pattern. We test the ability of networks to discriminate their input as a function of the number of distinct input symbols. We also compare performance using either membrane potentials or filtered spike trains as state variable. Furthermore, we investigate how the circuit behavior depends on the balance between excitation and inhibition, and the degree of synchrony and regularity in its internal dynamics. Finally, we compare different linear methods of decoding population activity onto desired target labels. Overall, our results suggest that even this simple mapping task is strongly influenced by design choices on input encoding, state-variables, circuit characteristics and decoding methods, and these factors can interact in complex ways. This work highlights the importance of constraining computational network models of behavior by available neurobiological evidence.
  • Eekhof, L. S., Eerland, A., & Willems, R. M. (2018). Readers’ insensitivity to tense revealed: No differences in mental simulation during reading of present and past tense stories. Collabra: Psychology, 4(1): 16. doi:10.1525/collabra.121.

    Abstract

    While the importance of mental simulation during literary reading has long been recognized, we know little about the factors that determine when, what, and how much readers mentally simulate. Here we investigate the influence of a specific text characteristic, namely verb tense (present vs. past), on mental simulation during literary reading. Verbs usually denote the actions and events that take place in narratives and hence it is hypothesized that verb tense will influence the amount of mental simulation elicited in readers. Although the present tense is traditionally considered to be more “vivid”, this study is one of the first to experimentally assess this claim. We recorded eye-movements while subjects read stories in the past or present tense and collected data regarding self-reported levels of mental simulation, transportation and appreciation. We found no influence of tense on any of the offline measures. The eye-tracking data showed a slightly more complex pattern. Although we did not find a main effect of sensorimotor simulation content on reading times, we were able to link the degree to which subjects slowed down when reading simulation-eliciting content to offline measures of attention and transportation, but this effect did not interact with the tense of the story. Unexpectedly, we found a main effect of tense on reading times per word, with past tense stories eliciting longer first fixation durations and gaze durations. However, we were unable to link this effect to any of the offline measures. In sum, this study suggests that tense does not play a substantial role in the process of mental simulation elicited by literary stories.

    Additional information

    Data Accessibility
  • Eichert, N., Peeters, D., & Hagoort, P. (2018). Language-driven anticipatory eye movements in virtual reality. Behavior Research Methods, 50(3), 1102-1115. doi:10.3758/s13428-017-0929-z.

    Abstract

    Predictive language processing is often studied by measuring eye movements as participants look at objects on a computer screen while they listen to spoken sentences. The use of this variant of the visual world paradigm has shown that information encountered by a listener at a spoken verb can give rise to anticipatory eye movements to a target object, which is taken to indicate that people predict upcoming words. The ecological validity of such findings remains questionable, however, because these computer experiments used two-dimensional (2D) stimuli that are mere abstractions of real world objects. Here we present a visual world paradigm study in a three-dimensional (3D) immersive virtual reality environment. Despite significant changes in the stimulus material and the different mode of stimulus presentation, language-mediated anticipatory eye movements were observed. These findings thus indicate prediction of upcoming words in language comprehension in a more naturalistic setting where natural depth cues are preserved. Moreover, the results confirm the feasibility of using eye-tracking in rich and multimodal 3D virtual environments.

    Additional information

    13428_2017_929_MOESM1_ESM.docx
  • Ergin, R., Senghas, A., Jackendoff, R., & Gleitman, L. (2018). Structural cues for symmetry, asymmetry, and non-symmetry in Central Taurus Sign Language. In C. Cuskley, M. Flaherty, H. Little, L. McCrohon, A. Ravignani, & T. Verhoef (Eds.), Proceedings of the 12th International Conference on the Evolution of Language (EVOLANG XII) (pp. 104-106). Toruń, Poland: NCU Press. doi:10.12775/3991-1.025.
  • Ergin, R., Meir, I., Ilkbasaran, D., Padden, C., & Jackendoff, R. (2018). The development of argument structure in Central Taurus Sign Language. Sign Language Studies, 18(4), 612-639. doi:10.1353/sls.2018.0018.

    Abstract

    One of the fundamental issues for a language is its capacity to express argument structure unambiguously. This study presents evidence for the emergence and the incremental development of these basic mechanisms in a newly developing language, Central Taurus Sign Language. Our analyses identify universal patterns in both the emergence and development of these mechanisms and in language-specific trajectories.
  • Flecken, M., & Von Stutterheim, C. (2018). Sprache und Kognition: Sprachvergleichende und lernersprachliche Untersuchungen zur Ereigniskonzeptualisierung. In S. Schimke, & H. Hopp (Eds.), Sprachverarbeitung im Zweitspracherwerb (pp. 325-356). Berlin: De Gruyter. doi:10.1515/9783110456356-014.
  • Francisco, A. A., Takashima, A., McQueen, J. M., Van den Bunt, M., Jesse, A., & Groen, M. A. (2018). Adult dyslexic readers benefit less from visual input during audiovisual speech processing: fMRI evidence. Neuropsychologia, 117, 454-471. doi:10.1016/j.neuropsychologia.2018.07.009.

    Abstract

    The aim of the present fMRI study was to investigate whether typical and dyslexic adult readers differed in the neural correlates of audiovisual speech processing. We tested for Blood Oxygen-Level Dependent (BOLD) activity differences between these two groups in a 1-back task, as they processed written (word, illegal consonant strings) and spoken (auditory, visual and audiovisual) stimuli. When processing written stimuli, dyslexic readers showed reduced activity in the supramarginal gyrus, a region suggested to play an important role in phonological processing, but only when they processed strings of consonants, not when they read words. During the speech perception tasks, dyslexic readers were only slower than typical readers in their behavioral responses in the visual speech condition. Additionally, dyslexic readers presented reduced neural activation in the auditory, the visual, and the audiovisual speech conditions. The groups also differed in terms of superadditivity, with dyslexic readers showing decreased neural activation in the regions of interest. An additional analysis focusing on vision-related processing during the audiovisual condition showed diminished activation for the dyslexic readers in a fusiform gyrus cluster. Our results thus suggest that there are differences in audiovisual speech processing between dyslexic and normal readers. These differences might be explained by difficulties in processing the unisensory components of audiovisual speech, more specifically, dyslexic readers may benefit less from visual information during audiovisual speech processing than typical readers. Given that visual speech processing supports the development of phonological skills fundamental in reading, differences in processing of visual speech could contribute to differences in reading ability between typical and dyslexic readers.
  • Franken, M. K. (2018). Listening for speaking: Investigations of the relationship between speech perception and production. PhD Thesis, Radboud University, Nijmegen.

    Abstract

    Speaking and listening are complex tasks that we perform on a daily basis, almost without conscious effort. Interestingly, speaking almost never occurs without listening: whenever we speak, we at least hear our own speech. The research in this thesis is concerned with how the perception of our own speech influences our speaking behavior. We show that unconsciously, we actively monitor this auditory feedback of our own speech. This way, we can efficiently take action and adapt articulation when an error occurs and auditory feedback does not correspond to our expectation. Processing the auditory feedback of our speech does not, however, automatically affect speech production. It is subject to a number of constraints. For example, we do not just track auditory feedback, but also its consistency. If auditory feedback is more consistent over time, it has a stronger influence on speech production. In addition, we investigated how auditory feedback during speech is processed in the brain, using magnetoencephalography (MEG). The results suggest the involvement of a broad cortical network including both auditory and motor-related regions. This is consistent with the view that the auditory center of the brain is involved in comparing auditory feedback to our expectation of auditory feedback. If this comparison yields a mismatch, motor-related regions of the brain can be recruited to alter the ongoing articulations.

    Additional information

    full text via Radboud Repository
  • Franken, M. K., Eisner, F., Acheson, D. J., McQueen, J. M., Hagoort, P., & Schoffelen, J.-M. (2018). Self-monitoring in the cerebral cortex: Neural responses to pitch-perturbed auditory feedback during speech production. NeuroImage, 179, 326-336. doi:10.1016/j.neuroimage.2018.06.061.

    Abstract

    Speaking is a complex motor skill which requires near instantaneous integration of sensory and motor-related information. Current theory hypothesizes a complex interplay between motor and auditory processes during speech production, involving the online comparison of the speech output with an internally generated forward model. To examine the neural correlates of this intricate interplay between sensory and motor processes, the current study uses altered auditory feedback (AAF) in combination with magnetoencephalography (MEG). Participants vocalized the vowel /e/ and heard auditory feedback that was temporarily pitch-shifted by only 25 cents, while neural activity was recorded with MEG. As a control condition, participants also heard the recordings of the same auditory feedback that they heard in the first half of the experiment, now without vocalizing. The participants were not aware of any perturbation of the auditory feedback. We found auditory cortical areas responded more strongly to the pitch shifts during vocalization. In addition, auditory feedback perturbation resulted in spectral power increases in the θ and lower β bands, predominantly in sensorimotor areas. These results are in line with current models of speech production, suggesting auditory cortical areas are involved in an active comparison between a forward model's prediction and the actual sensory input. Subsequently, these areas interact with motor areas to generate a motor response. Furthermore, the results suggest that θ and β power increases support auditory-motor interaction, motor error detection and/or sensory prediction processing.
  • Franken, M. K., Acheson, D. J., McQueen, J. M., Hagoort, P., & Eisner, F. (2018). Opposing and following responses in sensorimotor speech control: Why responses go both ways. Psychonomic Bulletin & Review, 25(4), 1458-1467. doi:10.3758/s13423-018-1494-x.

    Abstract

    When talking, speakers continuously monitor and use the auditory feedback of their own voice to control and inform speech production processes. When speakers are provided with auditory feedback that is perturbed in real time, most of them compensate for this by opposing the feedback perturbation. But some speakers follow the perturbation. In the current study, we investigated whether the state of the speech production system at perturbation onset may determine what type of response (opposing or following) is given. The results suggest that whether a perturbation-related response is opposing or following depends on ongoing fluctuations of the production system: It initially responds by doing the opposite of what it was doing. This effect and the non-trivial proportion of following responses suggest that current production models are inadequate: They need to account for why responses to unexpected sensory feedback depend on the production-system’s state at the time of perturbation.
  • De Groot, A. M. B., & Hagoort, P. (Eds.). (2018). Research methods in psycholinguistics and the neurobiology of language: A practical guide. Oxford: Wiley.
  • Hagoort, P. (2018). Prerequisites for an evolutionary stance on the neurobiology of language. Current Opinion in Behavioral Sciences, 21, 191-194. doi:10.1016/j.cobeha.2018.05.012.
  • Hahn, L. E., Benders, T., Snijders, T. M., & Fikkert, P. (2018). Infants' sensitivity to rhyme in songs. Infant Behavior and Development, 52, 130-139. doi:10.1016/j.infbeh.2018.07.002.

    Abstract

    Children’s songs often contain rhyming words at phrase endings. In this study, we investigated whether infants can already recognize this phonological pattern in songs. Earlier studies using lists of spoken words were equivocal on infants’ spontaneous processing of rhymes (Hayes, Slater, & Brown, 2000; Jusczyk, Goodman, & Baumann, 1999). Songs, however, constitute an ecologically valid rhyming stimulus, which could allow for spontaneous processing of this phonological pattern in infants. Novel children’s songs with rhyming and non-rhyming lyrics using pseudo-words were presented to 35 9-month-old Dutch infants using the Headturn Preference Procedure. Infants on average listened longer to the non-rhyming songs, although around half of the infants exhibited a preference for the rhyming songs. These results highlight that infants have the processing abilities to benefit from their natural rhyming input for the development of their phonological abilities.
  • Hasson, U., Egidi, G., Marelli, M., & Willems, R. M. (2018). Grounding the neurobiology of language in first principles: The necessity of non-language-centric explanations for language comprehension. Cognition, 180(1), 135-157. doi:10.1016/j.cognition.2018.06.018.

    Abstract

    Recent decades have ushered in tremendous progress in understanding the neural basis of language. Most of our current knowledge on language and the brain, however, is derived from lab-based experiments that are far removed from everyday language use, and that are inspired by questions originating in linguistic and psycholinguistic contexts. In this paper we argue that in order to make progress, the field needs to shift its focus to understanding the neurobiology of naturalistic language comprehension. We present here a new conceptual framework for understanding the neurobiological organization of language comprehension. This framework is non-language-centered in the computational/neurobiological constructs it identifies, and focuses strongly on context. Our core arguments address three general issues: (i) the difficulty in extending language-centric explanations to discourse; (ii) the necessity of taking context as a serious topic of study, modeling it formally and acknowledging the limitations on external validity when studying language comprehension outside context; and (iii) the tenuous status of the language network as an explanatory construct. We argue that adopting this framework means that neurobiological studies of language will be less focused on identifying correlations between brain activity patterns and mechanisms postulated by psycholinguistic theories. Instead, they will be less self-referential and increasingly more inclined towards integration of language with other cognitive systems, ultimately doing more justice to the neurobiological organization of language and how it supports language as it is used in everyday life.
  • Hervais-Adelman, A., Egorova, N., & Golestani, N. (2018). Beyond bilingualism: Multilingual experience correlates with caudate volume. Brain Structure and Function, 223(7), 3495-3502. doi:10.1007/s00429-018-1695-0.

    Abstract

    The multilingual brain implements mechanisms that serve to select the appropriate language as a function of the communicative environment. Engaging these mechanisms on a regular basis appears to have consequences for brain structure and function. Studies have implicated the caudate nuclei as important nodes in polyglot language control processes, and have also shown structural differences in the caudate nuclei in bilingual compared to monolingual populations. However, the majority of published work has focused on the categorical differences between monolingual and bilingual individuals, and little is known about whether these findings extend to multilingual individuals, who have even greater language control demands. In the present paper, we present an analysis of the volume and morphology of the caudate nuclei, putamen, pallidum and thalami in 75 multilingual individuals who speak three or more languages. Volumetric analyses revealed a significant relationship between multilingual experience and right caudate volume, as well as a marginally significant relationship with left caudate volume. Vertex-wise analyses revealed a significant enlargement of dorsal and anterior portions of the left caudate nucleus, known to have connectivity with executive brain regions, as a function of multilingual expertise. These results suggest that multilingual expertise might exercise a continuous impact on brain structure, and that as additional languages beyond a second are acquired, the additional demands for linguistic and cognitive control result in modifications to brain structures associated with language management processes.
  • Hervais-Adelman, A., Moser-Mercer, B., & Golestani, N. (2018). Commentary: Broca pars triangularis constitutes a “hub” of the language-control network during simultaneous language translation. Frontiers in Human Neuroscience, 12: 22. doi:10.3389/fnhum.2018.00022.

    Abstract

    A commentary on Broca Pars Triangularis Constitutes a “Hub” of the Language-Control Network during Simultaneous Language Translation by Elmer, S. (2016). Front. Hum. Neurosci. 10:491. doi:10.3389/fnhum.2016.00491. Elmer (2016) conducted an fMRI investigation of “simultaneous language translation” in five participants. The article presents group and individual analyses of German-to-Italian and Italian-to-German translation, confined to a small set of anatomical regions previously reported to be involved in multilingual control. Here we take the opportunity to discuss concerns regarding certain aspects of the study.
  • Heyselaar, E., Mazaheri, A., Hagoort, P., & Segaert, K. (2018). Changes in alpha activity reveal that social opinion modulates attention allocation during face processing. NeuroImage, 174, 432-440. doi:10.1016/j.neuroimage.2018.03.034.

    Abstract

    Participants’ performance differs when conducting a task in the presence of a secondary individual; moreover, the opinion the participant has of this individual also plays a role. Using EEG, we investigated how previous interactions with, and evaluations of, an avatar in virtual reality subsequently influenced attentional allocation to the face of that avatar. We focused on changes in the alpha activity as an index of attentional allocation. We found that the onset of an avatar’s face whom the participant had developed a rapport with induced greater alpha suppression. This suggests greater attentional resources are allocated to the interacted-with avatars. The evaluative ratings of the avatar induced a U-shaped change in alpha suppression, such that participants paid most attention when the avatar was rated as average. These results suggest that attentional allocation is an important element of how behaviour is altered in the presence of a secondary individual and is modulated by our opinion of that individual.

    Additional information

    mmc1.docx
  • Huettig, F., Lachmann, T., Reis, A., & Petersson, K. M. (2018). Distinguishing cause from effect - Many deficits associated with developmental dyslexia may be a consequence of reduced and suboptimal reading experience. Language, Cognition and Neuroscience, 33(3), 333-350. doi:10.1080/23273798.2017.1348528.

    Abstract

    The cause of developmental dyslexia is still unknown despite decades of intense research. Many causal explanations have been proposed, based on the range of impairments displayed by affected individuals. Here we draw attention to the fact that many of these impairments are also shown by illiterate individuals who have not received any or very little reading instruction. We suggest that this fact may not be coincidental and that the performance differences of both illiterates and individuals with dyslexia compared to literate controls are, to a substantial extent, secondary consequences of either reduced or suboptimal reading experience or a combination of both. The search for the primary causes of reading impairments will make progress if the consequences of quantitative and qualitative differences in reading experience are better taken into account and not mistaken for the causes of reading disorders. We close by providing four recommendations for future research.
  • Inacio, F., Faisca, L., Forkstam, C., Araujo, S., Bramao, I., Reis, A., & Petersson, K. M. (2018). Implicit sequence learning is preserved in dyslexic children. Annals of Dyslexia, 68(1), 1-14. doi:10.1007/s11881-018-0158-x.

    Abstract

    This study investigates the implicit sequence learning abilities of dyslexic children using an artificial grammar learning task with an extended exposure period. Twenty children with developmental dyslexia participated in the study and were matched with two control groups—one matched for age and the other for reading skills. During 3 days, all participants performed an acquisition task, where they were exposed to colored geometrical form sequences with an underlying grammatical structure. On the last day, after the acquisition task, participants were tested in a grammaticality classification task. Implicit sequence learning was present in dyslexic children, as well as in both control groups, and no differences between groups were observed. These results suggest that implicit learning deficits per se cannot explain the characteristic reading difficulties of the dyslexics.
  • Jacobs, A. M., & Willems, R. M. (2018). The fictive brain: Neurocognitive correlates of engagement in literature. Review of General Psychology, 22(2), 147-160. doi:10.1037/gpr0000106.

    Abstract

    Fiction is vital to our being. Many people enjoy engaging with fiction every day. Here we focus on literary reading as 1 instance of fiction consumption from a cognitive neuroscience perspective. The brain processes which play a role in the mental construction of fiction worlds and the related engagement with fictional characters remain largely unknown. The authors discuss the neurocognitive poetics model (Jacobs, 2015a) of literary reading specifying the likely neuronal correlates of several key processes in literary reading, namely inference and situation model building, immersion, mental simulation and imagery, figurative language and style, and the issue of distinguishing fact from fiction. An overview of recent work on these key processes is followed by a discussion of methodological challenges in studying the brain bases of fiction processing.
  • Kösem, A., Bosker, H. R., Takashima, A., Meyer, A. S., Jensen, O., & Hagoort, P. (2018). Neural entrainment determines the words we hear. Current Biology, 28, 2867-2875. doi:10.1016/j.cub.2018.07.023.

    Abstract

    Low-frequency neural entrainment to rhythmic input has been hypothesized as a canonical mechanism that shapes sensory perception in time. Neural entrainment is deemed particularly relevant for speech analysis, as it would contribute to the extraction of discrete linguistic elements from continuous acoustic signals. However, its causal influence in speech perception has been difficult to establish. Here, we provide evidence that oscillations build temporal predictions about the duration of speech tokens that affect perception. Using magnetoencephalography (MEG), we studied neural dynamics during listening to sentences that changed in speech rate. We observed neural entrainment to preceding speech rhythms persisting for several cycles after the change in rate. The sustained entrainment was associated with changes in the perceived duration of the last word’s vowel, resulting in the perception of words with different meanings. These findings support oscillatory models of speech processing, suggesting that neural oscillations actively shape speech perception.
  • Lam, N. H. L., Hulten, A., Hagoort, P., & Schoffelen, J.-M. (2018). Robust neuronal oscillatory entrainment to speech displays individual variation in lateralisation. Language, Cognition and Neuroscience, 33(8), 943-954. doi:10.1080/23273798.2018.1437456.

    Abstract

    Neural oscillations may be instrumental for the tracking and segmentation of continuous speech. Earlier work has suggested that delta, theta and gamma oscillations entrain to the speech rhythm. We used magnetoencephalography and a large sample of 102 participants to investigate oscillatory entrainment to speech, and observed robust entrainment of delta and theta activity, and weak group-level gamma entrainment. We show that the peak frequency and the hemispheric lateralisation of the entrainment are subject to considerable individual variability. The first finding may support the involvement of intrinsic oscillations in entrainment, and the second finding suggests that there is no systematic default right-hemispheric bias for processing acoustic signals on a slow time scale. Although low frequency entrainment to speech is a robust phenomenon, the characteristics of entrainment vary across individuals, and this variation is important for understanding the underlying neural mechanisms of entrainment, as well as its functional significance.
  • Lewis, A. G., Schriefers, H., Bastiaansen, M., & Schoffelen, J.-M. (2018). Assessing the utility of frequency tagging for tracking memory-based reactivation of word representations. Scientific Reports, 8: 7897. doi:10.1038/s41598-018-26091-3.

    Abstract

    Reinstatement of memory-related neural activity measured with high temporal precision potentially provides a useful index for real-time monitoring of the timing of activation of memory content during cognitive processing. The utility of such an index extends to any situation where one is interested in the (relative) timing of activation of different sources of information in memory, a paradigm case of which is tracking lexical activation during language processing. Essential for this approach is that memory reinstatement effects are robust, so that their absence (in the average) definitively indicates that no lexical activation is present. We used electroencephalography to test the robustness of a reported subsequent memory finding involving reinstatement of frequency-specific entrained oscillatory brain activity during subsequent recognition. Participants learned lists of words presented on a background flickering at either 6 or 15 Hz to entrain a steady-state brain response. Target words subsequently presented on a non-flickering background that were correctly identified as previously seen exhibited reinstatement effects at both entrainment frequencies. Reliability of these statistical inferences was however critically dependent on the approach used for multiple comparisons correction. We conclude that effects are not robust enough to be used as a reliable index of lexical activation during language processing.

    Additional information

    Lewis_etal_2018sup.docx
  • Lopopolo, A., Frank, S. L., Van den Bosch, A., Nijhof, A., & Willems, R. M. (2018). The Narrative Brain Dataset (NBD), an fMRI dataset for the study of natural language processing in the brain. In B. Devereux, E. Shutova, & C.-R. Huang (Eds.), Proceedings of LREC 2018 Workshop "Linguistic and Neuro-Cognitive Resources (LiNCR)" (pp. 8-11). Paris: LREC.

    Abstract

    We present the Narrative Brain Dataset, an fMRI dataset that was collected during spoken presentation of short excerpts of three stories in Dutch. Together with the brain imaging data, the dataset contains the written versions of the stimulation texts. The texts are accompanied with stochastic (perplexity and entropy) and semantic computational linguistic measures. The richness and unconstrained nature of the data allows the study of language processing in the brain in a more naturalistic setting than is common for fMRI studies. We hope that by making NBD available we serve the double purpose of providing useful neural data to researchers interested in natural language processing in the brain and to further stimulate data sharing in the field of neuroscience of language.
  • Manahova, M. E., Mostert, P., Kok, P., Schoffelen, J.-M., & De Lange, F. P. (2018). Stimulus familiarity and expectation jointly modulate neural activity in the visual ventral stream. Journal of Cognitive Neuroscience, 30(9), 1366-1377. doi:10.1162/jocn_a_01281.

    Abstract

    Prior knowledge about the visual world can change how a visual stimulus is processed. Two forms of prior knowledge are often distinguished: stimulus familiarity (i.e., whether a stimulus has been seen before) and stimulus expectation (i.e., whether a stimulus is expected to occur, based on the context). Neurophysiological studies in monkeys have shown suppression of spiking activity both for expected and for familiar items in object-selective inferotemporal cortex. It is an open question, however, if and how these types of knowledge interact in their modulatory effects on the sensory response. To address this issue and to examine whether previous findings generalize to noninvasively measured neural activity in humans, we separately manipulated stimulus familiarity and expectation while noninvasively recording human brain activity using magnetoencephalography. We observed independent suppression of neural activity by familiarity and expectation, specifically in the lateral occipital complex, the putative human homologue of monkey inferotemporal cortex. Familiarity also led to sharpened response dynamics, which was predominantly observed in early visual cortex. Together, these results show that distinct types of sensory knowledge jointly determine the amount of neural resources dedicated to object processing in the visual ventral stream.
  • Meyer, A. S., Alday, P. M., Decuyper, C., & Knudsen, B. (2018). Working together: Contributions of corpus analyses and experimental psycholinguistics to understanding conversation. Frontiers in Psychology, 9: 525. doi:10.3389/fpsyg.2018.00525.

    Abstract

    As conversation is the most important way of using language, linguists and psychologists should combine forces to investigate how interlocutors deal with the cognitive demands arising during conversation. Linguistic analyses of corpora of conversation are needed to understand the structure of conversations, and experimental work is indispensable for understanding the underlying cognitive processes. We argue that joint consideration of corpus and experimental data is most informative when the utterances elicited in a lab experiment match those extracted from a corpus in relevant ways. This requirement to compare like with like seems obvious but is not trivial to achieve. To illustrate this approach, we report two experiments where responses to polar (yes/no) questions were elicited in the lab and the response latencies were compared to gaps between polar questions and answers in a corpus of conversational speech. We found, as expected, that responses were given faster when they were easy to plan and planning could be initiated earlier than when they were harder to plan and planning was initiated later. Overall, in all but one condition, the latencies were longer than one would expect based on the analyses of corpus data. We discuss the implication of this partial match between the data sets and more generally how corpus and experimental data can best be combined in studies of conversation.

    Additional information

    Data_Sheet_1.pdf
  • Nieuwland, M. S., Politzer-Ahles, S., Heyselaar, E., Segaert, K., Darley, E., Kazanina, N., Von Grebmer Zu Wolfsthurn, S., Bartolozzi, F., Kogan, V., Ito, A., Mézière, D., Barr, D. J., Rousselet, G., Ferguson, H. J., Busch-Moreno, S., Fu, X., Tuomainen, J., Kulakova, E., Husband, E. M., Donaldson, D. I., Kohút, Z., Rueschemeyer, S.-A., & Huettig, F. (2018). Large-scale replication study reveals a limit on probabilistic prediction in language comprehension. eLife, 7: e33468. doi:10.7554/eLife.33468.

    Abstract

    Do people routinely pre-activate the meaning and even the phonological form of upcoming words? The most acclaimed evidence for phonological prediction comes from a 2005 Nature Neuroscience publication by DeLong, Urbach and Kutas, who observed a graded modulation of electrical brain potentials (N400) to nouns and preceding articles by the probability that people use a word to continue the sentence fragment (‘cloze’). In our direct replication study spanning 9 laboratories (N=334), pre-registered replication-analyses and exploratory Bayes factor analyses successfully replicated the noun-results but, crucially, not the article-results. Pre-registered single-trial analyses also yielded a statistically significant effect for the nouns but not the articles. Exploratory Bayesian single-trial analyses showed that the article-effect may be non-zero but is likely far smaller than originally reported and too small to observe without very large sample sizes. Our results do not support the view that readers routinely pre-activate the phonological form of predictable words.

    Additional information

    Data sets
  • Niso, G., Gorgolewski, K. J., Bock, E., Brooks, T. L., Flandin, G., Gramfort, A., Henson, R. N., Jas, M., Litvak, V., Moreau, J. T., Oostenveld, R., Schoffelen, J.-M., Tadel, F., Wexler, J., & Baillet, S. (2018). MEG-BIDS, the brain imaging data structure extended to magnetoencephalography. Scientific Data, 5: 180110. doi:10.1038/sdata.2018.110.

    Abstract

    We present a significant extension of the Brain Imaging Data Structure (BIDS) to support the specific aspects of magnetoencephalography (MEG) data. MEG measures brain activity with millisecond temporal resolution and unique source imaging capabilities. So far, BIDS was a solution to organise magnetic resonance imaging (MRI) data. The nature and acquisition parameters of MRI and MEG data are strongly dissimilar. Although there is no standard data format for MEG, we propose MEG-BIDS as a principled solution to store, organise, process and share the multidimensional data volumes produced by the modality. The standard also includes well-defined metadata, to facilitate future data harmonisation and sharing efforts. This responds to unmet needs from the multimodal neuroimaging community and paves the way to further integration of other techniques in electrophysiology. MEG-BIDS builds on MRI-BIDS, extending BIDS to a multimodal data structure. We feature several data-analytics software packages that have adopted MEG-BIDS, and a diverse sample of open MEG-BIDS data resources available to everyone.
  • Palva, J. M., Wang, S. H., Palva, S., Zhigalov, A., Monto, S., Brookes, M. J., & Schoffelen, J.-M. (2018). Ghost interactions in MEG/EEG source space: A note of caution on inter-areal coupling measures. NeuroImage, 173, 632-643. doi:10.1016/j.neuroimage.2018.02.032.

    Abstract

    When combined with source modeling, magneto- (MEG) and electroencephalography (EEG) can be used to study long-range interactions among cortical processes non-invasively. Estimation of such inter-areal connectivity is nevertheless hindered by instantaneous field spread and volume conduction, which artificially introduce linear correlations and impair source separability in cortical current estimates. To overcome the inflating effects of linear source mixing inherent to standard interaction measures, alternative phase- and amplitude-correlation based connectivity measures, such as imaginary coherence and orthogonalized amplitude correlation have been proposed. Being by definition insensitive to zero-lag correlations, these techniques have become increasingly popular in the identification of correlations that cannot be attributed to field spread or volume conduction. We show here, however, that while these measures are immune to the direct effects of linear mixing, they may still reveal large numbers of spurious false positive connections through field spread in the vicinity of true interactions. This fundamental problem affects both region-of-interest-based analyses and all-to-all connectome mappings. Most importantly, beyond defining and illustrating the problem of spurious, or “ghost” interactions, we provide a rigorous quantification of this effect through extensive simulations. Additionally, we further show that signal mixing also significantly limits the separability of neuronal phase and amplitude correlations. We conclude that spurious correlations must be carefully considered in connectivity analyses in MEG/EEG source space even when using measures that are immune to zero-lag correlations.
  • Pascucci, D., Hervais-Adelman, A., & Plomp, G. (2018). Gating by induced Alpha–Gamma asynchrony in selective attention. Human Brain Mapping, 39(10), 3854-3870. doi:10.1002/hbm.24216.

    Abstract

    Visual selective attention operates through top–down mechanisms of signal enhancement and suppression, mediated by alpha-band oscillations. The effects of such top–down signals on local processing in primary visual cortex (V1) remain poorly understood. In this work, we characterize the interplay between large-scale interactions and local activity changes in V1 that orchestrates selective attention, using Granger-causality and phase-amplitude coupling (PAC) analysis of EEG source signals. The task required participants to either attend to or ignore oriented gratings. Results from time-varying, directed connectivity analysis revealed frequency-specific effects of attentional selection: bottom–up gamma-band influences from visual areas increased rapidly in response to attended stimuli while distributed top–down alpha-band influences originated from parietal cortex in response to ignored stimuli. Importantly, the results revealed a critical interplay between top–down parietal signals and alpha–gamma PAC in visual areas. Parietal alpha-band influences disrupted the alpha–gamma coupling in visual cortex, which in turn reduced the amount of gamma-band outflow from visual areas. Our results are a first demonstration of how directed interactions affect cross-frequency coupling in downstream areas depending on task demands. These findings suggest that parietal cortex realizes selective attention by disrupting cross-frequency coupling at target regions, which prevents them from propagating task-irrelevant information.
  • Peeters, D. (2018). A standardized set of 3D-objects for virtual reality research and applications. Behavior Research Methods, 50(3), 1047-1054. doi:10.3758/s13428-017-0925-3.

    Abstract

    The use of immersive virtual reality as a research tool is rapidly increasing in numerous scientific disciplines. By combining ecological validity with strict experimental control, immersive virtual reality provides the potential to develop and test scientific theory in rich environments that closely resemble everyday settings. This article introduces the first standardized database of colored three-dimensional (3D) objects that can be used in virtual reality and augmented reality research and applications. The 147 objects have been normed for name agreement, image agreement, familiarity, visual complexity, and corresponding lexical characteristics of the modal object names. The availability of standardized 3D-objects for virtual reality research is important, as reaching valid theoretical conclusions critically hinges on the use of well controlled experimental stimuli. Sharing standardized 3D-objects across different virtual reality labs will allow for science to move forward more quickly.
  • Peeters, D., & Dijkstra, T. (2018). Sustained inhibition of the native language in bilingual language production: A virtual reality approach. Bilingualism: Language and Cognition, 21(5), 1035-1061. doi:10.1017/S1366728917000396.

    Abstract

    Bilinguals often switch languages as a function of the language background of their addressee. The control mechanisms supporting bilinguals' ability to select the contextually appropriate language are heavily debated. Here we present four experiments in which unbalanced bilinguals named pictures in their first language Dutch and their second language English in mixed and blocked contexts. Immersive virtual reality technology was used to increase the ecological validity of the cued language-switching paradigm. Behaviorally, we consistently observed symmetrical switch costs, reversed language dominance, and asymmetrical mixing costs. These findings indicate that unbalanced bilinguals apply sustained inhibition to their dominant L1 in mixed language settings. Consequent enhanced processing costs for the L1 in a mixed versus a blocked context were reflected by a sustained positive component in event-related potentials. Methodologically, the use of virtual reality opens up a wide range of possibilities to study language and communication in bilingual and other communicative settings.
  • Piai, V., Rommers, J., & Knight, R. T. (2018). Lesion evidence for a critical role of left posterior but not frontal areas in alpha–beta power decreases during context-driven word production. European Journal of Neuroscience, 48(7), 2622-2629. doi:10.1111/ejn.13695.

    Abstract

    Different frequency bands in the electroencephalogram are postulated to support distinct language functions. Studies have suggested that alpha–beta power decreases may index word-retrieval processes. In context-driven word retrieval, participants hear lead-in sentences that either constrain the final word (‘He locked the door with the’) or not (‘She walked in here with the’). The last word is shown as a picture to be named. Previous studies have consistently found alpha–beta power decreases prior to picture onset for constrained relative to unconstrained sentences, localised to the left lateral-temporal and lateral-frontal lobes. However, the relative contribution of temporal versus frontal areas to alpha–beta power decreases is unknown. We recorded the electroencephalogram from patients with stroke lesions encompassing the left lateral-temporal and inferior-parietal regions or left-lateral frontal lobe and from matched controls. Individual participant analyses revealed a behavioural sentence context facilitation effect in all participants, except for in the two patients with extensive lesions to temporal and inferior parietal lobes. We replicated the alpha–beta power decreases prior to picture onset in all participants, except for in the two same patients with extensive posterior lesions. Thus, whereas posterior lesions eliminated the behavioural and oscillatory context effect, frontal lesions did not. Hierarchical clustering analyses of all patients’ lesion profiles, and behavioural and electrophysiological effects identified those two patients as having a unique combination of lesion distribution and context effects. These results indicate a critical role for the left lateral-temporal and inferior parietal lobes, but not frontal cortex, in generating the alpha–beta power decreases underlying context-driven word production.
  • Poletiek, F. H., Conway, C. M., Ellefson, M. R., Lai, J., Bocanegra, B. R., & Christiansen, M. H. (2018). Under what conditions can recursion be learned? Effects of starting small in artificial grammar learning of recursive structure. Cognitive Science, 42(8), 2855-2889. doi:10.1111/cogs.12685.

    Abstract

    It has been suggested that external and/or internal limitations paradoxically may lead to superior learning, that is, the concepts of starting small and less is more (Elman, 1993; Newport, 1990). In this paper, we explore the type of incremental ordering during training that might help learning, and what mechanism explains this facilitation. We report four artificial grammar learning experiments with human participants. In Experiments 1a and 1b we found a beneficial effect of starting small using two types of simple recursive grammars: right‐branching and center‐embedding, with recursive embedded clauses in fixed positions and fixed length. This effect was replicated in Experiment 2 (N = 100). In Experiments 3 and 4, we used a more complex center‐embedded grammar with recursive loops in variable positions, producing strings of variable length. When participants were presented an incremental ordering of training stimuli, as in natural language, they were better able to generalize their knowledge of simple units to more complex units when the training input “grew” according to structural complexity, compared to when it “grew” according to string length. Overall, the results suggest that starting small confers an advantage for learning complex center‐embedded structures when the input is organized according to structural complexity.
  • Popov, T., Jensen, O., & Schoffelen, J.-M. (2018). Dorsal and ventral cortices are coupled by cross-frequency interactions during working memory. NeuroImage, 178, 277-286. doi:10.1016/j.neuroimage.2018.05.054.

    Abstract

    Oscillatory activity in the alpha and gamma bands is considered key in shaping functional brain architecture. Power increases in the high-frequency gamma band are typically reported in parallel to decreases in the low-frequency alpha band. However, their functional significance and in particular their interactions are not well understood. The present study shows that, in the context of an N-back working memory task, alpha power decreases in the dorsal visual stream are related to gamma power increases in early visual areas. Granger causality analysis revealed directed interregional interactions from dorsal to ventral stream areas, in accordance with task demands. Present results reveal a robust, behaviorally relevant, and architectonically decisive power-to-power relationship between alpha and gamma activity. This relationship suggests that anatomically distant power fluctuations in oscillatory activity can link cerebral network dynamics on a trial-by-trial basis during cognitive operations such as working memory.
  • Popov, T., Oostenveld, R., & Schoffelen, J.-M. (2018). FieldTrip made easy: An analysis protocol for group analysis of the auditory steady state brain response in time, frequency, and space. Frontiers in Neuroscience, 12: 711. doi:10.3389/fnins.2018.00711.

    Abstract

    The auditory steady state evoked response (ASSR) is a robust and frequently utilized phenomenon in psychophysiological research. It reflects the auditory cortical response to an amplitude-modulated constant carrier frequency signal. The present report provides a concrete example of a group analysis of the EEG data from 29 healthy human participants, recorded during an ASSR paradigm, using the FieldTrip toolbox. First, we demonstrate sensor-level analysis in the time domain, allowing for a description of the event-related potentials (ERPs), as well as their statistical evaluation. Second, frequency analysis is applied to describe the spectral characteristics of the ASSR, followed by group level statistical analysis in the frequency domain. Third, we show how time- and frequency-domain analysis approaches can be combined in order to describe the temporal and spectral development of the ASSR. Finally, we demonstrate source reconstruction techniques to characterize the primary neural generators of the ASSR. Throughout, we pay special attention to explaining the design of the analysis pipeline for single subjects and for the group level analysis. The pipeline presented here can be adjusted to accommodate other experimental paradigms and may serve as a template for similar analyses.
  • Rommers, J., & Federmeier, K. D. (2018). Electrophysiological methods. In A. M. B. De Groot, & P. Hagoort (Eds.), Research methods in psycholinguistics and the neurobiology of language: A practical guide (pp. 247-265). Hoboken: Wiley.
  • Rommers, J., & Federmeier, K. D. (2018). Lingering expectations: A pseudo-repetition effect for words previously expected but not presented. NeuroImage, 183, 263-272. doi:10.1016/j.neuroimage.2018.08.023.

    Abstract

    Prediction can help support rapid language processing. However, it is unclear whether prediction has downstream consequences, beyond processing in the moment. In particular, when a prediction is disconfirmed, does it linger, or is it suppressed? This study manipulated whether words were actually seen or were only expected, and probed their fate in memory by presenting the words (again) a few sentences later. If disconfirmed predictions linger, subsequent processing of the previously expected (but never presented) word should be similar to actual word repetition. At initial presentation, electrophysiological signatures of prediction disconfirmation demonstrated that participants had formed expectations. Further downstream, relative to unseen words, repeated words elicited a strong N400 decrease, an enhanced late positive complex (LPC), and late alpha band power decreases. Critically, like repeated words, words previously expected but not presented also attenuated the N400. This “pseudorepetition effect” suggests that disconfirmed predictions can linger at some stages of processing, and demonstrates that prediction has downstream consequences beyond rapid on-line processing.
  • Rommers, J., & Federmeier, K. D. (2018). Predictability's aftermath: Downstream consequences of word predictability as revealed by repetition effects. Cortex, 101, 16-30. doi:10.1016/j.cortex.2017.12.018.

    Abstract

    Stimulus processing in language and beyond is shaped by context, with predictability having a particularly well-attested influence on the rapid processes that unfold during the presentation of a word. But does predictability also have downstream consequences for the quality of the constructed representations? On the one hand, the ease of processing predictable words might free up time or cognitive resources, allowing for relatively thorough processing of the input. On the other hand, predictability might allow the system to run in a top-down “verification mode”, at the expense of thorough stimulus processing. This electroencephalogram (EEG) study manipulated word predictability, which reduced N400 amplitude and inter-trial phase clustering (ITPC), and then probed the fate of the (un)predictable words in memory by presenting them again. More thorough processing of predictable words should increase repetition effects, whereas less thorough processing should decrease them. Repetition was reflected in N400 decreases, late positive complex (LPC) enhancements, and late alpha/beta band power decreases. Critically, prior predictability tended to reduce the repetition effect on the N400, suggesting less priming, and eliminated the repetition effect on the LPC, suggesting a lack of episodic recollection. These findings converge on a top-down verification account, on which the brain processes more predictable input less thoroughly. More generally, the results demonstrate that predictability has multifaceted downstream consequences beyond processing in the moment.
  • Seeliger, K., Fritsche, M., Güçlü, U., Schoenmakers, S., Schoffelen, J.-M., Bosch, S. E., & Van Gerven, M. A. J. (2018). Convolutional neural network-based encoding and decoding of visual object recognition in space and time. NeuroImage, 180, 253-266. doi:10.1016/j.neuroimage.2017.07.018.

    Abstract

    Representations learned by deep convolutional neural networks (CNNs) for object recognition are a widely investigated model of the processing hierarchy in the human visual system. Using functional magnetic resonance imaging, CNN representations of visual stimuli have previously been shown to correspond to processing stages in the ventral and dorsal streams of the visual system. Whether this correspondence between models and brain signals also holds for activity acquired at high temporal resolution has been explored less exhaustively. Here, we addressed this question by combining CNN-based encoding models with magnetoencephalography (MEG). Human participants passively viewed 1,000 images of objects while MEG signals were acquired. We modelled their high temporal resolution source-reconstructed cortical activity with CNNs, and observed a feed-forward sweep across the visual hierarchy between 75 and 200 ms after stimulus onset. This spatiotemporal cascade was captured by the network layer representations, where the increasingly abstract stimulus representation in the hierarchical network model was reflected in different parts of the visual cortex, following the visual ventral stream. We further validated the accuracy of our encoding model by decoding stimulus identity in a left-out validation set of viewed objects, achieving state-of-the-art decoding accuracy.
  • Segaert, K., Mazaheri, A., & Hagoort, P. (2018). Binding language: Structuring sentences through precisely timed oscillatory mechanisms. European Journal of Neuroscience, 48(7), 2651-2662. doi:10.1111/ejn.13816.

    Abstract

    Syntactic binding refers to combining words into larger structures. Using EEG, we investigated the neural processes involved in syntactic binding. Participants were auditorily presented two-word sentences (i.e. pronoun and pseudoverb such as ‘I grush’, ‘she grushes’, for which syntactic binding can take place) and wordlists (i.e. two pseudoverbs such as ‘pob grush’, ‘pob grushes’, for which no binding occurs). Comparing these two conditions, we targeted syntactic binding while minimizing contributions of semantic binding and of other cognitive processes such as working memory. We found a converging pattern of results using two distinct analysis approaches: one approach using frequency bands as defined in previous literature, and one data-driven approach in which we looked at the entire range of frequencies between 3-30 Hz without the constraints of pre-defined frequency bands. In the syntactic binding (relative to the wordlist) condition, a power increase was observed in the alpha and beta frequency range shortly preceding the presentation of the target word that requires binding, which was maximal over frontal-central electrodes. Our interpretation is that these signatures reflect that language comprehenders expect the need for binding to occur. Following the presentation of the target word in a syntactic binding context (relative to the wordlist condition), an increase in alpha power maximal over a left lateralized cluster of frontal-temporal electrodes was observed. We suggest that this alpha increase relates to syntactic binding taking place. Taken together, our findings suggest that increases in alpha and beta power are reflections of the distinct neural processes underlying syntactic binding.
  • Silva, S., Folia, V., Inácio, F., Castro, S. L., & Petersson, K. M. (2018). Modality effects in implicit artificial grammar learning: An EEG study. Brain Research, 1687, 50-59. doi:10.1016/j.brainres.2018.02.020.

    Abstract

    Recently, it has been proposed that sequence learning engages a combination of modality-specific operating networks and modality-independent computational principles. In the present study, we compared the behavioural and EEG outcomes of implicit artificial grammar learning in the visual vs. auditory modality. We controlled for the influence of surface characteristics of sequences (Associative Chunk Strength), thus focusing on the strictly structural aspects of sequence learning, and we adapted the paradigms to compensate for known frailties of the visual modality compared to audition (temporal presentation, fast presentation rate). The behavioural outcomes were similar across modalities. Favouring the idea of modality-specificity, ERPs in response to grammar violations differed in topography and latency (earlier and more anterior component in the visual modality), and ERPs in response to surface features emerged only in the auditory modality. In favour of modality-independence, we observed three common functional properties in the late ERPs of the two grammars: both were free of interactions between structural and surface influences, both were more extended in a grammaticality classification test than in a preference classification test, and both correlated positively and strongly with theta event-related-synchronization during baseline testing. Our findings support the idea of modality-specificity combined with modality-independence, and suggest that memory for visual vs. auditory sequences may largely contribute to cross-modal differences.
  • Sjerps, M. J., Zhang, C., & Peng, G. (2018). Lexical Tone is Perceived Relative to Locally Surrounding Context, Vowel Quality to Preceding Context. Journal of Experimental Psychology: Human Perception and Performance, 44(6), 914-924. doi:10.1037/xhp0000504.

    Abstract

    Important speech cues such as lexical tone and vowel quality are perceptually contrasted to the distribution of those same cues in surrounding contexts. However, it is unclear whether preceding and following contexts have similar influences, and to what extent those influences are modulated by the auditory history of previous trials. To investigate this, Cantonese participants labeled sounds from (a) a tone continuum (mid- to high-level), presented with a context that had raised or lowered F0 values and (b) a vowel quality continuum (/u/ to /o/), where the context had raised or lowered F1 values. Contexts with high or low F0/F1 were presented in separate blocks or intermixed in 1 block. Contexts were presented following (Experiment 1) or preceding the target continuum (Experiment 2). Contrastive effects were found for both tone and vowel quality (e.g., decreased F0 values in contexts lead to more high tone target judgments and vice versa). Importantly, however, lexical tone was only influenced by F0 in immediately preceding and following contexts. Vowel quality was only influenced by the F1 in preceding contexts, but this extended to contexts from preceding trials. Contextual influences on tone and vowel quality are qualitatively different, which has important implications for understanding the mechanism of context effects in speech perception.
  • Stolk, A., Griffin, S., Van der Meij, R., Dewar, C., Saez, I., Lin, J. J., Piantoni, G., Schoffelen, J.-M., Knight, R. T., & Oostenveld, R. (2018). Integrated analysis of anatomical and electrophysiological human intracranial data. Nature Protocols, 13, 1699-1723. doi:10.1038/s41596-018-0009-6.

    Abstract

    Human intracranial electroencephalography (iEEG) recordings provide data with much greater spatiotemporal precision than is possible from data obtained using scalp EEG, magnetoencephalography (MEG), or functional MRI. Until recently, the fusion of anatomical data (MRI and computed tomography (CT) images) with electrophysiological data and their subsequent analysis have required the use of technologically and conceptually challenging combinations of software. Here, we describe a comprehensive protocol that enables complex raw human iEEG data to be converted into more readily comprehensible illustrative representations. The protocol uses an open-source toolbox for electrophysiological data analysis (FieldTrip). This allows iEEG researchers to build on a continuously growing body of scriptable and reproducible analysis methods that, over the past decade, have been developed and used by a large research community. In this protocol, we describe how to analyze complex iEEG datasets by providing an intuitive and rapid approach that can handle both neuroanatomical information and large electrophysiological datasets. We provide a worked example using an example dataset. We also explain how to automate the protocol and adjust the settings to enable analysis of iEEG datasets with other characteristics. The protocol can be implemented by a graduate student or postdoctoral fellow with minimal MATLAB experience and takes approximately an hour to execute, excluding the automated cortical surface extraction.
  • Tan, Y., & Martin, R. C. (2018). Verbal short-term memory capacities and executive function in semantic and syntactic interference resolution during sentence comprehension: Evidence from aphasia. Neuropsychologia, 113, 111-125. doi:10.1016/j.neuropsychologia.2018.03.001.

    Abstract

    This study examined the role of verbal short-term memory (STM) and executive function (EF) underlying semantic and syntactic interference resolution during sentence comprehension for persons with aphasia (PWA) with varying degrees of STM and EF deficits. Semantic interference was manipulated by varying the semantic plausibility of the intervening NP as subject of the verb and syntactic interference was manipulated by varying whether the NP was another subject or an object. Nine PWA were assessed on sentence reading times and on comprehension question performance. PWA showed exaggerated semantic and syntactic interference effects relative to healthy age-matched control subjects. Importantly, correlational analyses showed that while answering comprehension questions, PWA’s semantic STM capacity related to their ability to resolve semantic but not syntactic interference. In contrast, PWA’s EF abilities related to their ability to resolve syntactic but not semantic interference. Phonological STM deficits were not related to the ability to resolve either type of interference. The results for semantic STM are consistent with prior findings indicating a role for semantic but not phonological STM in sentence comprehension, specifically with regard to maintaining semantic information prior to integration. The results for syntactic interference are consistent with the recent findings suggesting that EF is critical for syntactic processing.
  • Tromp, J., Peeters, D., Meyer, A. S., & Hagoort, P. (2018). The combined use of Virtual Reality and EEG to study language processing in naturalistic environments. Behavior Research Methods, 50(2), 862-869. doi:10.3758/s13428-017-0911-9.

    Abstract

    When we comprehend language, we often do this in rich settings in which we can use many cues to understand what someone is saying. However, it has traditionally been difficult to design experiments with rich three-dimensional contexts that resemble our everyday environments, while maintaining control over the linguistic and non-linguistic information that is available. Here we test the validity of combining electroencephalography (EEG) and Virtual Reality (VR) to overcome this problem. We recorded electrophysiological brain activity during language processing in a well-controlled three-dimensional virtual audiovisual environment. Participants were immersed in a virtual restaurant, while wearing EEG equipment. In the restaurant participants encountered virtual restaurant guests. Each guest was seated at a separate table with an object on it (e.g. a plate with salmon). The restaurant guest would then produce a sentence (e.g. “I just ordered this salmon.”). The noun in the spoken sentence could either match (“salmon”) or mismatch (“pasta”) with the object on the table, creating a situation in which the auditory information was either appropriate or inappropriate in the visual context. We observed a reliable N400 effect as a consequence of the mismatch. This finding validates the combined use of VR and EEG as a tool to study the neurophysiological mechanisms of everyday language comprehension in rich, ecologically valid settings.
  • Udden, J., & Männel, C. (2018). Artificial grammar learning and its neurobiology in relation to language processing and development. In S.-A. Rueschemeyer, & M. G. Gaskell (Eds.), The Oxford Handbook of Psycholinguistics (2nd ed., pp. 755-783). Oxford: Oxford University Press.

    Abstract

    The artificial grammar learning (AGL) paradigm enables systematic investigation of the acquisition of linguistically relevant structures. It is a paradigm of interest for language processing research, interfacing with theoretical linguistics, and for comparative research on language acquisition and evolution. This chapter presents a key for understanding major variants of the paradigm. An unbiased summary of neuroimaging findings of AGL is presented, using meta-analytic methods, pointing to the crucial involvement of the bilateral frontal operculum and regions in the right lateral hemisphere. Against a background of robust posterior temporal cortex involvement in processing complex syntax, the evidence for involvement of the posterior temporal cortex in AGL is reviewed. Infant AGL studies testing for neural substrates are reviewed, covering the acquisition of adjacent and non-adjacent dependencies as well as algebraic rules. The language acquisition data suggest that comparisons of learnability of complex grammars performed with adults may now also be possible with children.
  • Van den Broek, G., Takashima, A., Segers, E., & Verhoeven, L. (2018). Contextual Richness and Word Learning: Context Enhances Comprehension but Retrieval Enhances Retention. Language Learning, 68(2), 546-585. doi:10.1111/lang.12285.

    Abstract

    Learning new vocabulary from context typically requires multiple encounters during which word meaning can be retrieved from memory or inferred from context. We compared the effect of memory retrieval and context inferences on short‐ and long‐term retention in three experiments. Participants studied novel words and then practiced the words either in an uninformative context that required the retrieval of word meaning from memory (“I need the funguo”) or in an informative context from which word meaning could be inferred (“I want to unlock the door: I need the funguo”). The informative context facilitated word comprehension during practice. However, later recall of word form and meaning and word recognition in a new context were better after successful retrieval practice and retrieval practice with feedback than after context‐inference practice. These findings suggest benefits of retrieval during contextualized vocabulary learning whereby the uninformative context enhanced word retention by triggering memory retrieval.
  • Van Bergen, G., & Bosker, H. R. (2018). Linguistic expectation management in online discourse processing: An investigation of Dutch inderdaad 'indeed' and eigenlijk 'actually'. Journal of Memory and Language, 103, 191-209. doi:10.1016/j.jml.2018.08.004.

    Abstract

    Interpersonal discourse particles (DPs), such as Dutch inderdaad (≈‘indeed’) and eigenlijk (≈‘actually’) are highly frequent in everyday conversational interaction. Despite extensive theoretical descriptions of their polyfunctionality, little is known about how they are used by language comprehenders. In two visual world eye-tracking experiments involving an online dialogue completion task, we asked to what extent inderdaad, confirming an inferred expectation, and eigenlijk, contrasting with an inferred expectation, influence real-time understanding of dialogues. Answers in the dialogues contained a DP or a control adverb, and a critical discourse referent was replaced by a beep; participants chose the most likely dialogue completion by clicking on one of four referents in a display. Results show that listeners make rapid and fine-grained situation-specific inferences about the use of DPs, modulating their expectations about how the dialogue will unfold. Findings further specify and constrain theories about the conversation-managing function and polyfunctionality of DPs.
  • Van Campen, A. D., Kunert, R., Van den Wildenberg, W. P. M., & Ridderinkhof, K. R. (2018). Repetitive transcranial magnetic stimulation over inferior frontal cortex impairs the suppression (but not expression) of action impulses during action conflict. Psychophysiology, 55(3): e13003. doi:10.1111/psyp.13003.

    Abstract

    In the recent literature, the effects of noninvasive neurostimulation on cognitive functioning appear to lack consistency and replicability. We propose that such effects may be concealed unless dedicated, sensitive, and process-specific dependent measures are used. The expression and subsequent suppression of response capture are often studied using conflict tasks. Response-time distribution analyses have been argued to provide specific measures of the susceptibility to make fast impulsive response errors, as well as the proficiency of the selective suppression of these impulses. These measures of response capture and response inhibition are particularly sensitive to experimental manipulations and clinical deficiencies that are typically obfuscated in commonly used overall performance analyses. Recent work using structural and functional imaging techniques links these behavioral outcome measures to the integrity of frontostriatal networks. These studies suggest that the presupplementary motor area (pre-SMA) is linked to the susceptibility to response capture whereas the right inferior frontal cortex (rIFC) is associated with the selective suppression of action impulses. Here, we used repetitive transcranial magnetic stimulation (rTMS) to test the causal involvement of these two cortical areas in response capture and inhibition in the Simon task. Disruption of rIFC function specifically impaired selective suppression of conflicting action tendencies, whereas the anticipated increase of fast impulsive errors after perturbing pre-SMA function was not confirmed. These results provide a proof of principle of the notion that the selection of appropriate dependent measures is perhaps crucial to establish the effects of neurostimulation on specific cognitive functions.
  • Vanlangendonck, F., Takashima, A., Willems, R. M., & Hagoort, P. (2018). Distinguishable memory retrieval networks for collaboratively and non-collaboratively learned information. Neuropsychologia, 111, 123-132. doi:10.1016/j.neuropsychologia.2017.12.008.

    Abstract

    Learning often occurs in communicative and collaborative settings, yet almost all research into the neural basis of memory relies on participants encoding and retrieving information on their own. We investigated whether learning linguistic labels in a collaborative context at least partly relies on cognitively and neurally distinct representations, as compared to learning in an individual context. Healthy human participants learned labels for sets of abstract shapes in three different tasks. They came up with labels with another person in a collaborative communication task (collaborative condition), by themselves (individual condition), or were given pre-determined unrelated labels to learn by themselves (arbitrary condition). Immediately after learning, participants retrieved and produced the labels aloud during a communicative task in the MRI scanner. The fMRI results show that the retrieval of collaboratively generated labels as compared to individually learned labels engages brain regions involved in understanding others (mentalizing or theory of mind) and autobiographical memory, including the medial prefrontal cortex, the right temporoparietal junction and the precuneus. This study is the first to show that collaboration during encoding affects the neural networks involved in retrieval.
  • Vanlangendonck, F., Willems, R. M., & Hagoort, P. (2018). Taking common ground into account: Specifying the role of the mentalizing network in communicative language production. PLoS One, 13(10): e0202943. doi:10.1371/journal.pone.0202943.
  • Varma, S., Daselaar, S. M., Kessels, R. P. C., & Takashima, A. (2018). Promotion and suppression of autobiographical thinking differentially affect episodic memory consolidation. PLoS One, 13(8): e0201780. doi:10.1371/journal.pone.0201780.

    Abstract

    During a post-encoding delay period, the ongoing consolidation of recently acquired memories can suffer interference if the delay period involves encoding of new memories, or sensory stimulation tasks. Interestingly, two recent independent studies suggest that (i) autobiographical thinking also interferes markedly with ongoing consolidation of recently learned wordlist material, while (ii) a 2-Back task might not interfere with ongoing consolidation, possibly due to the suppression of autobiographical thinking. In this study, we directly compare these conditions against a quiet wakeful rest baseline to test whether the promotion (via familiar sound-cues) or suppression (via a 2-Back task) of autobiographical thinking during the post-encoding delay period can affect consolidation of studied wordlists in a negative or a positive way, respectively. Our results successfully replicate previous studies and show a significant interference effect (as compared to the rest condition) when learning is followed by familiar sound-cues that promote autobiographical thinking, whereas no interference effect is observed when learning is followed by the 2-Back task. Results from a post-experimental experience-sampling questionnaire further show significant differences in the degree of autobiographical thinking reported during the three post-encoding periods: highest in the presence of sound-cues and lowest during the 2-Back task. In conclusion, our results suggest that varying levels of autobiographical thought during the post-encoding period may modulate episodic memory consolidation.
  • Wang, L., Hagoort, P., & Jensen, O. (2018). Gamma oscillatory activity related to language prediction. Journal of Cognitive Neuroscience, 30(8), 1075-1085. doi:10.1162/jocn_a_01275.

    Abstract

    Using magnetoencephalography, the current study examined gamma activity associated with language prediction. Participants read high- and low-constraining sentences in which the final word of the sentence was either expected or unexpected. Although no consistent gamma power difference induced by the sentence-final words was found between the expected and unexpected conditions, the correlation of gamma power during the prediction and activation intervals of the sentence-final words was larger when the presented words matched with the prediction compared with when the prediction was violated or when no prediction was available. This suggests that gamma magnitude relates to the match between predicted and perceived words. Moreover, the expected words induced activity with a slower gamma frequency compared with that induced by unexpected words. Overall, the current study establishes that prediction is related to gamma power correlations and a slowing of the gamma frequency.
  • Wang, L., Hagoort, P., & Jensen, O. (2018). Language prediction is reflected by coupling between frontal gamma and posterior alpha oscillations. Journal of Cognitive Neuroscience, 30(3), 432-447. doi:10.1162/jocn_a_01190.

    Abstract

    Readers and listeners actively predict upcoming words during language processing. These predictions might serve to support the unification of incoming words into sentence context and thus rely on interactions between areas in the language network. In the current magnetoencephalography study, participants read sentences that varied in contextual constraints so that the predictability of the sentence-final words was either high or low. Before the sentence-final words, we observed stronger alpha power suppression for the highly compared with low constraining sentences in the left inferior frontal cortex, left posterior temporal region, and visual word form area. Importantly, the temporal and visual word form area alpha power correlated negatively with left frontal gamma power for the highly constraining sentences. We suggest that the correlation between alpha power decrease in temporal language areas and left prefrontal gamma power reflects the initiation of an anticipatory unification process in the language network.
  • Willems, R. M., & Van Gerven, M. (2018). New fMRI methods for the study of language. In S.-A. Rueschemeyer, & M. G. Gaskell (Eds.), The Oxford Handbook of Psycholinguistics (2nd ed., pp. 975-991). Oxford: Oxford University Press.
  • Willems, R. M., & Cristia, A. (2018). Hemodynamic methods: fMRI and fNIRS. In A. M. B. De Groot, & P. Hagoort (Eds.), Research methods in psycholinguistics and the neurobiology of language: A practical guide (pp. 266-287). Hoboken: Wiley.
  • Adank, P., Hagoort, P., & Bekkering, H. (2010). Imitation improves language comprehension. Psychological Science, 21, 1903-1909. doi:10.1177/0956797610389192.

    Abstract

    Humans imitate each other during social interaction. This imitative behavior streamlines social interaction and aids in learning to replicate actions. However, the effect of imitation on action comprehension is unclear. This study investigated whether vocal imitation of an unfamiliar accent improved spoken-language comprehension. Following a pretraining accent comprehension test, participants were assigned to one of six groups. The baseline group received no training, but participants in the other five groups listened to accented sentences, listened to and repeated accented sentences in their own accent, listened to and transcribed accented sentences, listened to and imitated accented sentences, or listened to and imitated accented sentences without being able to hear their own vocalizations. Posttraining measures showed that accent comprehension was most improved for participants who imitated the speaker’s accent. These results show that imitation may aid in streamlining interaction by improving spoken-language comprehension under adverse listening conditions.
  • Andics, A., McQueen, J. M., Petersson, K. M., Gál, V., Rudas, G., & Vidnyánszky, Z. (2010). Neural mechanisms for voice recognition. NeuroImage, 52, 1528-1540. doi:10.1016/j.neuroimage.2010.05.048.

    Abstract

    We investigated neural mechanisms that support voice recognition in a training paradigm with fMRI. The same listeners were trained on different weeks to categorize the mid-regions of voice-morph continua as an individual's voice. Stimuli implicitly defined a voice-acoustics space, and training explicitly defined a voice-identity space. The predefined centre of the voice category was shifted from the acoustic centre each week in opposite directions, so the same stimuli had different training histories on different tests. Cortical sensitivity to voice similarity appeared over different time-scales and at different representational stages. First, there were short-term adaptation effects: Increasing acoustic similarity to the directly preceding stimulus led to haemodynamic response reduction in the middle/posterior STS and in right ventrolateral prefrontal regions. Second, there were longer-term effects: Response reduction was found in the orbital/insular cortex for stimuli that were most versus least similar to the acoustic mean of all preceding stimuli, and, in the anterior temporal pole, the deep posterior STS and the amygdala, for stimuli that were most versus least similar to the trained voice-identity category mean. These findings are interpreted as effects of neural sharpening of long-term stored typical acoustic and category-internal values. The analyses also reveal anatomically separable voice representations: one in a voice-acoustics space and one in a voice-identity space. Voice-identity representations flexibly followed the trained identity shift, and listeners with a greater identity effect were more accurate at recognizing familiar voices. Voice recognition is thus supported by neural voice spaces that are organized around flexible ‘mean voice’ representations.
  • Araújo, S., Pacheco, A., Faísca, L., Petersson, K. M., & Reis, A. (2010). Visual rapid naming and phonological abilities: Different subtypes in dyslexic children. International Journal of Psychology, 45, 443-452. doi:10.1080/00207594.2010.499949.

    Abstract

    One implication of the double-deficit hypothesis for dyslexia is that there should be subtypes of dyslexic readers that exhibit rapid naming deficits with or without concomitant phonological processing problems. In the current study, we investigated the validity of this hypothesis for Portuguese orthography, which is more consistent than English orthography, by exploring different cognitive profiles in a sample of dyslexic children. In particular, we were interested in identifying readers characterized by a pure rapid automatized naming deficit. We also examined whether rapid naming and phonological awareness independently account for individual differences in reading performance. We characterized the performance of dyslexic readers and a control group of normal readers matched for age on reading, visual rapid naming and phonological processing tasks. Our results suggest that there is a subgroup of dyslexic readers with intact phonological processing capacity (in terms of both accuracy and speed measures) but poor rapid naming skills. We also provide evidence for an independent association between rapid naming and reading competence in the dyslexic sample, when the effect of phonological skills was controlled. Altogether, the results are more consistent with the view that rapid naming problems in dyslexia represent a second core deficit than with an exclusively phonological explanation for the rapid naming deficits. Furthermore, additional non-phonological processes, which subserve rapid naming performance, contribute independently to reading development.
  • Baggio, G., Choma, T., Van Lambalgen, M., & Hagoort, P. (2010). Coercion and compositionality. Journal of Cognitive Neuroscience, 22, 2131-2140. doi:10.1162/jocn.2009.21303.

    Abstract

    Research in psycholinguistics and in the cognitive neuroscience of language has suggested that semantic and syntactic integration are associated with different neurophysiologic correlates, such as the N400 and the P600 in the ERPs. However, only a handful of studies have investigated the neural basis of the syntax–semantics interface, and even fewer experiments have dealt with the cases in which semantic composition can proceed independently of the syntax. Here we looked into one such case—complement coercion—using ERPs. We compared sentences such as, “The journalist wrote the article” with “The journalist began the article.” The second sentence seems to involve a silent semantic element, which is expressed in the first sentence by the head of the VP “wrote the article.” The second type of construction may therefore require the reader to infer or recover from memory a richer event sense of the VP “began the article,” such as began writing the article, and to integrate that into a semantic representation of the sentence. This operation is referred to as “complement coercion.” Consistently with earlier reading time, eye tracking, and MEG studies, we found traces of such additional computations in the ERPs: Coercion gives rise to a long-lasting negative shift, which differs at least in duration from a standard N400 effect. Issues regarding the nature of the computation involved are discussed in the light of a neurocognitive model of language processing and a formal semantic analysis of coercion.
  • Bastiaansen, M. C. M., Magyari, L., & Hagoort, P. (2010). Syntactic unification operations are reflected in oscillatory dynamics during on-line sentence comprehension. Journal of Cognitive Neuroscience, 22, 1333-1347. doi:10.1162/jocn.2009.21283.

    Abstract

    There is growing evidence suggesting that synchronization changes in the oscillatory neuronal dynamics in the EEG or MEG reflect the transient coupling and uncoupling of functional networks related to different aspects of language comprehension. In this work, we examine how sentence-level syntactic unification operations are reflected in the oscillatory dynamics of the MEG. Participants read sentences that were either correct, contained a word category violation, or were constituted of random word sequences devoid of syntactic structure. A time-frequency analysis of MEG power changes revealed three types of effects. The first type of effect was related to the detection of a (word category) violation in a syntactically structured sentence, and was found in the alpha and gamma frequency bands. A second type of effect was maximally sensitive to the syntactic manipulations: A linear increase in beta power across the sentence was present for correct sentences, was disrupted upon the occurrence of a word category violation, and was absent in syntactically unstructured random word sequences. We therefore relate this effect to syntactic unification operations. Thirdly, we observed a linear increase in theta power across the sentence for all syntactically structured sentences. The effects are tentatively related to the building of a working memory trace of the linguistic input. In conclusion, the data seem to suggest that syntactic unification is reflected by neuronal synchronization in the lower-beta frequency band.
  • Bottini, R., & Casasanto, D. (2010). Implicit spatial length modulates time estimates, but not vice versa. In C. Hölscher, T. F. Shipley, M. Olivetti Belardinelli, J. A. Bateman, & N. Newcombe (Eds.), Spatial Cognition VII. International Conference, Spatial Cognition 2010, Mt. Hood/Portland, OR, USA, August 15-19, 2010. Proceedings (pp. 152-162). Berlin Heidelberg: Springer.

    Abstract

    How are space and time represented in the human mind? Here we evaluate two theoretical proposals, one suggesting a symmetric relationship between space and time (ATOM theory) and the other an asymmetric relationship (metaphor theory). In Experiment 1, Dutch-speakers saw 7-letter nouns that named concrete objects of various spatial lengths (tr. pencil, bench, footpath) and estimated how much time they remained on the screen. In Experiment 2, participants saw nouns naming temporal events of various durations (tr. blink, party, season) and estimated the words’ spatial length. Nouns that named short objects were judged to remain on the screen for a shorter time, and nouns that named longer objects to remain for a longer time. By contrast, variations in the duration of the event nouns’ referents had no effect on judgments of the words’ spatial length. This asymmetric pattern of cross-dimensional interference supports metaphor theory and challenges ATOM.
  • Bottini, R., & Casasanto, D. (2010). Implicit spatial length modulates time estimates, but not vice versa. In S. Ohlsson, & R. Catrambone (Eds.), Proceedings of the 32nd Annual Conference of the Cognitive Science Society (pp. 1348-1353). Austin, TX: Cognitive Science Society.

    Abstract

    Why do people accommodate to each other’s linguistic behavior? Studies of natural interactions (Giles, Taylor & Bourhis, 1973) suggest that speakers accommodate to achieve interactional goals, influencing what their interlocutor thinks or feels about them. But is this the only reason speakers accommodate? In real-world conversations, interactional motivations are ubiquitous, making it difficult to assess the extent to which they drive accommodation. Do speakers still accommodate even when interactional goals cannot be achieved, for instance, when their interlocutor cannot interpret their accommodation behavior? To find out, we asked participants to enter an immersive virtual reality (VR) environment and to converse with a virtual interlocutor. Participants accommodated to the speech rate of their virtual interlocutor even though he could not interpret their linguistic behavior, and thus accommodation could not possibly help them to achieve interactional goals. Results show that accommodation does not require explicit interactional goals, and suggest other social motivations for accommodation.
  • Bramão, I., Faísca, L., Forkstam, C., Reis, A., & Petersson, K. M. (2010). Cortical brain regions associated with color processing: An FMRI study. The Open Neuroimaging Journal, 4, 164-173. doi:10.2174/1874440001004010164.

    Abstract

    To clarify whether the neural pathways for color processing are the same for natural objects, artifact objects, and non-sense objects, we examined functional magnetic resonance imaging (FMRI) responses during a covert naming task including the factors color (color vs. black & white (B&W)) and stimulus type (natural vs. artifact vs. non-sense objects). Our results indicate that the superior parietal lobule and precuneus (BA 7) bilaterally, the right hippocampus, and the right fusiform gyrus (V4) form part of a network responsible for color processing for both natural and artifact objects, but not for non-sense objects. The recognition of colored non-sense objects compared to the recognition of colored objects activated the posterior cingulate/precuneus (BA 7/23/31), suggesting that the color attribute induces the mental operation of trying to associate a non-sense composition with familiar objects. When colored objects (both natural and artifact) were contrasted with colored non-objects, we observed activations in the right parahippocampal gyrus (BA 35/36), the superior parietal lobule (BA 7) bilaterally, the left inferior middle temporal region (BA 20/21), and the inferior and superior frontal regions (BA 10/11/47). These additional activations suggest that colored objects recruit brain regions related to visual semantic information/retrieval and brain regions related to visuo-spatial processing. Overall, the results suggest that color information is an attribute that improves object recognition (based on behavioral results) and activates a specific neural network related to visual semantic information that is more extensive than for B&W objects during object recognition.
  • Bramão, I., Faísca, L., Petersson, K. M., & Reis, A. (2010). The influence of surface color information and color knowledge information in object recognition. American Journal of Psychology, 123, 437-466. Retrieved from http://www.jstor.org/stable/10.5406/amerjpsyc.123.4.0437.

    Abstract

    In order to clarify whether the influence of color knowledge information in object recognition depends on the presence of the appropriate surface color, we designed a name-object verification task. The relationship between color and shape information provided by the name and by the object photo was manipulated in order to assess color interference independently of shape interference. We tested three different versions of each object: typically colored, black and white, and nontypically colored. The response times on the nonmatching trials were used to measure the interference between the name and the photo. We predicted that the more similar the name and the photo are, the longer it would take to respond. Overall, the color similarity effect disappeared in the black-and-white and nontypical color conditions, suggesting that the influence of color knowledge on object recognition depends on the presence of the appropriate surface color information.
  • Brookshire, G., Casasanto, D., & Ivry, R. (2010). Modulation of motor-meaning congruity effects for valenced words. In S. Ohlsson, & R. Catrambone (Eds.), Proceedings of the 32nd Annual Meeting of the Cognitive Science Society (CogSci 2010) (pp. 1940-1945). Austin, TX: Cognitive Science Society.

    Abstract

    We investigated the extent to which emotionally valenced words automatically cue spatio-motor representations. Participants made speeded button presses, moving their hand upward or downward while viewing words with positive or negative valence. Only the color of the words was relevant to the response; on target trials, there was no requirement to read the words or process their meaning. In Experiment 1, upward responses were faster for positive words, and downward for negative words. This effect was extinguished, however, when words were repeated. In Experiment 2, participants performed the same primary task with the addition of distractor trials. Distractors either oriented attention toward the words’ meaning or toward their color. Congruity effects were increased with orientation to meaning, but eliminated with orientation to color. When people read words with emotional valence, vertical spatio-motor representations are activated highly automatically, but this automaticity is modulated by repetition and by attentional orientation to the words’ form or meaning.
  • Brouwer, H., Fitz, H., & Hoeks, J. C. (2010). Modeling the noun phrase versus sentence coordination ambiguity in Dutch: Evidence from Surprisal Theory. In Proceedings of the 2010 Workshop on Cognitive Modeling and Computational Linguistics, ACL 2010 (pp. 72-80). Association for Computational Linguistics.

    Abstract

    This paper investigates whether surprisal theory can account for differential processing difficulty in the NP-/S-coordination ambiguity in Dutch. Surprisal is estimated using a Probabilistic Context-Free Grammar (PCFG), which is induced from an automatically annotated corpus. We find that our lexicalized surprisal model can account for the reading time data from a classic experiment on this ambiguity by Frazier (1987). We argue that syntactic and lexical probabilities, as specified in a PCFG, are sufficient to account for what is commonly referred to as an NP-coordination preference.
  • Casasanto, D., & Bottini, R. (2010). Can mirror-reading reverse the flow of time? In C. Hölscher, T. F. Shipley, M. Olivetti Belardinelli, J. A. Bateman, & N. S. Newcombe (Eds.), Spatial Cognition VII. International Conference, Spatial Cognition 2010, Mt. Hood/Portland, OR, USA, August 15-19, 2010. Proceedings (pp. 335-345). Berlin Heidelberg: Springer.

    Abstract

    Across cultures, people conceptualize time as if it flows along a horizontal timeline, but the direction of this implicit timeline is culture-specific: in cultures with left-to-right orthography (e.g., English-speaking cultures) time appears to flow rightward, but in cultures with right-to-left orthography (e.g., Arabic-speaking cultures) time flows leftward. Can orthography influence implicit time representations independent of other cultural and linguistic factors? Native Dutch speakers performed a space-time congruity task with the instructions and stimuli written in either standard Dutch or mirror-reversed Dutch. Participants in the Standard Dutch condition were fastest to judge past-oriented phrases by pressing the left button and future-oriented phrases by pressing the right button. Participants in the Mirror-Reversed Dutch condition showed the opposite pattern of reaction times, consistent with results found previously in native Arabic and Hebrew speakers. These results demonstrate a causal role for writing direction in shaping implicit mental representations of time.
  • Casasanto, D., & Bottini, R. (2010). Can mirror-reading reverse the flow of time? In S. Ohlsson, & R. Catrambone (Eds.), Proceedings of the 32nd Annual Meeting of the Cognitive Science Society (CogSci 2010) (pp. 1342-1347). Austin, TX: Cognitive Science Society.

    Abstract

    Across cultures, people conceptualize time as if it flows along a horizontal timeline, but the direction of this implicit timeline is culture-specific: in cultures with left-to-right orthography (e.g., English-speaking cultures) time appears to flow rightward, but in cultures with right-to-left orthography (e.g., Arabic-speaking cultures) time flows leftward. Can orthography influence implicit time representations independent of other cultural and linguistic factors? Native Dutch speakers performed a space-time congruity task with the instructions and stimuli written in either standard Dutch or mirror-reversed Dutch. Participants in the Standard Dutch condition were fastest to judge past-oriented phrases by pressing the left button and future-oriented phrases by pressing the right button. Participants in the Mirror-Reversed Dutch condition showed the opposite pattern of reaction times, consistent with results found previously in native Arabic and Hebrew speakers. These results demonstrate a causal role for writing direction in shaping implicit mental representations of time.
  • Casasanto, D. (2010). En qué casos una metáfora lingüística constituye una metáfora conceptual? In D. Pérez, S. Español, L. Skidelsky, & R. Minervino (Eds.), Conceptos: Debates contemporáneos en filosofía y psicología. Buenos Airos: Catálogos.
  • Casasanto, D., & Jasmin, K. (2010). Good and bad in the hands of politicians: Spontaneous gestures during positive and negative speech. PLoS ONE, 5(7), E11805. doi:10.1371/journal.pone.0011805.

    Abstract

    According to the body-specificity hypothesis, people with different bodily characteristics should form correspondingly different mental representations, even in highly abstract conceptual domains. In a previous test of this proposal, right- and left-handers were found to associate positive ideas like intelligence, attractiveness, and honesty with their dominant side and negative ideas with their non-dominant side. The goal of the present study was to determine whether ‘body-specific’ associations of space and valence can be observed beyond the laboratory in spontaneous behavior, and whether these implicit associations have visible consequences.
  • Casasanto, D., & Jasmin, K. (2010). Good and bad in the hands of politicians: Spontaneous gestures during positive and negative speech [Abstract]. In Proceedings of the 16th Annual Conference on Architectures and Mechanisms for Language Processing [AMLaP 2010] (pp. 137). York: University of York.
  • Casasanto, D., & Bottini, R. (2010). Mirror-reading can reverse the flow of time [Abstract]. In Proceedings of the 16th Annual Conference on Architectures and Mechanisms for Language Processing [AMLaP 2010] (pp. 57). York: University of York.
  • Casasanto, D., & Dijkstra, K. (2010). Motor action and emotional memory. Cognition, 115, 179-185. doi:10.1016/j.cognition.2009.11.002.

    Abstract

    Can simple motor actions affect how efficiently people retrieve emotional memories, and influence what they choose to remember? In Experiment 1, participants were prompted to retell autobiographical memories with either positive or negative valence, while moving marbles either upward or downward. They retrieved memories faster when the direction of movement was congruent with the valence of the memory (upward for positive, downward for negative memories). Given neutral-valence prompts in Experiment 2, participants retrieved more positive memories when instructed to move marbles up, and more negative memories when instructed to move them down, demonstrating a causal link from motion to emotion. Results suggest that positive and negative life experiences are implicitly associated with schematic representations of upward and downward motion, consistent with theories of metaphorical mental representation. Beyond influencing the efficiency of memory retrieval, the direction of irrelevant, repetitive motor actions can also partly determine the emotional content of the memories people retrieve: moving marbles upward (an ostensibly meaningless action) can cause people to think more positive thoughts.
  • Casasanto, D., Fotakopoulou, O., & Boroditsky, L. (2010). Space and time in the child's mind: Evidence for a cross-dimensional asymmetry. Cognitive Science, 34, 387-405. doi:10.1111/j.1551-6709.2010.01094.x.

    Abstract

    What is the relationship between space and time in the human mind? Studies in adults show an asymmetric relationship between mental representations of these basic dimensions of experience: Representations of time depend on space more than representations of space depend on time. Here we investigated the relationship between space and time in the developing mind. Native Greek-speaking children watched movies of two animals traveling along parallel paths for different distances or durations and judged the spatial and temporal aspects of these events (e.g., Which animal went for a longer distance, or a longer time?). Results showed a reliable cross-dimensional asymmetry. For the same stimuli, spatial information influenced temporal judgments more than temporal information influenced spatial judgments. This pattern was robust to variations in the age of the participants and the type of linguistic framing used to elicit responses. This finding demonstrates a continuity between space-time representations in children and adults, and informs theories of analog magnitude representation.
  • Casasanto, D. (2010). Wie der Körper Sprache und Vorstellungsvermögen im Gehirn formt. In Max-Planck-Gesellschaft. Jahrbuch 2010. München: Max-Planck-Gesellschaft. Retrieved from http://www.mpg.de/jahrbuch/forschungsbericht?obj=454607.

    Abstract

    If our mental capacities depend in part on the structure of our bodies, then people with different body types should think differently. To test this, researchers at the MPI for Psycholinguistics investigated neural correlates of language comprehension and of the motor imagery evoked by action verbs. These verbs denote actions that people mostly perform with their dominant hand (e.g., writing, throwing). Comprehension of these verbs, as well as imagery of the corresponding motor actions, was lateralized differently in the brains of right- and left-handers. Do people with different body types form different concepts and word meanings? According to the body-specificity hypothesis, they should [1]. Because mental capacities depend on the body, people with different body types should also think differently. This assumption challenges the classical view that concepts are universal and that word meanings are identical for all speakers of a language. Studies in the "Sprache in Aktion" ("Language in Action") project at the MPI for Psycholinguistics show that the way speakers use their bodies influences the way they imagine actions in the brain and the way they process language about such actions in the brain.
  • Dediu, D. (2010). Linguistic and genetic diversity - how and why are they related? In M. Brüne, F. Salter, & W. McGrew (Eds.), Building bridges between anthropology, medicine and human ethology: Tributes to Wulf Schiefenhövel (pp. 169-178). Bochum: Europäischer Universitätsverlag.

    Abstract

    There are some 6000 languages spoken today, classified into approximately 90 linguistic families and many isolates, and also differing along structural, typological dimensions. Genetically, the human species is remarkably homogeneous, with the existing genetic diversity mostly explained by intra-population differences between individuals, but the remaining inter-population differences have a non-trivial structure. Population splits and contacts influence both languages and genes, in principle allowing them to evolve in parallel ways. The farming/language co-dispersal hypothesis is a well-known such theory, whereby farmers spreading agriculture from its places of origin also spread their genes and languages. A different type of relationship was recently proposed, involving a genetic bias which influences the structural properties of language as it is transmitted across generations. Such a bias was proposed to explain the correlations between the distribution of tone languages and two brain development-related human genes and, if confirmed by experimental studies, it could represent a new factor explaining the distribution of diversity. The present chapter overviews these related topics in the hope that a truly interdisciplinary approach could allow a better understanding of our complex (recent as well as evolutionary) history.
  • Dolscheid, S., Shayan, S., Ozturk, O., Majid, A., & Casasanto, D. (2010). Language shapes mental representations of musical pitch: Implications for metaphorical language processing [Abstract]. In Proceedings of the 16th Annual Conference on Architectures and Mechanisms for Language Processing [AMLaP 2010] (pp. 137). York: University of York.

    Abstract

    Speakers often use spatial metaphors to talk about musical pitch (e.g., a low note, a high soprano). Previous experiments suggest that English speakers also think about pitches as high or low in space, even when they're not using language or musical notation (Casasanto, 2010). Do metaphors in language merely reflect pre-existing associations between space and pitch, or might language also shape these non-linguistic metaphorical mappings? To investigate the role of language in pitch representation, we conducted a pair of non-linguistic space-pitch interference experiments in speakers of two languages that use different spatial metaphors. Dutch speakers usually describe pitches as 'high' (hoog) and 'low' (laag). Farsi speakers, however, often describe high-frequency pitches as 'thin' (naazok) and low-frequency pitches as 'thick' (koloft). Do Dutch and Farsi speakers mentally represent pitch differently? To find out, we asked participants to reproduce musical pitches that they heard in the presence of irrelevant spatial information (i.e., lines that varied either in height or in thickness). For the Height Interference experiment, horizontal lines bisected a vertical reference line at one of nine different locations. For the Thickness Interference experiment, a vertical line appeared in the middle of the screen in one of nine thicknesses. In each experiment, the nine different lines were crossed with nine different pitches ranging from C4 to G#4 in semitone increments, to produce 81 distinct trials. If Dutch and Farsi speakers mentally represent pitch the way they talk about it, using different kinds of spatial representations, they should show contrasting patterns of cross-dimensional interference: Dutch speakers' pitch estimates should be more strongly affected by irrelevant height information, and Farsi speakers' by irrelevant thickness information. As predicted, Dutch speakers' pitch estimates were significantly modulated by spatial height but not by thickness. Conversely, Farsi speakers' pitch estimates were modulated by spatial thickness but not by height (2x2 ANOVA on normalized slopes of the effect of space on pitch: F(1,71)=17.15, p<.001). To determine whether language plays a causal role in shaping pitch representations, we conducted a training experiment. Native Dutch speakers learned to use Farsi-like metaphors, describing pitch relationships in terms of thickness (e.g., a cello sounds 'thicker' than a flute). After training, Dutch speakers showed a significant effect of Thickness interference in the non-linguistic pitch reproduction task, similar to native Farsi speakers: on average, pitches accompanied by thicker lines were reproduced as lower in pitch (effect of thickness on pitch: r=-.22, p=.002). By conducting psychophysical tasks, we tested the 'Whorfian' question without using words. Yet, the results also inform theories of metaphorical language processing. According to psycholinguistic theories (e.g., Bowdle & Gentner, 2005), highly conventional metaphors are processed without any active mapping from the source to the target domain (e.g., from space to pitch). Our data, however, suggest that when people use verbal metaphors they activate a corresponding non-linguistic mapping from either height or thickness to pitch, strengthening this association at the expense of competing associations. As a result, people who use different metaphors in their native languages form correspondingly different representations of musical pitch. Casasanto, D. (2010). Space for thinking. In V. Evans, & P. Chilton (Eds.), Language, cognition and space: The state of the art and new directions (pp. 453-478). London: Equinox Publishing. Bowdle, B., & Gentner, D. (2005). The career of metaphor. Psychological Review, 112, 193-216.
  • Folia, V., Uddén, J., De Vries, M., Forkstam, C., & Petersson, K. M. (2010). Artificial language learning in adults and children. Language learning, 60(s2), 188-220. doi:10.1111/j.1467-9922.2010.00606.x.

    Abstract

    This article briefly reviews some recent work on artificial language learning in children and adults. The final part of the article is devoted to a theoretical formulation of the language learning problem from a mechanistic neurobiological viewpoint and we show that it is logically possible to combine the notion of innate language constraints with, for example, the notion of domain general learning mechanisms. A growing body of empirical evidence suggests that the mechanisms involved in artificial language learning and in structured sequence processing are shared with those of natural language acquisition and natural language processing. Finally, by theoretically analyzing a formal learning model, we highlight Fodor’s insight that it is logically possible to combine innate, domain-specific constraints with domain-general learning mechanisms.
  • Folia, V., Uddén, J., De Vries, M., Forkstam, C., & Petersson, K. M. (2010). Artificial language learning in adults and children. In M. Gullberg, & P. Indefrey (Eds.), The earliest stages of language learning (pp. 188-220). Malden, MA: Wiley-Blackwell.
  • Fournier, R., Gussenhoven, C., Jensen, O., & Hagoort, P. (2010). Lateralization of tonal and intonational pitch processing: An MEG study. Brain Research, 1328, 79-88. doi:10.1016/j.brainres.2010.02.053.

    Abstract

    An MEG experiment was carried out in order to compare the processing of lexical-tonal and intonational contrasts, based on the tonal dialect of Roermond (the Netherlands). A set of words with identical phoneme sequences but distinct pitch contours, which represented different lexical meanings or discourse meanings (statement vs. question), were presented to native speakers as well as to a control group of speakers of Standard Dutch, a non-tone language. The stimuli were arranged in a mismatch paradigm, under three experimental conditions: in the first condition (lexical), the pitch contour differences between standard and deviant stimuli reflected differences between lexical meanings; in the second condition (intonational), the stimuli differed in their discourse meaning; in the third condition (combined), they differed both in their lexical and discourse meaning. In all three conditions, native as well as non-native responses showed a clear MMNm (magnetic mismatch negativity) in a time window from 150 to 250 ms after the divergence point of standard and deviant pitch contours. In the lexical condition, a stronger response was found over the left temporal cortex of native as well as non-native speakers. In the intonational condition, the same activation pattern was observed in the control group, but not in the group of native speakers, who showed a right-hemisphere dominance instead. Finally, in the combined (lexical and intonational) condition, brain reactions appeared to represent the summation of the patterns found in the other two conditions. In sum, the lateralization of pitch processing is condition-dependent in the native group only, which suggests that language experience determines how processes should be distributed over both temporal cortices, according to the functions available in the grammar.
  • Furman, R., Ozyurek, A., & Küntay, A. C. (2010). Early language-specificity in Turkish children's caused motion event expressions in speech and gesture. In K. Franich, K. M. Iserman, & L. L. Keil (Eds.), Proceedings of the 34th Boston University Conference on Language Development. Volume 1 (pp. 126-137). Somerville, MA: Cascadilla Press.
  • Groen, W. B., Tesink, C. M. J. Y., Petersson, K. M., Van Berkum, J. J. A., Van der Gaag, R. J., Hagoort, P., & Buitelaar, J. K. (2010). Semantic, factual, and social language comprehension in adolescents with autism: An fMRI study. Cerebral Cortex, 20(8), 1937-1945. doi:10.1093/cercor/bhp264.

    Abstract

    Language in high-functioning autism is characterized by pragmatic and semantic deficits, and people with autism have a reduced tendency to integrate information. Because the left and right inferior frontal (LIF and RIF) regions are implicated in the integration of speaker information, world knowledge, and semantic knowledge, we hypothesized that abnormal functioning of the LIF and RIF regions might contribute to pragmatic and semantic language deficits in autism. Brain activation of sixteen 12- to 18-year-old, high-functioning autistic participants was measured with functional magnetic resonance imaging during sentence comprehension and compared with that of twenty-six matched controls. The content of the pragmatic sentence was congruent or incongruent with respect to the speaker characteristics (male/female, child/adult, and upper class/lower class). The semantic- and world-knowledge sentences were congruent or incongruent with respect to semantic expectancies and factual expectancies about the world, respectively. In the semantic-knowledge and world-knowledge conditions, activation of the LIF region did not differ between groups. In sentences that required integration of speaker information, the autism group showed abnormally reduced activation of the LIF region. The results suggest that people with autism may recruit the LIF region in a different manner in tasks that demand integration of social information.
  • Jasmin, K., & Casasanto, D. (2010). Stereotyping: How the QWERTY keyboard shapes the mental lexicon [Abstract]. In Proceedings of the 16th Annual Conference on Architectures and Mechanisms for Language Processing [AMLaP 2010] (pp. 159). York: University of York.
  • Junge, C., Cutler, A., & Hagoort, P. (2010). Ability to segment words from speech as a precursor of later language development: Insights from electrophysiological responses in the infant brain. In M. Burgess, J. Davey, C. Don, & T. McMinn (Eds.), Proceedings of 20th International Congress on Acoustics, ICA 2010. Incorporating Proceedings of the 2010 annual conference of the Australian Acoustical Society (pp. 3727-3732). Australian Acoustical Society, NSW Division.
  • Junge, C., Hagoort, P., Kooijman, V., & Cutler, A. (2010). Brain potentials for word segmentation at seven months predict later language development. In K. Franich, K. M. Iserman, & L. L. Keil (Eds.), Proceedings of the 34th Annual Boston University Conference on Language Development. Volume 1 (pp. 209-220). Somerville, MA: Cascadilla Press.
  • Kelly, S. D., Ozyurek, A., & Maris, E. (2010). Two sides of the same coin: Speech and gesture mutually interact to enhance comprehension. Psychological Science, 21, 260-267. doi:10.1177/0956797609357327.

    Abstract

    Gesture and speech are assumed to form an integrated system during language production. Based on this view, we propose the integrated‐systems hypothesis, which explains two ways in which gesture and speech are integrated—through mutual and obligatory interactions—in language comprehension. Experiment 1 presented participants with action primes (e.g., someone chopping vegetables) and bimodal speech and gesture targets. Participants related primes to targets more quickly and accurately when they contained congruent information (speech: “chop”; gesture: chop) than when they contained incongruent information (speech: “chop”; gesture: twist). Moreover, the strength of the incongruence affected processing, with fewer errors for weak incongruities (speech: “chop”; gesture: cut) than for strong incongruities (speech: “chop”; gesture: twist). Crucial for the integrated‐systems hypothesis, this influence was bidirectional. Experiment 2 demonstrated that gesture’s influence on speech was obligatory. The results confirm the integrated‐systems hypothesis and demonstrate that gesture and speech form an integrated system in language comprehension.
  • Kita, S., Ozyurek, A., Allen, S., & Ishizuka, T. (2010). Early links between iconic gestures and sound symbolic words: Evidence for multimodal protolanguage. In A. D. Smith, M. Schouwstra, B. de Boer, & K. Smith (Eds.), Proceedings of the 8th International conference on the Evolution of Language (EVOLANG 8) (pp. 429-430). Singapore: World Scientific.
  • Kos, M., Vosse, T. G., Van den Brink, D., & Hagoort, P. (2010). About edible restaurants: Conflicts between syntax and semantics as revealed by ERPs. Frontiers in Psychology, 1, E222. doi:10.3389/fpsyg.2010.00222.

    Abstract

    In order to investigate conflicts between semantics and syntax, we recorded ERPs while participants read Dutch sentences. Sentences containing conflicts between syntax and semantics (Fred eats in a sandwich…/ Fred eats a restaurant…) elicited an N400. These results show that conflicts between syntax and semantics do not necessarily lead to P600 effects and are in line with the processing competition account. According to this parallel account, the syntactic and semantic processing streams are fully interactive, and information from one level can influence the processing at another level. The relative strength of the cues of the processing streams determines which level is affected most strongly by the conflict. The processing competition account maintains the distinction between the N400 as an index of semantic processing and the P600 as an index of structural processing.
  • Ladd, D. R., & Dediu, D. (2010). Reply to Järvikivi et al. (2010) [Web log message]. PLoS ONE. Retrieved from http://www.plosone.org/article/comments/info%3Adoi%2F10.1371%2Fjournal.pone.0012603.
  • Levy, J. (2010). In cerebro unveiling unconscious mechanisms during reading. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Maguire, W., McMahon, A., Heggarty, P., & Dediu, D. (2010). The past, present, and future of English dialects: Quantifying convergence, divergence, and dynamic equilibrium. Language Variation and Change, 22, 69-104. doi:10.1017/S0954394510000013.

    Abstract

    This article reports on research which seeks to compare and measure the similarities between phonetic transcriptions in the analysis of relationships between varieties of English. It addresses the question of whether these varieties have been converging, diverging, or maintaining equilibrium as a result of endogenous and exogenous phonetic and phonological changes. We argue that it is only possible to identify such patterns of change by the simultaneous comparison of a wide range of varieties of a language across a data set that has not been specifically selected to highlight those changes that are believed to be important. Our analysis suggests that although there has been an obvious reduction in regional variation with the loss of traditional dialects of English and Scots, there has not been any significant convergence (or divergence) of regional accents of English in recent decades, despite the rapid spread of a number of features such as TH-fronting.
  • Menenti, L. (2010). The right language: Differential hemispheric contributions to language production and comprehension in context. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Merritt, D. J., Casasanto, D., & Brannon, E. M. (2010). Do monkeys think in metaphors? Representations of space and time in monkeys and humans. Cognition, 117, 191-202. doi:10.1016/j.cognition.2010.08.011.

    Abstract

    Research on the relationship between the representation of space and time has produced two contrasting proposals. ATOM posits that space and time are represented via a common magnitude system, suggesting a symmetrical relationship between space and time. According to metaphor theory, however, representations of time depend on representations of space asymmetrically. Previous findings in humans have supported metaphor theory. Here, we investigate the relationship between time and space in a nonverbal species, by testing whether non-human primates show space–time interactions consistent with metaphor theory or with ATOM. We tested two rhesus monkeys and 16 adult humans in a nonverbal task that assessed the influence of an irrelevant dimension (time or space) on a relevant dimension (space or time). In humans, spatial extent had a large effect on time judgments whereas time had a small effect on spatial judgments. In monkeys, both spatial and temporal manipulations showed large bi-directional effects on judgments. In contrast to humans, spatial manipulations in monkeys did not produce a larger effect on temporal judgments than the reverse. Thus, consistent with previous findings, human adults showed asymmetrical space–time interactions that were predicted by metaphor theory. In contrast, monkeys showed patterns that were more consistent with ATOM.
  • Meulenbroek, O., Kessels, R. P. C., De Rover, M., Petersson, K. M., Olde Rikkert, M. G. M., Rijpkema, M., & Fernández, G. (2010). Age-effects on associative object-location memory. Brain Research, 1315, 100-110. doi:10.1016/j.brainres.2009.12.011.

    Abstract

    Aging is accompanied by an impairment of associative memory. The medial temporal lobe and fronto-striatal network, both involved in associative memory, are known to decline functionally and structurally with age, leading to the so-called associative binding deficit and the resource deficit. Because the MTL and fronto-striatal network interact, they might also be able to support each other. We therefore employed an episodic memory task probing memory for sequences of object–location associations, where the demand on self-initiated processing was manipulated during encoding: either all the objects were visible simultaneously (rich environmental support) or every object became visible transiently (poor environmental support). Following the concept of resource deficit, we hypothesised that the elderly probably have difficulty using their declarative memory system when demands on self-initiated processing are high (poor environmental support). Our behavioural study showed that only the young use the rich environmental support in a systematic way, by placing the objects next to each other. With the task adapted for fMRI, we found that elderly showed stronger activity than young subjects during retrieval of environmentally richly encoded information in the basal ganglia, thalamus, left middle temporal/fusiform gyrus and right medial temporal lobe (MTL). These results indicate that rich environmental support leads to recruitment of the declarative memory system in addition to the fronto-striatal network in elderly, while the young use more posterior brain regions likely related to imagery. We propose that elderly try to solve the task by additional recruitment of stimulus-response associations, which might partly compensate their limited attentional resources.
