Publications

  • Arunkumar, M., Van Paridon, J., Ostarek, M., & Huettig, F. (2021). Do illiterates have illusions? A conceptual (non)replication of Luria (1976). Journal of Cultural Cognitive Science, 5, 143-158. doi:10.1007/s41809-021-00080-x.

    Abstract

    Luria (1976) famously observed that people who never learnt to read and write do not perceive visual illusions. We conducted a conceptual replication of the Luria study of the effect of literacy on the processing of visual illusions. We designed two carefully controlled experiments with 161 participants with varying literacy levels, ranging from complete illiterates to high literates, in Chennai, India. Accuracy and reaction time in the identification of two types of visual shape and color illusions and the identification of appropriate control images were measured. Separate statistical analyses of Experiments 1 and 2, as well as pooled analyses of both experiments, do not provide any support for the notion that literacy affects the perception of visual illusions. Our large-sample, carefully controlled study strongly suggests that literacy does not meaningfully affect the identification of visual illusions and raises questions about other reports of cultural effects on illusion perception.
  • Bakker-Marshall, I., Takashima, A., Fernandez, C. B., Janzen, G., McQueen, J. M., & Van Hell, J. G. (2021). Overlapping and distinct neural networks supporting novel word learning in bilinguals and monolinguals. Bilingualism: Language and Cognition, 24(3), 524-536. doi:10.1017/S1366728920000589.

    Abstract

    This study investigated how bilingual experience alters neural mechanisms supporting novel word learning. We hypothesised that novel words elicit increased semantic activation in the larger bilingual lexicon, potentially stimulating stronger memory integration than in monolinguals. English monolinguals and Spanish–English bilinguals were trained on two sets of written Swahili–English word pairs, one set on each of two consecutive days, and performed a recognition task in the MRI scanner. Lexical integration was measured through visual primed lexical decision. Surprisingly, no group difference emerged in explicit word memory, and priming occurred only in the monolingual group. This difference in lexical integration may indicate an increased need for slow neocortical interleaving of old and new information in the denser bilingual lexicon. The fMRI data were consistent with increased use of cognitive control networks in monolinguals and of articulatory motor processes in bilinguals, providing further evidence for experience-induced neural changes: monolinguals and bilinguals reached largely comparable behavioural performance levels in novel word learning, but did so by recruiting partially overlapping but non-identical neural systems to acquire novel words.
  • Drijvers, L., Jensen, O., & Spaak, E. (2021). Rapid invisible frequency tagging reveals nonlinear integration of auditory and visual information. Human Brain Mapping, 42(4), 1138-1152. doi:10.1002/hbm.25282.

    Abstract

    During communication in real-life settings, the brain integrates information from auditory and visual modalities to form a unified percept of our environment. In the current magnetoencephalography (MEG) study, we used rapid invisible frequency tagging (RIFT) to generate steady-state evoked fields and investigated the integration of audiovisual information in a semantic context. We presented participants with videos of an actress uttering action verbs (auditory; tagged at 61 Hz) accompanied by a gesture (visual; tagged at 68 Hz, using a projector with a 1440 Hz refresh rate). Integration ease was manipulated by auditory factors (clear/degraded speech) and visual factors (congruent/incongruent gesture). We identified MEG spectral peaks at the individual (61/68 Hz) tagging frequencies. We furthermore observed a peak at the intermodulation frequency of the auditory and visually tagged signals (f_visual − f_auditory = 7 Hz), specifically when integration was easiest (i.e., when speech was clear and accompanied by a congruent gesture). This intermodulation peak is a signature of nonlinear audiovisual integration, and was strongest in left inferior frontal gyrus and left temporal regions, areas known to be involved in speech-gesture integration. The enhanced power at the intermodulation frequency thus reflects the ease of integration and demonstrates that speech-gesture information interacts in higher-order language areas. Furthermore, we provide a proof-of-principle of the use of RIFT to study the integration of audiovisual stimuli, in relation to, for instance, semantic context.
  • Duprez, J., Stokkermans, M., Drijvers, L., & Cohen, M. X. (2021). Synchronization between keyboard typing and neural oscillations. Journal of Cognitive Neuroscience, 33(5), 887-901. doi:10.1162/jocn_a_01692.

    Abstract

    Rhythmic neural activity synchronizes with certain rhythmic behaviors, such as breathing, sniffing, saccades, and speech. The extent to which neural oscillations synchronize with higher-level and more complex behaviors is largely unknown. Here we investigated electrophysiological synchronization with keyboard typing, an omnipresent behavior that vast numbers of people engage in daily. Keyboard typing is rhythmic, with frequency characteristics roughly the same as neural oscillatory dynamics associated with cognitive control, notably through midfrontal theta (4–7 Hz) oscillations. We tested the hypothesis that synchronization occurs between typing and midfrontal theta, and breaks down when errors are committed. Thirty healthy participants typed words and sentences on a keyboard without visual feedback, while EEG was recorded. Typing rhythmicity was investigated by inter-keystroke interval analyses and by a kernel density estimation method. We used a multivariate spatial filtering technique to investigate frequency-specific synchronization between typing and neuronal oscillations. Our results demonstrate theta rhythmicity in typing (around 6.5 Hz) through the two different behavioral analyses. Synchronization between typing and neuronal oscillations occurred at frequencies ranging from 4 to 15 Hz, but to a larger extent for lower frequencies. However, peak synchronization frequency was idiosyncratic across subjects, and therefore specific neither to theta nor to midfrontal regions, and correlated somewhat with peak typing frequency. Errors and trials associated with stronger cognitive control were not associated with changes in synchronization at any frequency. As a whole, this study shows that brain-behavior synchronization does occur during keyboard typing but is not specific to midfrontal theta.
  • Eekhof, L. S., Kuijpers, M. M., Faber, M., Gao, X., Mak, M., Van den Hoven, E., & Willems, R. M. (2021). Lost in a story, detached from the words. Discourse Processes, 58(7), 595-616. doi:10.1080/0163853X.2020.1857619.

    Abstract

    This article explores the relationship between low- and high-level aspects of reading by studying the interplay between word processing, as measured with eye tracking, and narrative absorption and liking, as measured with questionnaires. Specifically, we focused on how individual differences in sensitivity to lexical word characteristics—measured as the effect of these characteristics on gaze duration—were related to narrative absorption and liking. By reanalyzing a large data set consisting of three previous eye-tracking experiments in which subjects (N = 171) read literary short stories, we replicated the well-established finding that word length, lemma frequency, position in sentence, age of acquisition, and orthographic neighborhood size of words influenced gaze duration. More importantly, we found that individual differences in the degree of sensitivity to three of these word characteristics, i.e., word length, lemma frequency, and age of acquisition, were negatively related to print exposure and to a lesser degree to narrative absorption and liking. Even though the underlying mechanisms of this relationship are still unclear, we believe the current findings underline the need to map out the interplay between the technical processes of reading, on the one hand, and the subjective processes, on the other, by studying reading behavior in more natural settings.

    Additional information

    Analysis scripts and data
  • Eekhof, L. S., Van Krieken, K., Sanders, J., & Willems, R. M. (2021). Reading minds, reading stories: Social-cognitive abilities affect the linguistic processing of narrative viewpoint. Frontiers in Psychology, 12: 698986. doi:10.3389/fpsyg.2021.698986.

    Abstract

    Although various studies have shown that narrative reading draws on social-cognitive abilities, not much is known about the precise aspects of narrative processing that engage these abilities. We hypothesized that the linguistic processing of narrative viewpoint—expressed by elements that provide access to the inner world of characters—might play an important role in engaging social-cognitive abilities. Using eye tracking, we studied the effect of lexical markers of perceptual, cognitive, and emotional viewpoint on eye movements during reading of a 5,000-word narrative. Next, we investigated how this relationship was modulated by individual differences in social-cognitive abilities. Our results show diverging patterns of eye movements for perceptual viewpoint markers on the one hand, and cognitive and emotional viewpoint markers on the other. Whereas the former are processed relatively fast compared to non-viewpoint markers, the latter are processed relatively slowly. Moreover, we found that social-cognitive abilities impacted the processing of words in general, and of perceptual and cognitive viewpoint markers in particular, such that both perspective-taking abilities and self-reported perspective-taking traits facilitated the processing of these markers. All in all, our study extends earlier findings that social cognition is of importance for story reading, showing that individual differences in social-cognitive abilities are related to the linguistic processing of narrative viewpoint.

    Additional information

    supplementary material
  • Healthy Brain Study Consortium, Aarts, E., Akkerman, A., Altgassen, M., Bartels, R., Beckers, D., Bevelander, K., Bijleveld, E., Blaney Davidson, E., Boleij, A., Bralten, J., Cillessen, T., Claassen, J., Cools, R., Cornelissen, I., Dresler, M., Eijsvogels, T., Faber, M., Fernández, G., Figner, B., Fritsche, M., Füllbrunn, S., Gayet, S., Van Gelder, M. M. H. J., Van Gerven, M., Geurts, S., Greven, C. U., Groefsema, M., Haak, K., Hagoort, P., Hartman, Y., Van der Heijden, B., Hermans, E., Heuvelmans, V., Hintz, F., Den Hollander, J., Hulsman, A. M., Idesis, S., Jaeger, M., Janse, E., Janzing, J., Kessels, R. P. C., Karremans, J. C., De Kleijn, W., Klein, M., Klumpers, F., Kohn, N., Korzilius, H., Krahmer, B., De Lange, F., Van Leeuwen, J., Liu, H., Luijten, M., Manders, P., Manevska, K., Marques, J. P., Matthews, J., McQueen, J. M., Medendorp, P., Melis, R., Meyer, A. S., Oosterman, J., Overbeek, L., Peelen, M., Popma, J., Postma, G., Roelofs, K., Van Rossenberg, Y. G. T., Schaap, G., Scheepers, P., Selen, L., Starren, M., Swinkels, D. W., Tendolkar, I., Thijssen, D., Timmerman, H., Tutunji, R., Tuladhar, A., Veling, H., Verhagen, M., Verkroost, J., Vink, J., Vriezekolk, V., Vrijsen, J., Vyrastekova, J., Van der Wal, S., Willems, R. M., & Willemsen, A. (2021). Protocol of the Healthy Brain Study: An accessible resource for understanding the human brain and how it dynamically and individually operates in its bio-social context. PLoS One, 16(12): e0260952. doi:10.1371/journal.pone.0260952.

    Abstract

    The endeavor to understand the human brain has seen more progress in the last few decades than in the previous two millennia. Still, our understanding of how the human brain relates to behavior in the real world and how this link is modulated by biological, social, and environmental factors is limited. To address this, we designed the Healthy Brain Study (HBS), an interdisciplinary, longitudinal, cohort study based on multidimensional, dynamic assessments in both the laboratory and the real world. Here, we describe the rationale and design of the currently ongoing HBS. The HBS is examining a population-based sample of 1,000 healthy participants (age 30-39) who are thoroughly studied across an entire year. Data are collected through cognitive, affective, behavioral, and physiological testing, neuroimaging, bio-sampling, questionnaires, ecological momentary assessment, and real-world assessments using wearable devices. These data will become an accessible resource for the scientific community, enabling the next step in understanding the human brain and how it dynamically and individually operates in its bio-social context. An access procedure to the collected data and bio-samples is in place and published on https://www.healthybrainstudy.nl/en/data-and-methods.

    https://www.trialregister.nl/trial/7955

    Additional information

    supplementary material
  • Heidlmayr, K., Ferragne, E., & Isel, F. (2021). Neuroplasticity in the phonological system: The PMN and the N400 as markers for the perception of non-native phonemic contrasts by late second language learners. Neuropsychologia, 156: 107831. doi:10.1016/j.neuropsychologia.2021.107831.

    Abstract

    Second language (L2) learners frequently encounter persistent difficulty in perceiving certain non-native sound contrasts, i.e., a phenomenon called “phonological deafness”. However, if extensive L2 experience leads to neuroplastic changes in the phonological system, then the capacity to discriminate non-native phonemic contrasts should progressively improve. Such perceptual changes should be attested by modifications at the neurophysiological level. We designed an EEG experiment in which the listeners’ perceptual capacities to discriminate second language phonemic contrasts influence the processing of lexical-semantic violations. Semantic congruency of critical words in a sentence context was driven by a phonemic contrast that was unique to the L2, English (e.g., /ɪ/-/iː/, ship – sheep). Twenty-eight young adult native speakers of French with intermediate proficiency in English listened to sentences that contained either a semantically congruent or incongruent critical word (e.g., The anchor of the ship/*sheep was let down) while EEG was recorded. Three ERP effects were found to relate to increasing L2 proficiency: (1) a left frontal auditory N100 effect, (2) a smaller fronto-central phonological mismatch negativity (PMN) effect and (3) a semantic N400 effect. No effect of proficiency was found on oscillatory markers. The current findings suggest that neuronal plasticity in the human brain allows for the late acquisition of even hard-wired linguistic features such as the discrimination of phonemic contrasts in a second language. This is the first time that behavioral and neurophysiological evidence for the critical role of neural plasticity underlying L2 phonological processing and its interdependence with semantic processing has been provided. Our data strongly support the idea that pieces of information from different levels of linguistic processing (e.g., phonological, semantic) strongly interact and influence each other during online language processing.

    Additional information

    supplementary material
  • Heyselaar, E., Peeters, D., & Hagoort, P. (2021). Do we predict upcoming speech content in naturalistic environments? Language, Cognition and Neuroscience, 36(4), 440-461. doi:10.1080/23273798.2020.1859568.

    Abstract

    The ability to predict upcoming actions is a hallmark of cognition. It remains unclear, however, whether the predictive behaviour observed in controlled lab environments generalises to rich, everyday settings. In four virtual reality experiments, we tested whether a well-established marker of linguistic prediction (anticipatory eye movements) replicated when increasing the naturalness of the paradigm by means of immersing participants in naturalistic scenes (Experiment 1), increasing the number of distractor objects (Experiment 2), modifying the proportion of predictable noun-referents (Experiment 3), and manipulating the location of referents relative to the joint attentional space (Experiment 4). Robust anticipatory eye movements were observed for Experiments 1–3. The anticipatory effect disappeared, however, in Experiment 4. Our findings suggest that predictive processing occurs in everyday communication if the referents are situated in the joint attentional space. Methodologically, our study confirms that ecological validity and experimental control may go hand-in-hand in the study of human predictive behaviour.
  • Hoeksema, N., Verga, L., Mengede, J., Van Roessel, C., Villanueva, S., Salazar-Casals, A., Rubio-Garcia, A., Curcic-Blake, B., Vernes, S. C., & Ravignani, A. (2021). Neuroanatomy of the grey seal brain: Bringing pinnipeds into the neurobiological study of vocal learning. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 376: 20200252. doi:10.1098/rstb.2020.0252.

    Abstract

    Comparative studies of vocal learning and vocal non-learning animals can increase our understanding of the neurobiology and evolution of vocal learning and human speech. Mammalian vocal learning is understudied: most research has either focused on vocal learning in songbirds or its absence in non-human primates. Here we focus on a highly promising model species for the neurobiology of vocal learning: grey seals. We provide a neuroanatomical atlas (based on dissected brain slices and magnetic resonance images), a labelled MRI template, a 3D model with volumetric measurements of brain regions, and histological cortical stainings. Four main features of the grey seal brain stand out. (1) It is relatively big and highly convoluted. (2) It hosts a relatively large temporal lobe and cerebellum, structures which could support developed timing abilities and acoustic processing. (3) The cortex is similar to humans in thickness and shows the expected six-layered mammalian structure. (4) Expression of FoxP2 - a gene involved in vocal learning and spoken language - is present in deeper layers of the cortex. Our results could facilitate future studies targeting the neural and genetic underpinnings of mammalian vocal learning, thus bridging the research gap from songbirds to humans and non-human primates.
  • Horan Skilton, A., & Peeters, D. (2021). Cross-linguistic differences in demonstrative systems: Comparing spatial and non-spatial influences on demonstrative use in Ticuna and Dutch. Journal of Pragmatics, 180, 248-265. doi:10.1016/j.pragma.2021.05.001.

    Abstract

    In all spoken languages, speakers use demonstratives – words like this and that – to refer to entities in their immediate environment. But which factors determine whether they use one demonstrative (this) or another (that)? Here we report the results of an experiment examining the effects of referent visibility, referent distance, and addressee location on the production of demonstratives by speakers of Ticuna (isolate; Brazil, Colombia, Peru), an Amazonian language with four demonstratives, and speakers of Dutch (Indo-European; Netherlands, Belgium), which has two demonstratives. We found that Ticuna speakers’ use of demonstratives displayed effects of addressee location and referent distance, but not referent visibility. By contrast, under comparable conditions, Dutch speakers displayed sensitivity only to referent distance. Interestingly, we also observed that Ticuna speakers consistently used demonstratives in all referential utterances in our experimental paradigm, while Dutch speakers strongly preferred to use definite articles. Taken together, these findings shed light on the significant diversity found in demonstrative systems across languages. Additionally, they invite researchers studying exophoric demonstratives to broaden their horizons by cross-linguistically investigating the factors involved in speakers’ choice of demonstratives over other types of referring expressions, especially articles.
  • Huizeling, E., Wang, H., Holland, C., & Kessler, K. (2021). Changes in theta and alpha oscillatory signatures of attentional control in older and middle age. European Journal of Neuroscience, 54(1), 4314-4337. doi:10.1111/ejn.15259.

    Abstract

    Recent behavioural research has reported age-related changes in the costs of refocusing attention from a temporal (rapid serial visual presentation) to a spatial (visual search) task. Using magnetoencephalography, we have now compared the neural signatures of attention refocusing between three age groups (19–30, 40–49 and 60+ years) and found differences in task-related modulation and cortical localisation of alpha and theta oscillations. Efficient, faster refocusing in the youngest group compared to both middle age and older groups was reflected in parietal theta effects that were significantly reduced in the older groups. Residual parietal theta activity in older individuals was beneficial to attentional refocusing and could reflect preserved attention mechanisms. Slowed refocusing of attention, especially when a target required consolidation, in the older and middle-aged adults was accompanied by a posterior theta deficit and increased recruitment of frontal (middle-aged and older groups) and temporal (older group only) areas, demonstrating a posterior to anterior processing shift. Theta but not alpha modulation correlated with task performance, suggesting that older adults' stronger and more widely distributed alpha power modulation could reflect decreased neural precision or dedifferentiation, though this possibility requires further investigation. Our results demonstrate that older adults present with different alpha and theta oscillatory signatures during attentional control, reflecting cognitive decline and, potentially, also different cognitive strategies in an attempt to compensate for decline.

    Additional information

    supplementary material
  • Levshina, N. (2021). Cross-linguistic trade-offs and causal relationships between cues to grammatical subject and object, and the problem of efficiency-related explanations. Frontiers in Psychology, 12: 648200. doi:10.3389/fpsyg.2021.648200.

    Abstract

    Cross-linguistic studies focus on inverse correlations (trade-offs) between linguistic variables that reflect different cues to linguistic meanings. For example, if a language has no case marking, it is likely to rely on word order as a cue for identification of grammatical roles. Such inverse correlations are interpreted as manifestations of language users’ tendency to use language efficiently. The present study argues that this interpretation is problematic. Linguistic variables, such as the presence of case, or flexibility of word order, are aggregate properties, which do not represent the use of linguistic cues in context directly. Still, such variables can be useful for circumscribing the potential role of communicative efficiency in language evolution, if we move from cross-linguistic trade-offs to multivariate causal networks. This idea is illustrated by a case study of linguistic variables related to four types of Subject and Object cues: case marking, rigid word order of Subject and Object, tight semantics and verb-medial order. The variables are obtained from online language corpora in thirty languages annotated with Universal Dependencies. The causal model suggests that the relationships between the variables can be explained predominantly by sociolinguistic factors, leaving little space for a potential impact of efficient linguistic behavior.
  • Levshina, N., & Moran, S. (2021). Efficiency in human languages: Corpus evidence for universal principles. Linguistics Vanguard, 7(s3): 20200081. doi:10.1515/lingvan-2020-0081.

    Abstract

    Over the last few years, there has been a growing interest in communicative efficiency. It has been argued that language users act efficiently, saving effort for processing and articulation, and that language structure and use reflect this tendency. The emergence of new corpus data has given rise to numerous studies on efficient language use in the lexicon, in morphosyntax, and in discourse and phonology in different languages. In this introductory paper, we discuss communicative efficiency in human languages, focusing on evidence of efficient language use found in multilingual corpora. The evidence suggests that efficiency is a universal feature of human language. We provide an overview of different manifestations of efficiency on different levels of language structure, and we discuss the major questions and findings so far, some of which are addressed for the first time in the contributions in this special collection.
  • Levshina, N. (2021). Communicative efficiency and differential case marking: A reverse-engineering approach. Linguistics Vanguard, 7(s3): 20190087. doi:10.1515/lingvan-2019-0087.
  • Lopopolo, A., Van den Bosch, A., Petersson, K. M., & Willems, R. M. (2021). Distinguishing syntactic operations in the brain: Dependency and phrase-structure parsing. Neurobiology of Language, 2(1), 152-175. doi:10.1162/nol_a_00029.

    Abstract

    Finding the structure of a sentence — the way its words hold together to convey meaning — is a fundamental step in language comprehension. Several brain regions, including the left inferior frontal gyrus (IFG), the left posterior superior temporal gyrus (pSTG), and the left anterior temporal pole (ATP), are thought to support this operation. The exact role of these areas is nonetheless still debated. In this paper we investigate the hypothesis that different brain regions could be sensitive to different kinds of syntactic computations. We compare the fit of phrase-structure and dependency-structure descriptors to activity in brain areas using fMRI. Our results show a division between areas with regard to the type of structure computed, with the left ATP and left IFG favouring dependency structures and the left pSTG favouring phrase structures.
  • Mak, M., & Willems, R. M. (2021). Eyelit: Eye movement and reader response data during literary reading. Journal of Open Humanities Data, 7: 25. doi:10.5334/johd.49.

    Abstract

    An eye-tracking data set is described of 102 participants reading three Dutch literary short stories each (7790 words in total per participant). The pre-processed data set includes (1) Fixation report, (2) Saccade report, (3) Interest Area report, (4) Trial report (aggregated data for each page), (5) Sample report (sampling rate = 500 Hz), (6) Questionnaire data on reading experiences and participant characteristics, and (7) word characteristics for all words (with the potential of calculating additional word characteristics). The data set is stored on DANS and can be used to study word characteristics, literary reading, and all facets of eye movements.
  • Mickan, A., McQueen, J. M., Valentini, B., Piai, V., & Lemhöfer, K. (2021). Electrophysiological evidence for cross-language interference in foreign-language attrition. Neuropsychologia, 155: 107795. doi:10.1016/j.neuropsychologia.2021.107795.

    Abstract

    Foreign language attrition (FLA) appears to be driven by interference from other, more recently used languages (Mickan et al., 2020). Here we tracked these interference dynamics electrophysiologically to further our understanding of the underlying processes. Twenty-seven Dutch native speakers learned 70 new Italian words over two days. On a third day, EEG was recorded as they performed naming tasks on half of these words in English and, finally, as their memory for all the Italian words was tested in a picture-naming task. Replicating Mickan et al., recall was slower and tended to be less complete for Italian words that were interfered with (i.e., named in English) than for words that were not. These behavioral interference effects were accompanied by an enhanced frontal N2 and a decreased late positivity (LPC) for interfered compared to not-interfered items. Moreover, interfered items elicited more theta power. We also found an increased N2 during the interference phase for items that participants were later slower to retrieve in Italian. We interpret the N2 and theta effects as markers of interference, in line with the idea that Italian retrieval at final test is hampered by competition from recently practiced English translations. The LPC, in turn, reflects the consequences of interference: the reduced accessibility of interfered Italian labels. Finally, that retrieval ease at final test was related to the degree of interference during previous English retrieval shows that FLA is already set in motion during the interference phase, and hence can be the direct consequence of using other languages.

    Additional information

    data via Donders Repository
  • Misersky, J., Slivac, K., Hagoort, P., & Flecken, M. (2021). The State of the Onion: Grammatical aspect modulates object representation during event comprehension. Cognition, 214: 104744. doi:10.1016/j.cognition.2021.104744.

    Abstract

    The present ERP study assessed whether grammatical aspect is used as a cue in online event comprehension, in particular when reading about events in which an object is visually changed. While perfective aspect cues holistic event representations, including an event's endpoint, progressive aspect highlights intermediate phases of an event. In a 2 × 3 design, participants read SVO sentences describing a change-of-state event (e.g., to chop an onion), with grammatical Aspect manipulated (perfective “chopped” vs progressive “was chopping”). Thereafter, they saw a Picture of an object either having undergone substantial state-change (SC; a chopped onion), no state-change (NSC; an onion in its original state) or an unrelated object (U; a cactus, acting as control condition). Their task was to decide whether the object in the Picture was mentioned in the sentence. We focused on N400 modulation, with ERPs time-locked to picture onset. U pictures elicited an N400 response as expected, suggesting detection of categorical mismatches in object type. For SC and NSC pictures, a whole-head follow-up analysis revealed a P300, implying people were engaged in detailed evaluation of pictures of matching objects. SC pictures elicited the most positive responses overall. Crucially, there was an interaction of Aspect and Picture: SC pictures resulted in a higher amplitude P300 after sentences in the perfective compared to the progressive. Thus, while the perfective cued for a holistic event representation, including the resultant state of the affected object (i.e., the chopped onion) constraining object representations online, the progressive defocused event completion and object-state change. Grammatical aspect thus guided online event comprehension by cueing the visual representation(s) of an object's state.
  • Montero-Melis, G. (2021). Consistency in motion event encoding across languages. Frontiers in Psychology, 12: 625153. doi:10.3389/fpsyg.2021.625153.

    Abstract

    Syntactic templates serve as schemas, allowing speakers to describe complex events in a systematic fashion. Motion events have long served as a prime example of how different languages favor different syntactic frames, in turn biasing their speakers towards different event conceptualizations. However, there is also variability in how motion events are syntactically framed within languages. Here we measure the consistency in event encoding in two languages, Spanish and Swedish. We test a dominant account in the literature, namely that variability within a language can be explained by specific properties of the events. This event-properties account predicts that descriptions of one and the same event should be consistent within a language, even in languages where there is overall variability in the use of syntactic frames. Spanish and Swedish speakers (N=84) described 32 caused motion events. While the most frequent syntactic framing in each language was as expected based on typology (Spanish: verb-framed, Swedish: satellite-framed, cf. Talmy, 2000), Swedish descriptions were substantially more consistent than Spanish descriptions. Swedish speakers almost invariably encoded all events with a single syntactic frame and systematically conveyed manner of motion. Spanish descriptions, in contrast, varied much more regarding syntactic framing and expression of manner. Crucially, variability in Spanish descriptions was not mainly a function of differences between events, as predicted by the event-properties account. Rather, Spanish variability in syntactic framing was driven by speaker biases. A similar picture arose for whether Spanish descriptions expressed manner information or not: Even after accounting for the effect of syntactic choice, a large portion of the variance in Spanish manner encoding remained attributable to differences among speakers. The results show that consistency in motion event encoding starkly differs across languages: Some languages (like Swedish) bias their speakers towards a particular linguistic event schema much more than others (like Spanish). Implications of these findings are discussed with respect to the typology of event framing, theories on the relationship between language and thought, and speech planning. In addition, the tools employed here to quantify variability can be applied to other domains of language.

    Additional information

    data and analysis scripts
  • Nieuwland, M. S. (2021). How ‘rational’ is semantic prediction? A critique and re-analysis of Delaney-Busch, Morgan, Lau, and Kuperberg (2019). Cognition, 215: 104848. doi:10.1016/j.cognition.2021.104848.

    Abstract

    In a recent article in Cognition, Delaney-Busch et al. (2019) claim evidence for ‘rational’, Bayesian adaptation of semantic predictions, using ERP data from Lau, Holcomb, and Kuperberg (2013). Participants read associatively related and unrelated prime-target word pairs in a first block with only 10% related trials and a second block with 50%. Related words elicited smaller N400s than unrelated words, and this difference was strongest in the second block, suggesting greater engagement in predictive processing. Using a rational adaptor model, Delaney-Busch et al. argue that the stronger N400 reduction for related words in the second block developed as a function of the number of related trials, and concluded therefore that participants predicted related words more strongly when their predictions were fulfilled more often. In this critique, I discuss two critical flaws in their analyses, namely the confounding of prediction effects with those of lexical frequency and the neglect of data from the first block. Re-analyses suggest a different picture: related words by themselves did not yield support for their conclusion, and the effect of relatedness gradually strengthened across the two blocks in a similar way. Therefore, the N400 did not yield evidence that participants rationally adapted their semantic predictions. Within the framework proposed by Delaney-Busch et al., presumed semantic predictions may even be thought of as ‘irrational’. While these results yielded no evidence for rational or probabilistic prediction, they do suggest that participants became increasingly better at predicting target words from prime words.
  • Nieuwland, M. S. (2021). Commentary: Rational adaptation in lexical prediction: The influence of prediction strength. Frontiers in Psychology, 12: 735849. doi:10.3389/fpsyg.2021.735849.
  • Ortega, G., & Ostarek, M. (2021). Evidence for visual simulation during sign language processing. Journal of Experimental Psychology: General, 150(10), 2158-2166. doi:10.1037/xge0001041.

    Abstract

    What are the mental processes that allow us to understand the meaning of words? A large body of evidence suggests that when we process speech, we engage a process of perceptual simulation whereby sensorimotor states are activated as a source of semantic information. But does the same process take place when words are expressed with the hands and perceived through the eyes? To date, it is not known whether perceptual simulation is also observed in sign languages, the manual-visual languages of deaf communities. Continuous flash suppression is a method that addresses this question by measuring the effect of language on detection sensitivity to images that are suppressed from awareness. In spoken languages, it has been reported that listening to a word (e.g., “bottle”) activates visual features of an object (e.g., the shape of a bottle), and this in turn facilitates image detection. An interesting but untested question is whether the same process takes place when deaf signers see signs. We found that processing signs boosted the detection of congruent images, making otherwise invisible pictures visible. A boost of visual processing was observed only for signers but not for hearing nonsigners, suggesting that the penetration of the visual system through signs requires a fully fledged manual language. Iconicity did not modulate the effect of signs on detection, neither in signers nor in hearing nonsigners. This suggests that visual simulation during language processing occurs regardless of language modality (sign vs. speech) or iconicity, pointing to a foundational role of simulation for language comprehension.

    Additional information

    supplementary material
  • Ostarek, M., & Bottini, R. (2021). Towards strong inference in research on embodiment – Possibilities and limitations of causal paradigms. Journal of Cognition, 4(1): 5. doi:10.5334/joc.139.

    Abstract

    A central question in the cognitive sciences is which role embodiment plays for high-level cognitive functions, such as conceptual processing. Here, we propose that one reason why progress regarding this question has been slow is a lacking focus on what Platt (1964) called “strong inference”. Strong inference is possible when results from an experimental paradigm are not merely consistent with a hypothesis, but provide decisive evidence for one particular hypothesis compared to competing hypotheses. We discuss how causal paradigms, which test the functional relevance of sensory-motor processes for high-level cognitive functions, can move the field forward. In particular, we explore how congenital sensory-motor disorders, acquired sensory-motor deficits, and interference paradigms with healthy participants can be utilized as an opportunity to better understand the role of sensory experience in conceptual processing. Whereas all three approaches can bring about valuable insights, we highlight that the study of congenital and acquired sensorimotor disorders is particularly effective in the case of conceptual domains with a strong unimodal basis (e.g., colors), whereas interference paradigms with healthy participants have a broader application, avoid many of the practical and interpretational limitations of patient studies, and allow a systematic and step-wise progressive inference approach to causal mechanisms.
  • Poletiek, F. H., Monaghan, P., van de Velde, M., & Bocanegra, B. R. (2021). The semantics-syntax interface: Learning grammatical categories and hierarchical syntactic structure through semantics. Journal of Experimental Psychology: Learning, Memory, and Cognition, 47(7), 1141-1155. doi:10.1037/xlm0001044.

    Abstract

    Language is infinitely productive because syntax defines dependencies between grammatical categories of words and constituents, so there is interchangeability of these words and constituents within syntactic structures. Previous laboratory-based studies of language learning have shown that complex language structures like hierarchical center embeddings (HCEs) are very hard to learn, but these studies tend to simplify the language learning task, omitting semantics and focusing either on learning dependencies between individual words or on acquiring the category membership of those words. We tested whether categories of words and dependencies between these categories and between constituents could be learned simultaneously in an artificial language with HCEs, when accompanied by scenes illustrating the sentence’s intended meaning. Across four experiments, we showed that participants were able to learn the HCE language, varying words across categories and category dependencies, and constituents across constituent dependencies. They were also able to generalize the learned structure to novel sentences and novel scenes that they had not previously experienced. This simultaneous learning, resulting in a productive complex language system, may be a consequence of grounding complex syntax acquisition in semantics.
  • Pouw, W., Proksch, S., Drijvers, L., Gamba, M., Holler, J., Kello, C., Schaefer, R. S., & Wiggins, G. A. (2021). Multilevel rhythms in multimodal communication. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 376: 20200334. doi:10.1098/rstb.2020.0334.

    Abstract

    It is now widely accepted that the brunt of animal communication is conducted via several modalities, e.g. acoustic and visual, either simultaneously or sequentially. This is a laudable multimodal turn relative to traditional accounts of temporal aspects of animal communication which have focused on a single modality at a time. However, the fields that are currently contributing to the study of multimodal communication are highly varied, and still largely disconnected given their sole focus on a particular level of description or their particular concern with human or non-human animals. Here, we provide an integrative overview of converging findings that show how multimodal processes occurring at neural, bodily, as well as social interactional levels each contribute uniquely to the complex rhythms that characterize communication in human and non-human animals. Though we address findings for each of these levels independently, we conclude that the most important challenge in this field is to identify how processes at these different levels connect.
  • Preisig, B., Riecke, L., Sjerps, M. J., Kösem, A., Kop, B. R., Bramson, B., Hagoort, P., & Hervais-Adelman, A. (2021). Selective modulation of interhemispheric connectivity by transcranial alternating current stimulation influences binaural integration. Proceedings of the National Academy of Sciences of the United States of America, 118(7): e2015488118. doi:10.1073/pnas.2015488118.

    Abstract

    Brain connectivity plays a major role in the encoding, transfer, and integration of sensory information. Interregional synchronization of neural oscillations in the γ-frequency band has been suggested as a key mechanism underlying perceptual integration. In a recent study, we found evidence for this hypothesis showing that the modulation of interhemispheric oscillatory synchrony by means of bihemispheric high-density transcranial alternating current stimulation (HD-TACS) affects binaural integration of dichotic acoustic features. Here, we aimed to establish a direct link between oscillatory synchrony, effective brain connectivity, and binaural integration. We experimentally manipulated oscillatory synchrony (using bihemispheric γ-TACS with different interhemispheric phase lags) and assessed the effect on effective brain connectivity and binaural integration (as measured with functional MRI and a dichotic listening task, respectively). We found that TACS reduced intrahemispheric connectivity within the auditory cortices and antiphase (interhemispheric phase lag 180°) TACS modulated connectivity between the two auditory cortices. Importantly, the changes in intra- and interhemispheric connectivity induced by TACS were correlated with changes in perceptual integration. Our results indicate that γ-band synchronization between the two auditory cortices plays a functional role in binaural integration, supporting the proposed role of interregional oscillatory synchrony in perceptual integration.
  • Santin, M., Van Hout, A., & Flecken, M. (2021). Event endings in memory and language. Language, Cognition and Neuroscience, 36(5), 625-648. doi:10.1080/23273798.2020.1868542.

    Abstract

    Memory is fundamental for comprehending and segmenting the flow of activity around us into units called “events”. Here, we investigate the effect of the movement dynamics of actions (ceased, ongoing) and the inner structure of events (with or without object-state change) on people's event memory. Furthermore, we investigate how describing events, and the meaning and form of the verb predicates used (denoting a culmination moment, or not, in single verbs or verb-satellite constructions), affects event memory. Before completing a surprise recognition task, Spanish and Mandarin speakers (who lexicalise culmination in different verb predicate forms) watched short videos of events, either in a non-verbal (probe-recognition) or a verbal experiment (event description). Results show that culminated events (i.e. ceased change-of-state events) were remembered best across experiments. Language use was found to enhance memory overall. Further, the form of the verb predicates used for denoting culmination had a moderate effect on memory.
  • Sauppe, S., & Flecken, M. (2021). Speaking for seeing: Sentence structure guides visual event apprehension. Cognition, 206: 104516. doi:10.1016/j.cognition.2020.104516.

    Abstract

    Human experience and communication are centred on events, and event apprehension is a rapid process that draws on the visual perception and immediate categorization of event roles (“who does what to whom”). We demonstrate a role for syntactic structure in visual information uptake for event apprehension. An event structure foregrounding either the agent or patient was activated during speaking, transiently modulating the apprehension of subsequently viewed unrelated events. Speakers of Dutch described pictures with actives and passives (agent and patient foregrounding, respectively). First fixations on pictures of unrelated events that were briefly presented (for 300 ms) next were influenced by the active or passive structure of the previously produced sentence. Going beyond the study of how single words cue object perception, we show that sentence structure guides the viewpoint taken during rapid event apprehension.

    Additional information

    supplementary material
  • Schubotz, L., Holler, J., Drijvers, L., & Ozyurek, A. (2021). Aging and working memory modulate the ability to benefit from visible speech and iconic gestures during speech-in-noise comprehension. Psychological Research, 85, 1997-2011. doi:10.1007/s00426-020-01363-8.

    Abstract

    When comprehending speech-in-noise (SiN), younger and older adults benefit from seeing the speaker’s mouth, i.e. visible speech. Younger adults additionally benefit from manual iconic co-speech gestures. Here, we investigate to what extent younger and older adults benefit from perceiving both visual articulators while comprehending SiN, and whether this is modulated by working memory and inhibitory control. Twenty-eight younger and 28 older adults performed a word recognition task in three visual contexts: mouth blurred (speech-only), visible speech, or visible speech + iconic gesture. The speech signal was either clear or embedded in multitalker babble. Additionally, there were two visual-only conditions (visible speech, visible speech + gesture). Accuracy levels for both age groups were higher when both visual articulators were present compared to either one or none. However, older adults received a significantly smaller benefit than younger adults, although they performed equally well in speech-only and visual-only word recognition. Individual differences in verbal working memory and inhibitory control partly accounted for age-related performance differences. To conclude, perceiving iconic gestures in addition to visible speech improves younger and older adults’ comprehension of SiN. Yet, the ability to benefit from this additional visual information is modulated by age and verbal working memory. Future research will have to show whether these findings extend beyond the single word level.

    Additional information

    supplementary material
  • Slivac, K., Hervais-Adelman, A., Hagoort, P., & Flecken, M. (2021). Linguistic labels cue biological motion perception and misperception. Scientific Reports, 11: 17239. doi:10.1038/s41598-021-96649-1.

    Abstract

    Linguistic labels exert a particularly strong top-down influence on perception. The potency of this influence has been ascribed to their ability to evoke category-diagnostic features of concepts. In doing this, they facilitate the formation of a perceptual template concordant with those features, effectively biasing perceptual activation towards the labelled category. In this study, we employ a cueing paradigm with moving, point-light stimuli across three experiments, in order to examine how the number of biological motion features (form and kinematics) encoded in lexical cues modulates the efficacy of lexical top-down influence on perception. We find that the magnitude of lexical influence on biological motion perception rises as a function of the number of biological motion-relevant features carried by both cue and target. When lexical cues encode multiple biological motion features, this influence is robust enough to mislead participants into reporting erroneous percepts, even when a masking level yielding high performance is used.
  • Trujillo, J. P., Ozyurek, A., Holler, J., & Drijvers, L. (2021). Speakers exhibit a multimodal Lombard effect in noise. Scientific Reports, 11: 16721. doi:10.1038/s41598-021-95791-0.

    Abstract

    In everyday conversation, we are often challenged with communicating in non-ideal settings, such as in noise. Increased speech intensity and larger mouth movements are used to overcome noise in constrained settings (the Lombard effect). How we adapt to noise in face-to-face interaction, the natural environment of human language use, where manual gestures are ubiquitous, is currently unknown. We asked Dutch adults to wear headphones with varying levels of multi-talker babble while attempting to communicate action verbs to one another. Using quantitative motion capture and acoustic analyses, we found that (1) noise is associated with increased speech intensity and enhanced gesture kinematics and mouth movements, and (2) acoustic modulation only occurs when gestures are not present, while kinematic modulation occurs regardless of co-occurring speech. Thus, in face-to-face encounters the Lombard effect is not constrained to speech but is a multimodal phenomenon where the visual channel carries most of the communicative burden.

    Additional information

    supplementary material
  • Van Bergen, G., & Hogeweg, L. (2021). Managing interpersonal discourse expectations: A comparative analysis of contrastive discourse particles in Dutch. Linguistics, 59(2), 333-360. doi:10.1515/ling-2021-0020.

    Abstract

    In this article we investigate how speakers manage discourse expectations in dialogue by comparing the meaning and use of three Dutch discourse particles, i.e. wel, toch and eigenlijk, which all express a contrast between their host utterance and a discourse-based expectation. The core meanings of toch, wel and eigenlijk are formally distinguished on the basis of two intersubjective parameters: (i) whether the particle marks alignment or misalignment between speaker and addressee discourse beliefs, and (ii) whether the particle requires an assessment of the addressee’s representation of mutual discourse beliefs. By means of a quantitative corpus study, we investigate to what extent the intersubjective meaning distinctions between wel, toch and eigenlijk are reflected in statistical usage patterns across different social situations. Results suggest that wel, toch and eigenlijk are lexicalizations of distinct generalized politeness strategies when expressing contrast in social interaction. Our findings call for an interdisciplinary approach to discourse particles in order to enhance our understanding of their functions in language.
  • Van Paridon, J., Ostarek, M., Arunkumar, M., & Huettig, F. (2021). Does neuronal recycling result in destructive competition? The influence of learning to read on the recognition of faces. Psychological Science, 32, 459-465. doi:10.1177/0956797620971652.

    Abstract

    Written language, a human cultural invention, is far too recent for dedicated neural infrastructure to have evolved in its service. Culturally newly acquired skills (e.g. reading) thus ‘recycle’ evolutionarily older circuits that originally evolved for different, but similar, functions (e.g. visual object recognition). The destructive competition hypothesis predicts that this neuronal recycling has detrimental behavioral effects on the cognitive functions a cortical network originally evolved for. In a study with 97 literate, low-literate, and illiterate participants from the same socioeconomic background, we find that even after adjusting for cognitive ability and test-taking familiarity, learning to read is associated with an increase, rather than a decrease, in object recognition abilities. These results are incompatible with the claim that neuronal recycling results in destructive competition and are consistent with the possibility that learning to read instead fine-tunes general object recognition mechanisms, a hypothesis that needs further neuroscientific investigation.

    Additional information

    supplemental material
  • Vega-Mendoza, M., Pickering, M. J., & Nieuwland, M. S. (2021). Concurrent use of animacy and event-knowledge during comprehension: Evidence from event-related potentials. Neuropsychologia, 152: 107724. doi:10.1016/j.neuropsychologia.2020.107724.

    Abstract

    In two ERP experiments, we investigated whether readers prioritize animacy over real-world event-knowledge during sentence comprehension. We used the paradigm of Paczynski and Kuperberg (2012), who argued that animacy is prioritized based on the observations that the ‘related anomaly effect’ (reduced N400s for context-related anomalous words compared to unrelated words) does not occur for animacy violations, and that animacy violations but not relatedness violations elicit P600 effects. Participants read passive sentences with plausible agents (e.g., The prescription for the mental disorder was written by the psychiatrist) or implausible agents that varied in animacy and semantic relatedness (schizophrenic/guard/pill/fence). In Experiment 1 (with a plausibility judgment task), plausible sentences elicited smaller N400s relative to all types of implausible sentences. Crucially, animate words elicited smaller N400s than inanimate words, and related words elicited smaller N400s than unrelated words, but Bayesian analysis revealed substantial evidence against an interaction between animacy and relatedness. Moreover, at the P600 time-window, we observed more positive ERPs for animate than inanimate words and for related than unrelated words at anterior regions. In Experiment 2 (without judgment task), we observed an N400 effect with animacy violations, but no other effects. Taken together, the results of our experiments fail to support a prioritized role of animacy information over real-world event-knowledge, but they support an interactive, constraint-based view on incremental semantic processing.
  • Willems, R. M., & Peelen, M. V. (2021). How context changes the neural basis of perception and language. iScience, 24(5): 102392. doi:10.1016/j.isci.2021.102392.

    Abstract

    Cognitive processes—from basic sensory analysis to language understanding—are typically contextualized. While the importance of considering context for understanding cognition has long been recognized in psychology and philosophy, it has not yet had much impact on cognitive neuroscience research, where cognition is often studied in decontextualized paradigms. Here, we present examples of recent studies showing that context changes the neural basis of diverse cognitive processes, including perception, attention, memory, and language. Within the domains of perception and language, we review neuroimaging results showing that context interacts with stimulus processing, changes activity in classical perception and language regions, and recruits additional brain regions that contribute crucially to naturalistic perception and language. We discuss how contextualized cognitive neuroscience will allow for discovering new principles of the mind and brain.
  • Zora, H., & Csépe, V. (2021). Perception of Prosodic Modulations of Linguistic and Paralinguistic Origin: Evidence From Early Auditory Event-Related Potentials. Frontiers in Neuroscience, 15: 797487. doi:10.3389/fnins.2021.797487.

    Abstract

    How listeners handle prosodic cues of linguistic and paralinguistic origin is a central question for spoken communication. In the present EEG study, we addressed this question by examining neural responses to variations in pitch accent (linguistic) and affective (paralinguistic) prosody in Swedish words, using a passive auditory oddball paradigm. The results indicated that changes in pitch accent and affective prosody elicited mismatch negativity (MMN) responses at around 200 ms, confirming the brain’s pre-attentive response to any prosodic modulation. The MMN amplitude was, however, statistically larger to the deviation in affective prosody in comparison to the deviation in pitch accent and affective prosody combined, which is in line with previous research indicating not only a larger MMN response to affective prosody in comparison to neutral prosody but also a smaller MMN response to multidimensional deviants than unidimensional ones. The results, further, showed a significant P3a response to the affective prosody change in comparison to the pitch accent change at around 300 ms, in accordance with previous findings showing an enhanced positive response to emotional stimuli. The present findings provide evidence for distinct neural processing of different prosodic cues, and statistically confirm the intrinsic perceptual and motivational salience of paralinguistic information in spoken communication.
  • Araújo, S., Faísca, L., Reis, A., Marques, J. F., & Petersson, K. M. (2016). Visual naming deficits in dyslexia: An ERP investigation of different processing domains. Neuropsychologia, 91, 61-76. doi:10.1016/j.neuropsychologia.2016.07.007.

    Abstract

    Naming speed deficits are well documented in developmental dyslexia, expressed by slower naming times and more errors in response to familiar items. Here we used event-related potentials (ERPs) to examine at what processing level the deficits in dyslexia emerge during a discrete-naming task. Dyslexic and skilled adult control readers performed a primed object-naming task, in which the relationship between the prime and the target was manipulated along perceptual, semantic and phonological dimensions. A 3×2 design that crossed Relationship Type (Visual, Phonemic Onset, and Semantic) with Relatedness (Related and Unrelated) was used. Attenuated N/P190 (indexing early visual processing) and N300 (indexing late visual processing) components were observed to pictures preceded by perceptually related (vs. unrelated) primes in the control but not in the dyslexic group. These findings suggest suboptimal processing in early stages of object processing in dyslexia, when integration and mapping of perceptual information to a more form-specific percept in memory take place. On the other hand, both groups showed an N400 effect associated with semantically related pictures (vs. unrelated), taken to reflect intact integration of semantic similarities in both dyslexic and control readers. We also found an electrophysiological effect of phonological priming in the N400 range – an attenuated N400 to objects preceded by phonemically related vs. unrelated primes – which was more widespread and more pronounced over the right hemisphere in the dyslexics. These topographic differences between groups might originate from a word form encoding process with different characteristics in dyslexics compared to control readers.
  • Asaridou, S. S., Takashima, A., Dediu, D., Hagoort, P., & McQueen, J. M. (2016). Repetition suppression in the left inferior frontal gyrus predicts tone learning performance. Cerebral Cortex, 26(6), 2728-2742. doi:10.1093/cercor/bhv126.

    Abstract

    Do individuals differ in how efficiently they process non-native sounds? To what extent do these differences relate to individual variability in sound-learning aptitude? We addressed these questions by assessing the sound-learning abilities of Dutch native speakers as they were trained on non-native tone contrasts. We used fMRI repetition suppression to the non-native tones to measure participants' neuronal processing efficiency before and after training. Although all participants improved in tone identification with training, there was large individual variability in learning performance. A repetition suppression effect to tone was found in the bilateral inferior frontal gyri (IFGs) before training. No whole-brain effect was found after training; a region-of-interest analysis, however, showed that, after training, repetition suppression to tone in the left IFG correlated positively with learning. That is, individuals who were better in learning the non-native tones showed larger repetition suppression in this area. Crucially, this was true even before training. These findings add to existing evidence that the left IFG plays an important role in sound learning and indicate that individual differences in learning aptitude stem from differences in the neuronal efficiency with which non-native sounds are processed.
  • Backus, A., Schoffelen, J.-M., Szebényi, S., Hanslmayr, S., & Doeller, C. (2016). Hippocampal-prefrontal theta oscillations support memory integration. Current Biology, 26, 450-457. doi:10.1016/j.cub.2015.12.048.

    Abstract

    Integration of separate memories forms the basis of inferential reasoning - an essential cognitive process that enables complex behavior. Considerable evidence suggests that both hippocampus and medial prefrontal cortex (mPFC) play a crucial role in memory integration. Although previous studies indicate that theta oscillations facilitate memory processes, the electrophysiological mechanisms underlying memory integration remain elusive. To bridge this gap, we recorded magnetoencephalography data while participants performed an inference task and employed novel source reconstruction techniques to estimate oscillatory signals from the hippocampus. We found that hippocampal theta power during encoding predicts subsequent memory integration. Moreover, we observed increased theta coherence between hippocampus and mPFC. Our results suggest that integrated memory representations arise through hippocampal theta oscillations, possibly reflecting dynamic switching between encoding and retrieval states, and facilitating communication with mPFC. These findings have important implications for our understanding of memory-based decision making and knowledge acquisition.
  • Bastos, A. M., & Schoffelen, J.-M. (2016). A tutorial review of functional connectivity analysis methods and their interpretational pitfalls. Frontiers in Systems Neuroscience, 9: 175. doi:10.3389/fnsys.2015.00175.

    Abstract

    Oscillatory neuronal activity may provide a mechanism for dynamic network coordination. Rhythmic neuronal interactions can be quantified using multiple metrics, each with their own advantages and disadvantages. This tutorial will review and summarize current analysis methods used in the field of invasive and non-invasive electrophysiology to study the dynamic connections between neuronal populations. First, we review metrics for functional connectivity, including coherence, phase synchronization, phase-slope index, and Granger causality, with the specific aim to provide an intuition for how these metrics work, as well as their quantitative definition. Next, we highlight a number of interpretational caveats and common pitfalls that can arise when performing functional connectivity analysis, including the common reference problem, the signal-to-noise ratio problem, the volume conduction problem, the common input problem, and the sample size bias problem. These pitfalls will be illustrated by presenting a set of MATLAB scripts, which can be executed by the reader to simulate each of these potential problems. We discuss how these issues can be addressed using current methods.
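
    The MATLAB scripts mentioned above are not reproduced here, but the first pitfall in the list, the common reference problem, is easy to simulate. Below is a minimal Python sketch (NumPy/SciPy; an illustrative stand-in, not the authors' code) in which two independent signals become spuriously coherent once both are re-referenced to the same shared channel:

        import numpy as np
        from scipy.signal import coherence

        rng = np.random.default_rng(0)
        fs, n = 1000, 60_000  # 60 s sampled at 1 kHz

        # Two independent "source" signals and a shared reference channel.
        x = rng.standard_normal(n)
        y = rng.standard_normal(n)
        ref = rng.standard_normal(n)

        # Coherence between the raw, independent signals is near zero...
        f, c_raw = coherence(x, y, fs=fs, nperseg=1024)

        # ...but subtracting the same reference from both injects a common
        # signal, so the re-referenced channels look coherent (about 0.25
        # here, since all three signals have equal variance).
        f, c_reref = coherence(x - ref, y - ref, fs=fs, nperseg=1024)

        print(f"mean coherence, independent signals: {c_raw.mean():.3f}")
        print(f"mean coherence, common reference:    {c_reref.mean():.3f}")

    The same scaffolding extends to the other pitfalls, e.g. adding a shared drive to both signals to mimic the common input problem.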
  • Bramão, I., Reis, A., Petersson, K. M., & Faísca, L. (2016). Knowing that strawberries are red and seeing red strawberries: The interaction between surface colour and colour knowledge information. Journal of Cognitive Psychology, 28(6), 641-657. doi:10.1080/20445911.2016.1182171.

    Abstract

    This study investigates the interaction between surface and colour knowledge information during object recognition. In two different experiments, participants were instructed to decide whether two presented stimuli belonged to the same object identity. On the non-matching trials, we manipulated the shape and colour knowledge information activated by the two stimuli by creating four different stimulus pairs: (1) similar in shape and colour (e.g. TOMATO–APPLE); (2) similar in shape and dissimilar in colour (e.g. TOMATO–COCONUT); (3) dissimilar in shape and similar in colour (e.g. TOMATO–CHILI PEPPER) and (4) dissimilar in both shape and colour (e.g. TOMATO–PEANUT). The object pictures were presented in typical and atypical colours and also in black-and-white. The interaction between surface and colour knowledge proved to be contingent upon shape information: while colour knowledge is more important for recognising structurally similar shaped objects, surface colour is more prominent for recognising structurally dissimilar shaped objects.
  • Broersma, M., Carter, D., & Acheson, D. J. (2016). Cognate costs in bilingual speech production: Evidence from language switching. Frontiers in Psychology, 7: 1461. doi:10.3389/fpsyg.2016.01461.

    Abstract

    This study investigates cross-language lexical competition in the bilingual mental lexicon. It provides evidence for the occurrence of inhibition as well as the commonly reported facilitation during the production of cognates (words with similar phonological form and meaning in two languages) in a mixed picture naming task by highly proficient Welsh-English bilinguals. Previous studies have typically found cognate facilitation. It has previously been proposed (with respect to non-cognates) that cross-language inhibition is limited to low-proficient bilinguals; therefore, we tested highly proficient, early bilinguals. In a mixed naming experiment (i.e., picture naming with language switching), 48 highly proficient, early Welsh-English bilinguals named pictures in Welsh and English, including cognate and non-cognate targets. Participants were English-dominant, Welsh-dominant, or had equal language dominance. The results showed evidence for cognate inhibition in two ways. First, both facilitation and inhibition were found on the cognate trials themselves, compared to non-cognate controls, modulated by the participants' language dominance. The English-dominant group showed cognate inhibition when naming in Welsh (and no difference between cognates and controls when naming in English), and the Welsh-dominant and equal dominance groups generally showed cognate facilitation. Second, cognate inhibition was found as a behavioral adaptation effect, with slower naming for non-cognate filler words in trials after cognates than after non-cognate controls. This effect was consistent across all language dominance groups and both target languages, suggesting that cognate production involved cognitive control even if this was not measurable in the cognate trials themselves. Finally, the results replicated patterns of symmetrical switch costs, as commonly reported for balanced bilinguals. We propose that cognate processing might be affected by two different processes, namely competition at the lexical-semantic level and facilitation at the word form level, and that facilitation at the word form level might (sometimes) outweigh any effects of inhibition at the lemma level. In sum, this study provides evidence that cognate naming can cause costs in addition to benefits. The finding of cognate inhibition, particularly for the highly proficient bilinguals tested, provides strong evidence for the occurrence of lexical competition across languages in the bilingual mental lexicon.
  • Chu, M., & Kita, S. (2016). Co-thought and Co-speech Gestures Are Generated by the Same Action Generation Process. Journal of Experimental Psychology: Learning, Memory, and Cognition, 42(2), 257-270. doi:10.1037/xlm0000168.

    Abstract

    People spontaneously gesture when they speak (co-speech gestures) and when they solve problems silently (co-thought gestures). In this study, we first explored the relationship between these 2 types of gestures and found that individuals who produced co-thought gestures more frequently also produced co-speech gestures more frequently (Experiments 1 and 2). This suggests that the 2 types of gestures are generated from the same process. We then investigated whether both types of gestures can be generated from the representational use of the action generation process that also generates purposeful actions that have a direct physical impact on the world, such as manipulating an object or locomotion (the action generation hypothesis). To this end, we examined the effect of object affordances on the production of both types of gestures (Experiments 3 and 4). We found that individuals produced co-thought and co-speech gestures more often when the stimulus objects afforded action (objects with a smooth surface) than when they did not (objects with a spiky surface). These results support the action generation hypothesis for representational gestures. However, our findings are incompatible with the hypothesis that co-speech representational gestures are solely generated from the speech production process (the speech production hypothesis).
  • Dimitrova, D. V., Chu, M., Wang, L., Ozyurek, A., & Hagoort, P. (2016). Beat that word: How listeners integrate beat gesture and focus in multimodal speech discourse. Journal of Cognitive Neuroscience, 28(9), 1255-1269. doi:10.1162/jocn_a_00963.

    Abstract

    Communication is facilitated when listeners allocate their attention to important information (focus) in the message, a process called "information structure." Linguistic cues like the preceding context and pitch accent help listeners to identify focused information. In multimodal communication, relevant information can be emphasized by nonverbal cues like beat gestures, which represent rhythmic nonmeaningful hand movements. Recent studies have found that linguistic and nonverbal attention cues are integrated independently in single sentences. However, it is possible that these two cues interact when information is embedded in context, because context allows listeners to predict what information is important. In an ERP study, we tested this hypothesis and asked listeners to view videos capturing a dialogue. In the critical sentence, focused and nonfocused words were accompanied by beat gestures, grooming hand movements, or no gestures. ERP results showed that focused words are processed more attentively than nonfocused words as reflected in an N1 and P300 component. Hand movements also captured attention and elicited a P300 component. Importantly, beat gesture and focus interacted in a late time window of 600-900 msec relative to target word onset, giving rise to a late positivity when nonfocused words were accompanied by beat gestures. Our results show that listeners integrate beat gesture with the focus of the message and that integration costs arise when beat gesture falls on nonfocused information. This suggests that beat gestures fulfill a unique focusing function in multimodal discourse processing and that they have to be integrated with the information structure of the message.
  • Frank, S. L., & Fitz, H. (2016). Reservoir computing and the Sooner-is-Better bottleneck [Commentary on Christiansen & Slater]. Behavioral and Brain Sciences, 39: e73. doi:10.1017/S0140525X15000783.

    Abstract

    Prior language input is not lost but integrated with the current input. This principle is demonstrated by “reservoir computing”: Untrained recurrent neural networks project input sequences onto a random point in high-dimensional state space. Earlier inputs can be retrieved from this projection, albeit less reliably so as more input is received. The bottleneck is therefore not “Now-or-Never” but “Sooner-is-Better”.
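
    The “Sooner-is-Better” point lends itself to a toy demonstration: drive an untrained random reservoir with noise, then train only a linear readout to reconstruct the input from several steps back. A minimal NumPy sketch (network size, spectral radius, and ridge penalty are arbitrary choices for illustration, not taken from the commentary):

        import numpy as np

        rng = np.random.default_rng(1)
        n_units, n_steps = 200, 5000

        # Untrained random reservoir, rescaled to spectral radius 0.9.
        W = rng.standard_normal((n_units, n_units))
        W *= 0.9 / np.abs(np.linalg.eigvals(W)).max()
        w_in = rng.standard_normal(n_units)

        u = rng.standard_normal(n_steps)        # random input sequence
        states = np.zeros((n_steps, n_units))
        x = np.zeros(n_units)
        for t in range(n_steps):
            x = np.tanh(W @ x + w_in * u[t])    # reservoir update
            states[t] = x

        # Ridge-regression readout reconstructing the input from `lag`
        # steps back; accuracy should decay as the lag grows.
        for lag in (1, 5, 10, 20):
            X, y = states[lag:], u[:-lag]
            w = np.linalg.solve(X.T @ X + 1e-3 * np.eye(n_units), X.T @ y)
            r = np.corrcoef(X @ w, y)[0, 1]
            print(f"lag {lag:2d}: readout correlation {r:.2f}")

    The readout correlation typically falls off monotonically with lag, which is the graceful degradation the commentary describes: earlier inputs remain in the state, just harder to retrieve.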
  • Gijssels, T., Staum Casasanto, L., Jasmin, K., Hagoort, P., & Casasanto, D. (2016). Speech accommodation without priming: The case of pitch. Discourse Processes, 53(4), 233-251. doi:10.1080/0163853X.2015.1023965.

    Abstract

    People often accommodate to each other's speech by aligning their linguistic production with their partner's. According to an influential theory, the Interactive Alignment Model (Pickering & Garrod, 2004), alignment is the result of priming. When people perceive an utterance, the corresponding linguistic representations are primed, and become easier to produce. Here we tested this theory by investigating whether pitch (F0) alignment shows two characteristic signatures of priming: dose dependence and persistence. In a virtual reality experiment, we manipulated the pitch of a virtual interlocutor's speech to find out (a.) whether participants accommodated to the agent's F0, (b.) whether the amount of accommodation increased with increasing exposure to the agent's speech, and (c.) whether changes to participants' F0 persisted beyond the conversation. Participants accommodated to the virtual interlocutor, but accommodation did not increase in strength over the conversation, and it disappeared immediately after the conversation ended. Results argue against a priming-based account of F0 accommodation, and indicate that an alternative mechanism is needed to explain alignment along continuous dimensions of language such as speech rate and pitch.
  • Hartung, F., Burke, M., Hagoort, P., & Willems, R. M. (2016). Taking perspective: Personal pronouns affect experiential aspects of literary reading. PLoS One, 11(5): e0154732. doi:10.1371/journal.pone.0154732.

    Abstract

    Personal pronouns have been shown to influence cognitive perspective taking during comprehension. Studies using single sentences found that 3rd person pronouns facilitate the construction of a mental model from an observer’s perspective, whereas 2nd person pronouns support an actor’s perspective. The direction of the effect for 1st person pronouns seems to depend on the situational context. In the present study, we investigated how personal pronouns influence discourse comprehension when people read fiction stories and if this has consequences for affective components like emotion during reading or appreciation of the story. We wanted to find out if personal pronouns affect immersion and arousal, as well as appreciation of fiction. In a natural reading paradigm, we measured electrodermal activity and story immersion, while participants read literary stories with 1st and 3rd person pronouns referring to the protagonist. In addition, participants rated and ranked the stories for appreciation. Our results show that stories with 1st person pronouns lead to higher immersion. Two factors—transportation into the story world and mental imagery during reading—in particular showed higher scores for 1st person as compared to 3rd person pronoun stories. In contrast, arousal as measured by electrodermal activity seemed tentatively higher for 3rd person pronoun stories. The two measures of appreciation were not affected by the pronoun manipulation. Our findings underscore the importance of perspective for language processing, and additionally show which aspects of the narrative experience are influenced by a change in perspective.
  • Kösem, A., Basirat, A., Azizi, L., & van Wassenhove, V. (2016). High frequency neural activity predicts word parsing in ambiguous speech streams. Journal of Neurophysiology, 116(6), 2497-2512. doi:10.1152/jn.00074.2016.

    Abstract

    During speech listening, the brain parses a continuous acoustic stream of information into computational units (e.g. syllables or words) necessary for speech comprehension. Recent neuroscientific hypotheses propose that neural oscillations contribute to speech parsing, but whether they do so on the basis of acoustic cues (bottom-up acoustic parsing) or as a function of available linguistic representations (top-down linguistic parsing) is unknown. In this magnetoencephalography study, we contrasted acoustic and linguistic parsing using bistable speech sequences. While listening to the speech sequences, participants were asked to maintain one of the two possible speech percepts through volitional control. We predicted that the tracking of speech dynamics by neural oscillations would not only follow the acoustic properties but also shift in time according to the participant’s conscious speech percept. Our results show that the latency of high-frequency activity (specifically, beta and gamma bands) varied as a function of the perceptual report. In contrast, the phase of low-frequency oscillations was not strongly affected by top-down control. While changes in low-frequency neural oscillations were compatible with the encoding of pre-lexical segmentation cues, high-frequency activity specifically informed on an individual’s conscious speech percept.

  • Kunert, R., Willems, R. M., & Hagoort, P. (2016). An independent psychometric evaluation of the PROMS measure of music perception skills. PLoS One, 11(7): e0159103. doi:10.1371/journal.pone.0159103.

    Abstract

    The Profile of Music Perception Skills (PROMS) is a recently developed measure of perceptual music skills which has been shown to have promising psychometric properties. In this paper we extend the evaluation of its brief version to three kinds of validity using an individual difference approach. The brief PROMS displays good discriminant validity with working memory, given that it does not correlate with backward digit span (r = .04). Moreover, it shows promising criterion validity (association with musical training (r = .45), musicianship status (r = .48), and self-rated musical talent (r = .51)). Finally, its convergent validity, i.e., its relation to an independent measure of music perception skills, was assessed by correlating the brief PROMS with harmonic closure judgment accuracy. Two independent samples point to good convergent validity of the brief PROMS (r = .36; r = .40). The same association is still significant in one of the samples when including self-reported music skill in a partial correlation (rpartial = .30; rpartial = .17). Overall, the results show that the brief version of the PROMS displays a very good pattern of construct validity. Especially its tuning subtest stands out as a valuable part for music skill evaluations in Western samples. We conclude by briefly discussing the choice faced by music cognition researchers between different musical aptitude measures, of which the brief PROMS is a well-evaluated example.
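
    For readers unfamiliar with the partial-correlation check reported above, here is a minimal NumPy sketch on simulated data (the variable names and effect sizes are invented for illustration, not taken from the study):

        import numpy as np

        def partial_corr(x, y, z):
            """Correlation of x and y after regressing covariate z out of both."""
            Z = np.column_stack([np.ones_like(z), z])
            rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
            ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
            return np.corrcoef(rx, ry)[0, 1]

        rng = np.random.default_rng(4)
        skill = rng.standard_normal(100)     # self-reported music skill
        proms = 0.6 * skill + rng.standard_normal(100)
        closure = 0.6 * skill + rng.standard_normal(100)

        # The zero-order correlation is inflated by the shared covariate;
        # the partial correlation removes that shared component.
        print("zero-order r:", np.corrcoef(proms, closure)[0, 1])
        print("partial r   :", partial_corr(proms, closure, skill))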
  • Kunert, R., Willems, R. M., & Hagoort, P. (2016). Language influences music harmony perception: effects of shared syntactic integration resources beyond attention. Royal Society Open Science, 3(2): 150685. doi:10.1098/rsos.150685.

    Abstract

    Many studies have revealed shared music–language processing resources by finding an influence of music harmony manipulations on concurrent language processing. However, the nature of the shared resources has remained ambiguous. They have been argued to be syntax specific and thus due to shared syntactic integration resources. An alternative view regards them as related to general attention and, thus, not specific to syntax. The present experiments evaluated these accounts by investigating the influence of language on music. Participants were asked to provide closure judgements on harmonic sequences in order to assess the appropriateness of sequence endings. At the same time participants read syntactic garden-path sentences. Closure judgements revealed a change in harmonic processing as the result of reading a syntactically challenging word. We found no influence of an arithmetic control manipulation (experiment 1) or semantic garden-path sentences (experiment 2). Our results provide behavioural evidence for a specific influence of linguistic syntax processing on musical harmony judgements. A closer look reveals that the shared resources appear to be needed to hold a harmonic key online in some form of syntactic working memory or unification workspace related to the integration of chords and words. Overall, our results support the syntax specificity of shared music–language processing resources.
  • Kunert, R. (2016). Internal conceptual replications do not increase independent replication success. Psychonomic Bulletin & Review, 23(5), 1631-1638. doi:10.3758/s13423-016-1030-9.

    Abstract

    Recently, many psychological effects have been surprisingly difficult to reproduce. This article asks why, and investigates whether conceptually replicating an effect in the original publication is related to the success of independent, direct replications. Two prominent accounts of low reproducibility make different predictions in this respect. One account suggests that psychological phenomena are dependent on unknown contexts that are not reproduced in independent replication attempts. By this account, internal replications indicate that a finding is more robust and, thus, that it is easier to independently replicate it. An alternative account suggests that researchers employ questionable research practices (QRPs), which increase false positive rates. By this account, the success of internal replications may just be the result of QRPs and, thus, internal replications are not predictive of independent replication success. The data of a large reproducibility project support the QRP account: replicating an effect in the original publication is not related to independent replication success. Additional analyses reveal that internally replicated and internally unreplicated effects are not very different in terms of variables associated with replication success. Moreover, social psychological effects in particular appear to lack any benefit from internal replications. Overall, these results indicate that, in this dataset at least, the influence of QRPs is at the heart of failures to replicate psychological findings, especially in social psychology. Variable, unknown contexts appear to play only a relatively minor role. I recommend practical solutions for how QRPs can be avoided.

    Additional information

    13423_2016_1030_MOESM1_ESM.pdf
  • Lai, V. T., & Huettig, F. (2016). When prediction is fulfilled: Insight from emotion processing. Neuropsychologia, 85, 110-117. doi:10.1016/j.neuropsychologia.2016.03.014.

    Abstract

    Research on prediction in language processing has focused predominantly on the function of predictive context and less on the potential contribution of the predicted word. The present study investigated how meaning that is not immediately prominent in the contents of predictions but is part of the predicted words influences sentence processing. We used emotional meaning to address this question. Participants read emotional and neutral words embedded in highly predictive and non-predictive sentential contexts, with the two sentential contexts rated similarly for emotional content. Event-related potential (ERP) effects of prediction and emotion both started at ~200 ms. Confirmed predictions elicited larger P200s than violated predictions when the target words were non-emotional (neutral), but this effect was absent when the target words were emotional. Likewise, emotional words elicited larger P200s than neutral words when the target words were non-predictive, but this effect was absent when the contexts were predictive. We conjecture that the prediction and emotion effects at ~200 ms may share similar neural process(es). We suggest that such process(es) could be affective, where confirmed predictions and word emotion give rise to ‘aha’ or reward feelings, and/or cognitive, where both prediction and word emotion quickly engage attention.

    Additional information

    Lai_Huettig_2016_supp.xlsx
  • Lam, N. H. L., Schoffelen, J.-M., Udden, J., Hulten, A., & Hagoort, P. (2016). Neural activity during sentence processing as reflected in theta, alpha, beta and gamma oscillations. NeuroImage, 142(15), 43-54. doi:10.1016/j.neuroimage.2016.03.007.

    Abstract

    We used magnetoencephalography (MEG) to explore the spatio-temporal dynamics of neural oscillations associated with sentence processing, in 102 participants. We quantified changes in oscillatory power as the sentence unfolded, and in response to individual words in the sentence. For words early in a sentence compared to those late in the same sentence, we observed differences in left temporal and frontal areas, and bilateral frontal and right parietal regions for the theta, alpha, and beta frequency bands. The neural response to words in a sentence differed from the response to words in scrambled sentences in left-lateralized theta, alpha, beta, and gamma oscillations. The theta band effects suggest that a sentential context facilitates lexical retrieval, and that this facilitation is stronger for words late in the sentence. Effects in the alpha and beta band may reflect the unification of semantic and syntactic information, and are suggestive of easier unification late in a sentence. The gamma oscillations are indicative of predicting the upcoming word during sentence processing. In conclusion, changes in oscillatory neuronal activity capture aspects of sentence processing. Our results support earlier claims that language (sentence) processing recruits areas distributed across both hemispheres, and extends beyond the classical language regions.
  • Leonard, M., Baud, M., Sjerps, M. J., & Chang, E. (2016). Perceptual restoration of masked speech in human cortex. Nature Communications, 7: 13619. doi:10.1038/ncomms13619.

    Abstract

    Humans are adept at understanding speech despite the fact that our natural listening environment is often filled with interference. An example of this capacity is phoneme restoration, in which part of a word is completely replaced by noise, yet listeners report hearing the whole word. The neurological basis for this unconscious fill-in phenomenon is unknown, despite being a fundamental characteristic of human hearing. Here, using direct cortical recordings in humans, we demonstrate that missing speech is restored at the acoustic-phonetic level in bilateral auditory cortex, in real-time. This restoration is preceded by specific neural activity patterns in a separate language area, left frontal cortex, which predicts the word that participants later report hearing. These results demonstrate that during speech perception, missing acoustic content is synthesized online from the integration of incoming sensory cues and the internal neural dynamics that bias word-level expectation and prediction.

    Additional information

    ncomms13619-s1.pdf
  • Lewis, A. G., Schoffelen, J.-M., Schriefers, H., & Bastiaansen, M. C. M. (2016). A Predictive Coding Perspective on Beta Oscillations during Sentence-Level Language Comprehension. Frontiers in Human Neuroscience, 10: 85. doi:10.3389/fnhum.2016.00085.

    Abstract

    Oscillatory neural dynamics have been steadily receiving more attention as a robust and temporally precise signature of network activity related to language processing. We have recently proposed that oscillatory dynamics in the beta and gamma frequency ranges measured during sentence-level comprehension might be best explained from a predictive coding perspective. Under our proposal we related beta oscillations to both the maintenance/change of the neural network configuration responsible for the construction and representation of sentence-level meaning, and to top–down predictions about upcoming linguistic input based on that sentence-level meaning. Here we zoom in on these particular aspects of our proposal, and discuss both old and new supporting evidence. Finally, we present some preliminary magnetoencephalography data from an experiment comparing Dutch subject- and object-relative clauses that was specifically designed to test our predictive coding framework. Initial results support the first of the two suggested roles for beta oscillations in sentence-level language comprehension.
  • Lewis, A. G., Lemhӧfer, K., Schoffelen, J.-M., & Schriefers, H. (2016). Gender agreement violations modulate beta oscillatory dynamics during sentence comprehension: A comparison of second language learners and native speakers. Neuropsychologia, 89(1), 254-272. doi:10.1016/j.neuropsychologia.2016.06.031.

    Abstract

    For native speakers, many studies suggest a link between oscillatory neural activity in the beta frequency range and syntactic processing. For late second language (L2) learners on the other hand, the extent to which the neural architecture supporting syntactic processing is similar to or different from that of native speakers is still unclear. In a series of four experiments, we used electroencephalography to investigate the link between beta oscillatory activity and the processing of grammatical gender agreement in Dutch determiner-noun pairs, for Dutch native speakers, and for German L2 learners of Dutch. In Experiment 1 we show that for native speakers, grammatical gender agreement violations are yet another among many syntactic factors that modulate beta oscillatory activity during sentence comprehension. Beta power is higher for grammatically acceptable target words than for those that mismatch in grammatical gender with their preceding determiner. In Experiment 2 we observed no such beta modulations for L2 learners, irrespective of whether trials were sorted according to objective or subjective syntactic correctness. Experiment 3 ruled out that the absence of a beta effect for the L2 learners in Experiment 2 was due to repetition of the target nouns in objectively correct and incorrect determiner-noun pairs. Finally, Experiment 4 showed that when L2 learners are required to explicitly focus on grammatical information, they show modulations of beta oscillatory activity, comparable to those of native speakers, but only when trials are sorted according to participants’ idiosyncratic lexical representations of the grammatical gender of target nouns. Together, these findings suggest that beta power in L2 learners is sensitive to violations of grammatical gender agreement, but only when the importance of grammatical information is highlighted, and only when participants' subjective lexical representations are taken into account.
  • Lockwood, G. (2016). Academic clickbait: Articles with positively-framed titles, interesting phrasing, and no wordplay get more attention online. The Winnower, 3: e146723.36330. doi:10.15200/winn.146723.36330.

    Abstract

    This article is about whether the factors which drive online sharing of non-scholarly content also apply to academic journal titles. It uses Altmetric scores as a measure of online attention to articles from Frontiers in Psychology published in 2013 and 2014. Article titles with result-oriented positive framing and more interesting phrasing receive higher Altmetric scores, i.e., get more online attention. Article titles with wordplay and longer article titles receive lower Altmetric scores. This suggests that the same factors that affect how widely non-scholarly content is shared extend to academia, which has implications for how academics can increase the online impact of their work.
  • Lockwood, G., Hagoort, P., & Dingemanse, M. (2016). How iconicity helps people learn new words: neural correlates and individual differences in sound-symbolic bootstrapping. Collabra, 2(1): 7. doi:10.1525/collabra.42.

    Abstract

    Sound symbolism is increasingly understood as involving iconicity, or perceptual analogies and cross-modal correspondences between form and meaning, but the search for its functional and neural correlates is ongoing. Here we study how people learn sound-symbolic words, using behavioural, electrophysiological and individual difference measures. Dutch participants learned Japanese ideophones (lexical sound-symbolic words) with a translation of either the real meaning (in which form and meaning show cross-modal correspondences) or the opposite meaning (in which form and meaning show cross-modal clashes). Participants were significantly better at identifying the words they learned in the real condition, correctly remembering the real word pairing 86.7% of the time, but the opposite word pairing only 71.3% of the time. Analysing event-related potentials (ERPs) during the test round showed that ideophones in the real condition elicited a greater P3 component and late positive complex than ideophones in the opposite condition. In a subsequent forced choice task, participants were asked to guess the real translation from two alternatives. They did this with 73.0% accuracy, well above chance level even for words they had encountered in the opposite condition, showing that people are generally sensitive to the sound-symbolic cues in ideophones. Individual difference measures showed that the ERP effect in the test round of the learning task was greater for participants who were more sensitive to sound symbolism in the forced choice task. The main driver of the difference was a lower amplitude of the P3 component in response to ideophones in the opposite condition, suggesting that people who are more sensitive to sound symbolism may find it harder to suppress conflicting cross-modal information. The findings provide new evidence that cross-modal correspondences between sound and meaning facilitate word learning, while cross-modal clashes make word learning harder, especially for people who are more sensitive to sound symbolism.

    Additional information

    https://osf.io/ema3t/
  • Lockwood, G., Dingemanse, M., & Hagoort, P. (2016). Sound-symbolism boosts novel word learning. Journal of Experimental Psychology: Learning, Memory, and Cognition, 42(8), 1274-1281. doi:10.1037/xlm0000235.

    Abstract

    The existence of sound-symbolism (or a non-arbitrary link between form and meaning) is well-attested. However, sound-symbolism has mostly been investigated with nonwords in forced choice tasks, neither of which are representative of natural language. This study uses ideophones, which are naturally occurring sound-symbolic words that depict sensory information, to investigate how sensitive Dutch speakers are to sound-symbolism in Japanese in a learning task. Participants were taught 2 sets of Japanese ideophones; 1 set with the ideophones’ real meanings in Dutch, the other set with their opposite meanings. In Experiment 1, participants learned the ideophones and their real meanings much better than the ideophones with their opposite meanings. Moreover, despite the learning rounds, participants were still able to guess the real meanings of the ideophones in a 2-alternative forced-choice test after they were informed of the manipulation. This shows that natural language sound-symbolism is robust beyond 2-alternative forced-choice paradigms and affects broader language processes such as word learning. In Experiment 2, participants learned regular Japanese adjectives with the same manipulation, and there was no difference between real and opposite conditions. This shows that natural language sound-symbolism is especially strong in ideophones, and that people learn words better when form and meaning match. The highlights of this study are as follows: (a) Dutch speakers learn real meanings of Japanese ideophones better than opposite meanings, (b) Dutch speakers accurately guess meanings of Japanese ideophones, (c) this sensitivity happens despite learning some opposite pairings, (d) no such learning effect exists for regular Japanese adjectives, and (e) this shows the importance of sound-symbolism in scaffolding language learning.
  • Michalareas, G., Vezoli, J., Van Pelt, S., Schoffelen, J.-M., Kennedy, H., & Fries, P. (2016). Alpha-Beta and Gamma Rhythms Subserve Feedback and Feedforward Influences among Human Visual Cortical Areas. Neuron, 89(2), 384-397. doi:10.1016/j.neuron.2015.12.018.

    Abstract

    Primate visual cortex is hierarchically organized. Bottom-up and top-down influences are exerted through distinct frequency channels, as was recently revealed in macaques by correlating inter-areal influences with laminar anatomical projection patterns. Because this anatomical data cannot be obtained in human subjects, we selected seven homologous macaque and human visual areas, and we correlated the macaque laminar projection patterns to human inter-areal directed influences as measured with magnetoencephalography. We show that influences along feedforward projections predominate in the gamma band, whereas influences along feedback projections predominate in the alpha-beta band. Rhythmic inter-areal influences constrain a functional hierarchy of the seven homologous human visual areas that is in close agreement with the respective macaque anatomical hierarchy. Rhythmic influences allow an extension of the hierarchy to 26 human visual areas including uniquely human brain areas. Hierarchical levels of ventral- and dorsal-stream visual areas are differentially affected by inter-areal influences in the alpha-beta band.
  • Peeters, D., & Ozyurek, A. (2016). This and that revisited: A social and multimodal approach to spatial demonstratives. Frontiers in Psychology, 7: 222. doi:10.3389/fpsyg.2016.00222.
  • Poletiek, F. H., Fitz, H., & Bocanegra, B. R. (2016). What baboons can (not) tell us about natural language grammars. Cognition, 151, 108-112. doi:10.1016/j.cognition.2015.04.016.

    Abstract

    Rey et al. (2012) present data from a study with baboons that they interpret in support of the idea that center-embedded structures in human language have their origin in low-level memory mechanisms and associative learning. Critically, the authors claim that the baboons showed a behavioral preference that is consistent with center-embedded sequences over other types of sequences. We argue that the baboons’ response patterns suggest that two mechanisms are involved: first, they can be trained to associate a particular response with a particular stimulus, and, second, when faced with two conditioned stimuli in a row, they respond to the most recent one first, copying behavior they had been rewarded for during training. Although Rey et al.’s (2012) experiment shows that the baboons’ behavior is driven by low-level mechanisms, it is not clear how the reported animal behavior bears on the phenomenon of center-embedded structures in human syntax. Hence, (1) natural language syntax may indeed have been shaped by low-level mechanisms, and (2) the baboons’ behavior is driven by low-level stimulus-response learning, as Rey et al. propose. But is the second evidence for the first? We discuss in what ways this study can and cannot provide evidence for explaining the origin of center-embedded recursion in human grammar. More generally, their study provokes an interesting reflection on the use of animal studies to understand features of the human linguistic system.
  • Schoot, L., Heyselaar, E., Hagoort, P., & Segaert, K. (2016). Does syntactic alignment effectively influence how speakers are perceived by their conversation partner? PLoS One, 11(4): e0153521. doi:10.1371/journal.pone.0153521.

    Abstract

    The way we talk can influence how we are perceived by others. Whereas previous studies have started to explore the influence of social goals on syntactic alignment, in the current study, we additionally investigated whether syntactic alignment effectively influences conversation partners’ perception of the speaker. To this end, we developed a novel paradigm in which we can measure the effect of social goals on the strength of syntactic alignment for one participant (primed participant), while simultaneously obtaining usable social opinions about them from their conversation partner (the evaluator). In Study 1, participants’ desire to be rated favorably by their partner was manipulated by assigning pairs to a Control (i.e., primed participants did not know they were being evaluated) or Evaluation context (i.e., primed participants knew they were being evaluated). Surprisingly, results showed no significant difference in the strength with which primed participants aligned their syntactic choices with their partners’ choices. In a follow-up study, we used a Directed Evaluation context (i.e., primed participants knew they were being evaluated and were explicitly instructed to make a positive impression). However, again, there was no evidence supporting the hypothesis that participants’ desire to impress their partner influences syntactic alignment. With respect to the influence of syntactic alignment on perceived likeability by the evaluator, a negative relationship was reported in Study 1: the more primed participants aligned their syntactic choices with their partner, the more that partner decreased their likeability rating after the experiment. However, this effect was not replicated in the Directed Evaluation context of Study 2. In other words, our results do not support the conclusion that speakers’ desire to be liked affects how much they align their syntactic choices with their partner, nor is there convincing evidence that there is a reliable relationship between syntactic alignment and perceived likeability.

    Additional information

    Data availability
  • Schoot, L., Hagoort, P., & Segaert, K. (2016). What can we learn from a two-brain approach to verbal interaction? Neuroscience and Biobehavioral Reviews, 68, 454-459. doi:10.1016/j.neubiorev.2016.06.009.

    Abstract

    Verbal interaction is one of the most frequent social interactions humans encounter on a daily basis. In the current paper, we zoom in on what the multi-brain approach has contributed, and can contribute in the future, to our understanding of the neural mechanisms supporting verbal interaction. Indeed, since verbal interaction can only exist between individuals, it seems intuitive to focus analyses on inter-individual neural markers, i.e. between-brain neural coupling. To date, however, there is a severe lack of theoretically-driven, testable hypotheses about what between-brain neural coupling actually reflects. In this paper, we develop a testable hypothesis in which between-pair variation in between-brain neural coupling is of key importance. Based on theoretical frameworks and empirical data, we argue that the level of between-brain neural coupling reflects speaker-listener alignment at different levels of linguistic and extra-linguistic representation. We discuss the possibility that between-brain neural coupling could inform us about the highest level of inter-speaker alignment: mutual understanding.
  • Segaert, K., Wheeldon, L., & Hagoort, P. (2016). Unifying structural priming effects on syntactic choices and timing of sentence generation. Journal of Memory and Language, 91, 59-80. doi:10.1016/j.jml.2016.03.011.

    Abstract

    We investigated whether structural priming of production latencies is sensitive to the same factors known to influence persistence of structural choices: structure preference, cumulativity, and verb repetition. In two experiments, we found structural persistence only for passives (inverse preference effect), while priming effects on latencies were stronger for actives (positive preference effect). Structural persistence for passives was influenced by immediate primes and long-lasting cumulativity (all preceding primes) (Experiment 1), and was boosted by verb repetition (Experiment 2). In latencies, effects for actives were sensitive to long-lasting cumulativity (Experiment 1). In Experiment 2, we found priming in latencies for actives overall, while for passives the priming effects emerged as cumulative exposure increased, but only when also aided by verb repetition. These findings are consistent with the Two-stage Competition model, an integrated model of structural priming effects on sentence choice and latency.
  • Shitova, N., Roelofs, A., Schriefers, H., Bastiaansen, M., & Schoffelen, J.-M. (2016). Using Brain Potentials to Functionally Localise Stroop-Like Effects in Colour and Picture Naming: Perceptual Encoding versus Word Planning. PLoS One, 11(9): e0161052. doi:10.1371/journal.pone.0161052.

    Abstract

    The colour-word Stroop task and the picture-word interference task (PWI) have been used extensively to study the functional processes underlying spoken word production. One of the consistent behavioural effects in both tasks is the Stroop-like effect: The reaction time (RT) is longer on incongruent trials than on congruent trials. The effect in the Stroop task is usually linked to word planning, whereas the effect in the PWI task is associated with either word planning or perceptual encoding. To adjudicate between the word planning and perceptual encoding accounts of the effect in PWI, we conducted an EEG experiment consisting of three tasks: a standard colour-word Stroop task (three colours), a standard PWI task (39 pictures), and a Stroop-like version of the PWI task (three pictures). Participants overtly named the colours and pictures while their EEG was recorded. A Stroop-like effect in RTs was observed in all three tasks. ERPs at centro-parietal sensors started to deflect negatively for incongruent relative to congruent stimuli around 350 ms after stimulus onset for the Stroop, Stroop-like PWI, and the Standard PWI tasks: an N400 effect. No early differences were found in the PWI tasks. The onset of the Stroop-like effect at about 350 ms in all three tasks links the effect to word planning rather than perceptual encoding, which has been estimated in the literature to be finished around 200–250 ms after stimulus onset. We conclude that the Stroop-like effect arises during word planning in both Stroop and PWI.
  • Silva, S., Reis, A., Casaca, L., Petersson, K. M., & Faísca, L. (2016). When the eyes no longer lead: Familiarity and length effects on eye-voice span. Frontiers in Psychology, 7: 1720. doi:10.3389/fpsyg.2016.01720.

    Abstract

    During oral reading, the eyes tend to be ahead of the voice (eye-voice span, EVS). It has been hypothesized that the extent to which this happens depends on the automaticity of reading processes, namely on the speed of print-to-sound conversion. We tested whether EVS is affected by another automaticity component – immunity from interference. To that end, we manipulated word familiarity (high-frequency words, low-frequency words, and pseudowords, PWs) and word length as proxies of immunity from interference, and we used linear mixed effects models to measure the effects of both variables on the time interval at which readers do parallel processing by gazing at word N + 1 while not yet having articulated word N (offset EVS). Parallel processing was enhanced by automaticity, as shown by familiarity × length interactions on offset EVS, and it was impeded by lack of automaticity, as shown by the transformation of offset EVS into voice-eye span (voice ahead of the offset of the eyes) in PWs. The relation between parallel processing and automaticity was strengthened by the fact that offset EVS predicted reading velocity. Our findings contribute to understanding how the offset EVS, an index that is obtained in oral reading, may tap into different components of automaticity that underlie reading ability, oral or silent. In addition, we compared the duration of the offset EVS with the average reference duration of stages in word production, and we saw that the offset EVS may accommodate more than the articulatory programming stage of word N.
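
    The familiarity-by-length test described above maps onto a standard mixed-model specification. A hypothetical sketch using statsmodels on simulated data (column names, group structure, and effect sizes are all invented; this is not the authors' analysis code):

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(3)
        n = 600  # simulated trials

        # Hypothetical data: offset EVS shrinks for long, unfamiliar items.
        df = pd.DataFrame({
            "subject": rng.integers(0, 30, n).astype(str),
            "familiarity": rng.choice(["high", "low", "pseudo"], n),
            "length": rng.integers(4, 10, n),
        })
        slope = df["familiarity"].map({"high": -2.0, "low": -6.0, "pseudo": -12.0})
        df["offset_evs"] = 120 + slope * df["length"] + rng.normal(0, 15, n)

        # Random intercept per participant; fixed familiarity x length terms.
        model = smf.mixedlm("offset_evs ~ familiarity * length", df,
                            groups=df["subject"])
        print(model.fit().summary())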
  • Silva, S., Faísca, L., Araújo, S., Casaca, L., Carvalho, L., Petersson, K. M., & Reis, A. (2016). Too little or too much? Parafoveal preview benefits and parafoveal load costs in dyslexic adults. Annals of Dyslexia, 66(2), 187-201. doi:10.1007/s11881-015-0113-z.

    Abstract

    Two different forms of parafoveal dysfunction have been hypothesized as core deficits of dyslexic individuals: reduced parafoveal preview benefits (“too little parafovea”) and increased costs of parafoveal load (“too much parafovea”). We tested both hypotheses in a single eye-tracking experiment using a modified serial rapid automatized naming (RAN) task. Comparisons between dyslexic and non-dyslexic adults showed reduced parafoveal preview benefits in dyslexics, without increased costs of parafoveal load. Reduced parafoveal preview benefits were observed in a naming task, but not in a silent letter-finding task, indicating that the parafoveal dysfunction may be consequent to the overload with extracting phonological information from orthographic input. Our results suggest that dyslexics’ parafoveal dysfunction is not based on strict visuo-attentional factors, but nevertheless they stress the importance of extra-phonological processing. Furthermore, evidence of reduced parafoveal preview benefits in dyslexia may help understand why serial RAN is an important reading predictor in adulthood.
  • Takashima, A., Hulzink, I., Wagensveld, B., & Verhoeven, L. (2016). Emergence of representations through repeated training on pronouncing novel letter combinations leads to efficient reading. Neuropsychologia, 89, 14-30. doi:10.1016/j.neuropsychologia.2016.05.014.

    Abstract

    Printed text can be decoded by utilizing different processing routes depending on the familiarity of the script. A predominant use of word-level decoding strategies can be expected in the case of a familiar script, and an almost exclusive use of letter-level decoding strategies for unfamiliar scripts. Behavioural studies have revealed that frequently occurring words are read more efficiently than infrequent and unfamiliar words, suggesting that these words are read in a more holistic way at the word level. To test whether repeated exposure to specific letter combinations leads to holistic reading, we monitored both behavioural and neural responses during novel script decoding and examined changes related to repeated exposure. We trained a group of Dutch university students to decode pseudowords written in an unfamiliar script, i.e., Korean Hangul characters. We compared behavioural and neural responses to pronouncing trained versus untrained two-character pseudowords (equivalent to two-syllable pseudowords). We tested once shortly after the initial training and again after a four-day delay that included another training session. We found that trained pseudowords were pronounced faster and more accurately than novel combinations of radicals (equivalent to letters). Imaging data revealed that pronunciation of trained pseudowords engaged the posterior temporo-parietal region, and engagement of this network was predictive of reading efficiency a month later. The results imply that repeated exposure to specific combinations of graphemes can lead to emergence of holistic representations that result in efficient reading. Furthermore, inter-individual differences revealed that good learners retained efficiency more than bad learners one month later.

    Additional information

    mmc1.docx
  • Takashima, A., Van de Ven, F., Kroes, M. C. W., & Fernández, G. (2016). Retrieved emotional context influences hippocampal involvement during recognition of neutral memories. NeuroImage, 143, 280-292. doi:10.1016/j.neuroimage.2016.08.069.

    Abstract

    It is well documented that emotionally arousing experiences are better remembered than mundane events. This is thought to occur through hippocampus-amygdala crosstalk during encoding, consolidation, and retrieval. Here we investigated whether emotional events (context) also cause a memory benefit for simultaneously encoded non-arousing contents and whether this effect persists after a delay via recruitment of a similar hippocampus-amygdala network. Participants studied neutral pictures (content) encoded together with either an arousing or a neutral sound (that served as context) in two study sessions three days apart. Memory was tested in a functional magnetic resonance imaging (fMRI) scanner directly after the second study session. Pictures recognised with high confidence were more often thought to have been associated with an arousing than with a neutral context, irrespective of the veridical source memory. If the retrieved context was arousing, an area in the hippocampus adjacent to the amygdala exhibited heightened activation and this area increased functional connectivity with the parahippocampal gyrus, an area known to process pictures of scenes. These findings suggest that memories can be shaped by the retrieval act. Memory structures may be recruited to a higher degree when an arousing context is retrieved, and this may give rise to confident judgments of recognition for neutral pictures even after a delay.
  • Thalmeier, D., Uhlmann, M., Kappen, H. J., & Memmesheimer, R.-M. (2016). Learning Universal Computations with Spikes. PLoS Computational Biology, 12(6): e1004895. doi:10.1371/journal.pcbi.1004895.

    Abstract

    Providing the neurobiological basis of information processing in higher animals, spiking neural networks must be able to learn a variety of complicated computations, including the generation of appropriate, possibly delayed reactions to inputs and the self-sustained generation of complex activity patterns, e.g., for locomotion. Many such computations require prior building of intrinsic world models. Here we show how spiking neural networks may solve these different tasks. Firstly, we derive constraints under which classes of spiking neural networks lend themselves as substrates for powerful general-purpose computing. The networks contain dendritic or synaptic nonlinearities and have a constrained connectivity. We then combine such networks with learning rules for outputs or recurrent connections. We show that this allows the networks to learn even difficult benchmark tasks such as the self-sustained generation of desired low-dimensional chaotic dynamics or memory-dependent computations. Furthermore, we show how spiking networks can build models of external world systems and use the acquired knowledge to control them.
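    For readers unfamiliar with readout learning, the generic recipe the abstract builds on can be illustrated with a deliberately simplified stand-in. The sketch below uses a conventional rate-based echo-state network with a ridge-regression readout, not the spiking framework derived by Thalmeier et al.; the network size, spectral radius, target pattern, and all other parameter values are illustrative assumptions. It shows only the core idea: train the output weights so that, with its own output fed back, the network self-sustains a target pattern.

```python
# Minimal sketch: readout learning for self-sustained pattern generation.
# NOTE: this is a rate-based echo-state network, NOT the spiking framework
# of Thalmeier et al. (2016); all parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(0)
N, T_train, T_free, washout = 300, 2000, 500, 200
target = np.sin(2 * np.pi * np.arange(T_train + T_free) / 50)  # pattern to self-sustain

# Random recurrent weights, scaled just below the edge of stability.
W = rng.normal(0.0, 1.0, (N, N)) / np.sqrt(N)
W *= 0.95 / np.max(np.abs(np.linalg.eigvals(W)))
w_fb = rng.uniform(-1.0, 1.0, N)  # output-feedback weights

# Teacher forcing: drive the network with the target and record its states.
x, y_prev = np.zeros(N), 0.0
states = np.empty((T_train, N))
for t in range(T_train):
    x = np.tanh(W @ x + w_fb * y_prev)
    states[t] = x
    y_prev = target[t]

# Train the linear readout by ridge regression (discarding the washout).
X, Y = states[washout:], target[washout:T_train]
w_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ Y)

# Closed loop: feed the network's own output back; if learning worked,
# the oscillation continues without any external drive.
y, err = target[T_train - 1], []
for t in range(T_free):
    x = np.tanh(W @ x + w_fb * y)
    y = w_out @ x
    err.append((y - target[T_train + t]) ** 2)
print(f"closed-loop MSE over {T_free} free-running steps: {np.mean(err):.2e}")
```

    In the paper itself, the analogous step is deriving when spiking networks with dendritic or synaptic nonlinearities admit such trainable readouts and recurrent learning rules.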
  • Tromp, J., Hagoort, P., & Meyer, A. S. (2016). Pupillometry reveals increased pupil size during indirect request comprehension. Quarterly Journal of Experimental Psychology, 69, 1093-1108. doi:10.1080/17470218.2015.1065282.

    Abstract

    Fluctuations in pupil size have been shown to reflect variations in processing demands during lexical and syntactic processing in language comprehension. An issue that has not received attention is whether pupil size also varies due to pragmatic manipulations. In two pupillometry experiments, we investigated whether pupil diameter was sensitive to increased processing demands as a result of comprehending an indirect request versus a direct statement. Adult participants were presented with 120 picture–sentence combinations that could be interpreted either as an indirect request (a picture of a window with the sentence “it's very hot here”) or as a statement (a picture of a window with the sentence “it's very nice here”). Based on the hypothesis that understanding indirect utterances requires additional inferences to be made on the part of the listener, we predicted a larger pupil diameter for indirect requests than for statements. The results of both experiments are consistent with this expectation. We suggest that the increase in pupil size reflects additional processing demands for the comprehension of indirect requests as compared to statements. This research demonstrates the usefulness of pupillometry as a tool for experimental research in pragmatics.
  • Van den Hoven, E., Hartung, F., Burke, M., & Willems, R. M. (2016). Individual differences in sensitivity to style during literary reading: Insights from eye-tracking. Collabra, 2(1): 25, pp. 1-16. doi:10.1525/collabra.39.

    Abstract

    Style is an important aspect of literature, and stylistic deviations are sometimes labeled foregrounded, since their manner of expression deviates from the stylistic default. Russian Formalists have claimed that foregrounding increases processing demands and therefore causes slower reading – an effect called retardation. We tested this claim experimentally by having participants read short literary stories while measuring their eye movements. Our results confirm that readers indeed read slower and make more regressions towards foregrounded passages as compared to passages that are not foregrounded. A closer look, however, reveals significant individual differences in sensitivity to foregrounding. Some readers in fact do not slow down at all when reading foregrounded passages. The slowing down effect for literariness was related to a slowing down effect for high perplexity (unexpected) words: those readers who slowed down more during literary passages also slowed down more during high perplexity words, even though no correlation between literariness and perplexity existed in the stories. We conclude that individual differences play a major role in processing of literary texts and argue for accounts of literary reading that focus on the interplay between reader and text.
  • Van den Broek, G., Takashima, A., Wiklund-Hörnqvist, C., Karlsson Wirebring, L., Segers, E., Verhoeven, L., & Nyberg, L. (2016). Neurocognitive mechanisms of the “testing effect”: A review. Trends in Neuroscience and Education, 5(2), 52-66. doi:10.1016/j.tine.2016.05.001.

    Abstract

    Memory retrieval is an active process that can alter the content and accessibility of stored memories. Of potential relevance for educational practice are findings that memory retrieval fosters better retention than mere studying. This so-called testing effect has been demonstrated for different materials and populations, but there is limited consensus on the neurocognitive mechanisms involved. In this review, we relate cognitive accounts of the testing effect to findings from recent brain-imaging studies to identify neurocognitive factors that could explain the testing effect. Results indicate that testing facilitates later performance through several processes, including effects on semantic memory representations, the selective strengthening of relevant associations and inhibition of irrelevant associations, as well as potentiation of subsequent learning.
  • Van der Ven, F., Takashima, A., Segers, E., Fernández, G., & Verhoeven, L. (2016). Non-symbolic and symbolic notation in simple arithmetic differentially involve intraparietal sulcus and angular gyrus activity. Brain Research, 1643, 91-102.

    Abstract

    Addition problems can be solved by mentally manipulating quantities, for which the bilateral intraparietal sulcus (IPS) is likely recruited, or by retrieving the answer directly from fact memory, in which the left angular gyrus (AG) and perisylvian areas may play a role. Mental addition is usually studied with problems presented in the Arabic notation (4 + 2), and less so with number words (four + two) or dots (:: + ·.). In the present study, we investigated how the notation of numbers influences processing during simple mental arithmetic. Twenty-five highly educated participants performed simple arithmetic while their brain activity was recorded with functional magnetic resonance imaging. To reveal the effect of number notation, arithmetic problems were presented in a non-symbolic (Dots) or symbolic (Arabic; Words) notation. Furthermore, we asked whether IPS processing during mental arithmetic is magnitude-specific or of a more general, visuospatial nature. To this end, we included perception and manipulation of non-magnitude formats (Colors; unfamiliar Japanese Characters). Increased IPS activity was observed, suggesting magnitude calculations during addition of non-symbolic numbers. In contrast, there was greater activity in the AG and perisylvian areas for symbolic compared to non-symbolic addition, suggesting increased verbal fact retrieval. Furthermore, IPS activity was not specific to processing of numerical magnitude but was also present for non-magnitude stimuli that required mental visuospatial processing (Color-mixing; Character-memory, measured by a delayed match-to-sample task). Together, our data suggest that simple non-symbolic sums are calculated using visual imagery, whereas answers for simple symbolic sums are retrieved from verbal memory.
  • Vanlangendonck, F., Willems, R. M., Menenti, L., & Hagoort, P. (2016). An early influence of common ground during speech planning. Language, Cognition and Neuroscience, 31(6), 741-750. doi:10.1080/23273798.2016.1148747.

    Abstract

    In order to communicate successfully, speakers have to take into account which information they share with their addressee, i.e., common ground. In the current experiment we investigated how and when common ground affects speech planning by tracking speakers’ eye movements while they played a referential communication game. We found evidence that common ground exerts an early but incomplete effect on speech planning. In addition, we did not find longer planning times when speakers had to take common ground into account, suggesting that taking common ground into account is not necessarily an effortful process. Common ground information thus appears to act as a partial constraint on language production that is integrated flexibly and efficiently in the speech planning process.
  • Weber, K., Christiansen, M., Petersson, K. M., Indefrey, P., & Hagoort, P. (2016). fMRI syntactic and lexical repetition effects reveal the initial stages of learning a new language. The Journal of Neuroscience, 36, 6872-6880. doi:10.1523/JNEUROSCI.3180-15.2016.

    Abstract

    When learning a new language, we build brain networks to process and represent the acquired words and syntax and integrate these with existing language representations. It is an open question whether the same or different neural mechanisms are involved in learning and processing a novel language compared to the native language(s). Here we investigated the neural repetition effects of repeating known and novel word orders while human subjects were in the early stages of learning a new language. Combining a miniature language with a syntactic priming paradigm, we examined the neural correlates of language learning online using functional magnetic resonance imaging (fMRI). In left inferior frontal gyrus (LIFG) and posterior temporal cortex, the repetition of novel syntactic structures led to repetition enhancement, while repetition of known structures resulted in repetition suppression. Additional verb repetition led to an increase in the syntactic repetition enhancement effect in language-related brain regions. Similarly, the repetition of verbs led to repetition enhancement effects in areas related to lexical and semantic processing, an effect that continued to increase in a subset of these regions. Repetition enhancement might reflect a mechanism to build and strengthen a neural network to process novel syntactic structures and lexical items. By contrast, the observed repetition suppression points to overlapping neural mechanisms for native and new language constructions when these have sufficient structural similarities.
  • Weber, K., Luther, L., Indefrey, P., & Hagoort, P. (2016). Overlap and differences in brain networks underlying the processing of complex sentence structures in second language users compared to native speakers. Brain Connectivity, 6(4), 345-355. doi:10.1089/brain.2015.0383.

    Abstract

    When we learn a second language later in life, do we integrate it with the established neural networks in place for the first language, or is at least a partially new network recruited? While there is evidence that simple grammatical structures in a second language share a system with the native language, the story becomes more multifaceted for complex sentence structures. In this study we investigated the underlying brain networks in native speakers compared to proficient second language users while processing complex sentences. As hypothesized, complex structures were processed by the same large-scale inferior frontal and middle temporal language networks of the brain in the second language, as seen in native speakers. These effects were seen both in activations and in task-related connectivity patterns. Furthermore, the second language users showed increased task-related connectivity from inferior frontal to inferior parietal regions of the brain, regions related to attention and cognitive control, suggesting less automatic processing for these structures in a second language.
  • Weber, K., Lau, E., Stillerman, B., & Kuperberg, G. (2016). The Yin and the Yang of Prediction: An fMRI Study of Semantic Predictive Processing. PLoS One, 11(3): e0148637. doi:10.1371/journal.pone.0148637.

    Abstract

    Probabilistic prediction plays a crucial role in language comprehension. When predictions are fulfilled, the resulting facilitation allows for fast, efficient processing of ambiguous, rapidly unfolding input; when predictions are not fulfilled, the resulting error signal allows us to adapt to broader statistical changes in this input. We used functional magnetic resonance imaging (fMRI) to examine the neuroanatomical networks engaged in semantic predictive processing and adaptation. We used a relatedness proportion semantic priming paradigm, in which we manipulated the probability of predictions while holding local semantic context constant. Under conditions of higher (versus lower) predictive validity, we replicate previous observations of reduced activity to semantically predictable words in the left anterior superior/middle temporal cortex, reflecting facilitated processing of targets that are consistent with prior semantic predictions. In addition, under conditions of higher (versus lower) predictive validity we observed significant differences in the effects of semantic relatedness within the left inferior frontal gyrus and the posterior portion of the left superior/middle temporal gyrus. We suggest that together these two regions mediated the suppression of unfulfilled semantic predictions and lexico-semantic processing of unrelated targets that were inconsistent with these predictions. Moreover, under conditions of higher (versus lower) predictive validity, a functional connectivity analysis showed that the left inferior frontal and left posterior superior/middle temporal gyrus were more tightly interconnected with one another, as well as with the left anterior cingulate cortex. The left anterior cingulate cortex was, in turn, more tightly connected to superior lateral frontal cortices and subcortical regions – a network that mediates rapid learning and adaptation and that may have played a role in switching to a more predictive mode of processing in response to the statistical structure of the wider environmental context. Together, these findings highlight close links between the networks mediating semantic prediction, executive function and learning, giving new insights into how our brains are able to flexibly adapt to our environment.

    Additional information

    Data availability
  • Willems, R. M., & Jacobs, A. M. (2016). Caring about Dostoyevsky: The untapped potential of studying literature. Trends in Cognitive Sciences, 20(4), 243-245. doi:10.1016/j.tics.2015.12.009.

    Abstract

    Should cognitive scientists and neuroscientists care about Dostoyevsky? Engaging with fiction is a natural and rich behavior, providing a unique window onto the mind and brain, particularly for mental simulation, emotion, empathy, and immersion. With advances in analysis techniques, it is time that cognitive scientists and neuroscientists embrace literature and fiction.
  • Willems, R. M., Frank, S. L., Nijhoff, A. D., Hagoort, P., & Van den Bosch, A. (2016). Prediction during natural language comprehension. Cerebral Cortex, 26(6), 2506-2516. doi:10.1093/cercor/bhv075.

    Abstract

    The notion of prediction is studied in cognitive neuroscience with increasing intensity. We investigated the neural basis of 2 distinct aspects of word prediction, derived from information theory, during story comprehension. We assessed the effect of entropy of next-word probability distributions as well as surprisal. A computational model determined entropy and surprisal for each word in 3 literary stories. Twenty-four healthy participants listened to the same 3 stories while their brain activation was measured using fMRI. Reversed speech fragments were presented as a control condition. Brain areas sensitive to entropy were left ventral premotor cortex, left middle frontal gyrus, right inferior frontal gyrus, left inferior parietal lobule, and left supplementary motor area. Areas sensitive to surprisal were left inferior temporal sulcus (“visual word form area”), bilateral superior temporal gyrus, right amygdala, bilateral anterior temporal poles, and right inferior frontal sulcus. We conclude that prediction during language comprehension can occur at several levels of processing, including at the level of word form. Our study exemplifies the power of combining computational linguistics with cognitive neuroscience, and additionally underlines the feasibility of studying continuous spoken language materials with fMRI.
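    For readers who want the two measures concretely: surprisal is the negative log-probability of the word that actually occurred, and entropy is the expected surprisal over the next-word probability distribution. The sketch below computes both for a made-up toy distribution; the example context and probabilities are illustrative assumptions standing in for the computational model the study applied to its three literary stories.

```python
# Word-level surprisal and entropy, the two information-theoretic measures
# used in Willems et al. (2016). The toy distribution below is a
# hypothetical stand-in for a real language model's next-word estimates.
import math

def surprisal(p_word: float) -> float:
    """Surprisal in bits of the word that occurred: -log2 P(w | context)."""
    return -math.log2(p_word)

def entropy(next_word_dist: dict) -> float:
    """Entropy in bits of the next-word distribution: -sum_w P(w) * log2 P(w)."""
    return -sum(p * math.log2(p) for p in next_word_dist.values() if p > 0)

# Hypothetical next-word distribution after some story context.
dist = {"cat": 0.6, "ball": 0.3, "mailman": 0.1}
assert abs(sum(dist.values()) - 1.0) < 1e-9  # must be a probability distribution

print(f"entropy before the word appears: {entropy(dist):.2f} bits")
print(f"surprisal if 'mailman' occurs:   {surprisal(dist['mailman']):.2f} bits")
```

    A low-entropy distribution marks a point where the upcoming word is predictable in principle; a high-surprisal word marks a prediction that failed. The study maps these two measures onto distinct sets of brain areas.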

    Additional information

    Supplementary Material
  • Zimmermann, M., Verhagen, L., De Lange, F., & Toni, I. (2016). The extrastriate body area computes desired goal states during action planning. eNeuro, 3(2): ENEURO.0020-16.2016. doi:10.1523/ENEURO.0020-16.2016.

    Abstract

    How do object perception and action interact at a neural level? Here we test the hypothesis that perceptual features, processed by the ventral visuoperceptual stream, are used as priors by the dorsal visuomotor stream to specify goal-directed grasping actions. We present three main findings, which were obtained by combining time-resolved transcranial magnetic stimulation and kinematic tracking of grasp-and-rotate object manipulations, in a group of healthy human participants (N = 22). First, the extrastriate body area (EBA), in the ventral stream, provides an initial structure to motor plans, based on current and desired states of a grasped object and of the grasping hand. Second, the contributions of EBA are earlier in time than those of a caudal intraparietal region known to specify the action plan. Third, the contributions of EBA are particularly important when desired and current object configurations differ, and multiple courses of action are possible. These findings specify the temporal and functional characteristics of a mechanism that integrates perceptual processing with motor planning.
  • Allen, S., Ozyurek, A., Kita, S., Brown, A., Furman, R., Ishizuka, T., & Fujii, M. (2007). Language-specific and universal influences in children's syntactic packaging of manner and path: A comparison of English, Japanese, and Turkish. Cognition, 102, 16-48. doi:10.1016/j.cognition.2005.12.006.

    Abstract

    Different languages map semantic elements of spatial relations onto different lexical and syntactic units. These crosslinguistic differences raise important questions for language development in terms of how this variation is learned by children. We investigated how Turkish-, English-, and Japanese-speaking children (mean age 3;8) package the semantic elements of Manner and Path onto syntactic units when both the Manner and the Path of the moving Figure occur simultaneously and are salient in the event depicted. Both universal and language-specific patterns were evident in our data. Children used the semantic-syntactic mappings preferred by adult speakers of their own languages, and even expressed subtle syntactic differences that encode different relations between Manner and Path in the same way as their adult counterparts (i.e., Manner causing vs. incidental to Path). However, not all types of semantics-syntax mappings were easy for children to learn (e.g., expressing Manner and Path elements in two verbal clauses). In such cases, Turkish- and Japanese-speaking children frequently used syntactic patterns that were not typical in the target language but were similar to patterns used by English-speaking children, suggesting some universal influence. Thus, both language-specific and universal tendencies guide the development of complex spatial expressions.
  • Bramão, I., Mendonça, A., Faísca, L., Ingvar, M., Petersson, K. M., & Reis, A. (2007). The impact of reading and writing skills on a visuo-motor integration task: A comparison between illiterate and literate subjects. Journal of the International Neuropsychological Society, 13(2), 359-364. doi:10.1017/S1355617707070440.

    Abstract

    Previous studies have shown a significant association between reading skills and performance on visuo-motor tasks. In order to clarify whether reading and writing skills modulate non-linguistic domains, we investigated the performance of two literacy groups on a visuo-motor integration task with non-linguistic stimuli. Twenty-one illiterate participants and twenty matched literate controls were included in the experiment. Subjects were instructed to use the right or the left index finger to point to and touch a randomly presented target on the right or left side of a touch screen. The results showed that the literate subjects were significantly faster in detecting and touching targets on the left compared to the right side of the screen. In contrast, the presentation side did not affect the performance of the illiterate group. These results lend support to the idea that having acquired reading and writing skills, and thus a preferred left-to-right reading direction, influences visual scanning.
  • Furman, R., & Ozyurek, A. (2007). Development of interactional discourse markers: Insights from Turkish children's and adults' narratives. Journal of Pragmatics, 39(10), 1742-1757. doi:10.1016/j.pragma.2007.01.008.

    Abstract

    Discourse markers (DMs) are linguistic elements that index different relations and coherence between units of talk (Schiffrin, Deborah, 1987. Discourse Markers. Cambridge University Press, Cambridge). Most research on the development of these forms has focused on conversations rather than narratives, and furthermore has not directly compared children's use of DMs to adult usage. This study examines the development of three DMs (şey ‘uuhh’, yani ‘I mean’, işte ‘y’know’) that mark interactional levels of discourse in oral Turkish narratives in 60 Turkish children (3-, 5- and 9-year-olds) and 20 Turkish-speaking adults. The results show that the frequency and functions of DMs change with age. Children learn şey, which mainly marks exchange-level structures, earliest. However, yani and işte are multi-functional, marking both information states and participation frameworks, and are consequently learned later. Children also use DMs with different functions than adults do. Overall, the results show that learning to use interactional DMs in narratives is complex and continues beyond age 9, especially for multi-functional DMs that index an interplay of discourse coherence at different levels.
  • Gisselgard, J., Uddén, J., Ingvar, M., & Petersson, K. M. (2007). Disruption of order information by irrelevant items: A serial recognition paradigm. Acta Psychologica, 124(3), 356-369. doi:10.1016/j.actpsy.2006.04.002.

    Abstract

    The irrelevant speech effect (ISE) is defined as a decrement in visually presented digit-list short-term memory performance due to exposure to irrelevant auditory material. Perhaps the most successful theoretical explanation of the effect is the changing-state hypothesis. This hypothesis explains the effect in terms of confusion between amodal serial-order cues, and represents a view based on the interference caused by the processing of similar order information in the visual and auditory materials. An alternative view suggests that the interference occurs as a consequence of the similarity between the visual and auditory contents of the stimuli. An important argument for the former view is the observation that the ISE is almost exclusively observed in tasks that require memory for serial order. However, most short-term memory tasks require that both item and order information be retained in memory. An ideal task to investigate the sensitivity of maintenance of serial order to irrelevant speech would be one that calls upon order information but not item information. One task that is particularly suited to address this issue is serial recognition. In a typical serial recognition task, a list of items is presented and then probed by the same list in which the order of two adjacent items has been transposed. Because the encoding string is re-presented, serial recognition primarily requires the serial order to be maintained, while the content of the presented items is deemphasized. In demonstrating a highly significant ISE of changing versus steady-state auditory items in a serial recognition task, the present finding lends support to and extends previous empirical findings suggesting that irrelevant speech has the potential to interfere with the coding of the order of the items to be memorized.
  • Hagoort, P., & Van Berkum, J. J. A. (2007). Beyond the sentence given. Philosophical Transactions of the Royal Society. Series B: Biological Sciences, 362, 801-811.

    Abstract

    A central and influential idea among researchers of language is that our language faculty is organized according to Fregean compositionality, which states that the meaning of an utterance is a function of the meaning of its parts and of the syntactic rules by which these parts are combined. Since the domain of syntactic rules is the sentence, the implication of this idea is that language interpretation takes place in a two-step fashion. First, the meaning of a sentence is computed. In a second step, the sentence meaning is integrated with information from prior discourse, world knowledge, information about the speaker and semantic information from extra-linguistic domains such as co-speech gestures or the visual world. Here, we present results from recordings of event-related brain potentials that are inconsistent with this classical two-step model of language interpretation. Our data support a one-step model in which knowledge about the context and the world, concomitant information from other modalities, and the speaker are brought to bear immediately, by the same fast-acting brain system that combines the meanings of individual words into a message-level representation. Underlying the one-step model is the immediacy assumption, according to which all available information will immediately be used to co-determine the interpretation of the speaker's message. Functional magnetic resonance imaging data that we collected indicate that Broca's area plays an important role in semantic unification. Language comprehension involves the rapid incorporation of information in a 'single unification space', coming from a broader range of cognitive domains than presupposed in the standard two-step model of interpretation.
  • Hald, L. A., Steenbeek-Planting, E. G., & Hagoort, P. (2007). The interaction of discourse context and world knowledge in online sentence comprehension: Evidence from the N400. Brain Research, 1146, 210-218. doi:10.1016/j.brainres.2007.02.054.

    Abstract

    In an ERP experiment we investigated how the recruitment and integration of world knowledge information relate to the integration of information within a current discourse context. Participants were presented with short discourse contexts which were followed by a sentence that contained a critical word that was correct or incorrect based on general world knowledge and the supporting discourse context, or was more or less acceptable based on the combination of general world knowledge and the specific local discourse context. Relative to the critical word in the correct world knowledge sentences following a neutral discourse, all other critical words elicited an N400 effect that began at about 300 ms after word onset. However, the magnitude of the N400 effect varied in a way that suggests an interaction between world knowledge and discourse context. The results indicate that both world knowledge and discourse context have an effect on sentence interpretation, but neither overrides the other.
  • Janzen, G., Wagensveld, B., & Van Turennout, M. (2007). Neural representation of navigational relevance is rapidly induced and long lasting. Cerebral Cortex, 17(4), 975-981. doi:10.1093/cercor/bhl008.

    Abstract

    Successful navigation is facilitated by the presence of landmarks. Previous functional magnetic resonance imaging (fMRI) evidence indicated that the human parahippocampal gyrus automatically distinguishes between landmarks placed at navigationally relevant (decision points) and irrelevant locations (nondecision points). This storage of navigational relevance can provide a neural mechanism underlying successful navigation. However, an efficient wayfinding mechanism requires that important spatial information is learned quickly and maintained over time. The present study investigates whether the representation of navigational relevance is modulated by time and practice. Participants learned two film sequences through virtual mazes containing objects at decision and at nondecision points. One maze was shown once, and the other maze was shown three times. Twenty-four hours after study, event-related fMRI data were acquired during recognition of the objects. The results showed that activity in the parahippocampal gyrus was increased for objects previously placed at decision points as compared with objects placed at nondecision points. The decision point effect was not modulated by the number of exposures to the mazes and was independent of explicit memory functions. These findings suggest a persistent representation of navigationally relevant information, which is stable after only one exposure to an environment. These rapidly induced and long-lasting changes in object representation provide a basis for successful wayfinding.
  • Janzen, G., & Weststeijn, C. G. (2007). Neural representation of object location and route direction: An event-related fMRI study. Brain Research, 1165, 116-125. doi:10.1016/j.brainres.2007.05.074.

    Abstract

    The human brain distinguishes between landmarks placed at navigationally relevant and irrelevant locations. However, to provide a successful wayfinding mechanism, not only landmarks but also the routes between them need to be stored. We examined the neural representation of a memory for route direction and a memory for relevant landmarks. Healthy human adults viewed objects along a route through a virtual maze. Event-related functional magnetic resonance imaging (fMRI) data were acquired during a subsequent subliminal priming recognition task. Prime-objects either preceded or succeeded a target-object on a previously learned route. Our results provide evidence that the parahippocampal gyri distinguish between relevant and irrelevant landmarks, whereas the inferior parietal gyrus, the anterior cingulate gyrus, and the right caudate nucleus are involved in the coding of route direction. These data show that separate memory systems store different types of spatial information: a memory for navigationally relevant object information and a memory for route direction.
  • Kita, S., Ozyurek, A., Allen, S., Brown, A., Furman, R., & Ishizuka, T. (2007). Relations between syntactic encoding and co-speech gestures: Implications for a model of speech and gesture production. Language and Cognitive Processes, 22(8), 1212-1236. doi:10.1080/01690960701461426.

    Abstract

    Gestures that accompany speech are known to be tightly coupled with speech production. However, little is known about the cognitive processes that underlie this link. Previous cross-linguistic research has provided preliminary evidence for online interaction between the two systems, based on the systematic co-variation found between how different languages syntactically package Manner and Path information of a motion event and how gestures represent Manner and Path. Here we elaborate on this finding by testing whether speakers within the same language gesturally express Manner and Path differently according to their online choice of syntactic packaging of Manner and Path, or whether gestural expression is pre-determined by a habitual conceptual schema congruent with the linguistic typology. Typologically congruent and incongruent syntactic structures for expressing Manner and Path (i.e., in a single clause or multiple clauses) were elicited from English speakers. We found that gestural expressions were determined by the online choice of syntactic packaging rather than by a habitual conceptual schema. It is therefore concluded that speech and gesture production processes interface online at the conceptual planning phase. Implications of the findings for models of speech and gesture production are discussed.
  • Marklund, P., Fransson, P., Cabeza, R., Petersson, K. M., Ingvar, M., & Nyberg, L. (2007). Sustained and transient neural modulations in prefrontal cortex related to declarative long-term memory, working memory, and attention. Cortex, 43(1), 22-37. doi:10.1016/S0010-9452(08)70443-X.

    Abstract

    Common activations in prefrontal cortex (PFC) during episodic and semantic long-term memory (LTM) tasks have been hypothesized to reflect functional overlap in terms of working memory (WM) and cognitive control. To evaluate a WM account of LTM-general activations, the present study took into consideration that cognitive task performance depends on the dynamic operation of multiple component processes, some of which are stimulus-synchronous and transient in nature, and some of which are engaged throughout a task in a sustained fashion. PFC and WM may be implicated in both of these temporally independent components. To elucidate these possibilities, we employed mixed blocked/event-related functional magnetic resonance imaging (fMRI) procedures to assess the extent to which sustained or transient activation patterns overlapped across tasks indexing episodic and semantic LTM, attention (ATT), and WM. Within PFC, ventrolateral and medial areas exhibited sustained activity across all tasks, whereas more anterior regions including right frontopolar cortex were commonly engaged in sustained processing during the three memory tasks. These findings do not support a WM account of sustained frontal responses during LTM tasks, but instead suggest that the pattern that was common to all tasks reflects general attentional set/vigilance, and that the shared WM-LTM pattern mediates control processes related to upholding task set. Transient responses during the three memory tasks were assessed relative to ATT to isolate item-specific mnemonic processes and were found to be largely distinct from sustained effects. Task-specific effects were observed for each memory task. In addition, a common item response for all memory tasks involved left dorsolateral PFC (DLPFC). The latter response might be seen as reflecting WM processes during LTM retrieval. Thus, our findings suggest that a WM account of shared PFC recruitment in LTM tasks holds for common transient item-related responses rather than for sustained state-related responses, which are better seen as reflecting more general attentional/control processes.
  • Menenti, L., & Burani, C. (2007). What causes the effect of age of acquisition in lexical processing? Quarterly Journal of Experimental Psychology, 60(5), 652-660. doi:10.1080/17470210601100126.

    Abstract

    Three hypotheses for effects of age of acquisition (AoA) in lexical processing are compared: the cumulative frequency hypothesis (frequency and AoA both influence the number of encounters with a word, which influences processing speed), the semantic hypothesis (early-acquired words are processed faster because they are more central in the semantic network), and the neural network model (early-acquired words are faster because they are acquired when a network has maximum plasticity). In a regression study of lexical decision (LD) and semantic categorization (SC) in Italian and Dutch, contrary to the cumulative frequency hypothesis, AoA coefficients were larger than frequency coefficients, and, contrary to the semantic hypothesis, the effect of AoA was not larger in SC than in LD. The neural network model was supported.
  • Nieuwland, M. S., Petersson, K. M., & Van Berkum, J. J. A. (2007). On sense and reference: Examining the functional neuroanatomy of referential processing. NeuroImage, 37(3), 993-1004. doi:10.1016/j.neuroimage.2007.05.048.

    Abstract

    In an event-related fMRI study, we examined the cortical networks involved in establishing reference during language comprehension. We compared BOLD responses to sentences containing referentially ambiguous pronouns (e.g., “Ronald told Frank that he…”), referentially failing pronouns (e.g., “Rose told Emily that he…”) or coherent pronouns. Referential ambiguity selectively recruited medial prefrontal regions, suggesting that readers engaged in problem-solving to select a unique referent from the discourse model. Referential failure elicited activation increases in brain regions associated with morpho-syntactic processing, and, for those readers who took failing pronouns to refer to unmentioned entities, additional regions associated with elaborative inferencing were observed. The networks activated by these two referential problems did not overlap with the network activated by a standard semantic anomaly. Instead, we observed a double dissociation, in that the systems activated by semantic anomaly are deactivated by referential ambiguity, and vice versa. This inverse coupling may reflect the dynamic recruitment of semantic and episodic processing to resolve semantically or referentially problematic situations. More generally, our findings suggest that neurocognitive accounts of language comprehension need to address not just how we parse a sentence and combine individual word meanings, but also how we determine who's who and what's what during language comprehension.
  • Nieuwland, M. S., Otten, M., & Van Berkum, J. J. A. (2007). Who are you talking about? Tracking discourse-level referential processing with event-related brain potentials. Journal of Cognitive Neuroscience, 19(2), 228-236. doi:10.1162/jocn.2007.19.2.228.

    Abstract

    In this event-related brain potentials (ERPs) study, we explored whether referential ambiguity can be selectively tracked during spoken discourse comprehension. Earlier ERP research has shown that referentially ambiguous nouns (e.g., “the girl” in a two-girl context) elicit a frontal, sustained negative shift relative to unambiguous control words. In the current study, we examined whether this ERP effect reflects “deep” situation model ambiguity or “superficial” textbase ambiguity. We contrasted these different interpretations by investigating whether a discourse-level semantic manipulation that prevents referential ambiguity also averts the elicitation of a referentially induced ERP effect. We compared ERPs elicited by nouns that were referentially nonambiguous but were associated with two discourse entities (e.g., “the girl” with two girls introduced in the context, but one of whom has died or left the scene), with referentially ambiguous and nonambiguous control words. Although temporarily referentially ambiguous nouns elicited a frontal negative shift compared to control words, the “double bound” but referentially nonambiguous nouns did not. These results suggest that it is possible to selectively track referential ambiguity with ERPs at the level that is most relevant to discourse comprehension, the situation model.
  • Otten, M., & Van Berkum, J. J. A. (2007). What makes a discourse constraining? Comparing the effects of discourse message and scenario fit on the discourse-dependent N400 effect. Brain Research, 1153, 166-177. doi:10.1016/j.brainres.2007.03.058.

    Abstract

    A discourse context provides a reader with a great deal of information that can constrain further language processing, at several different levels. In this experiment we used event-related potentials (ERPs) to explore whether discourse-generated contextual constraints are based on the precise message of the discourse or, more 'loosely', on the scenario suggested by one or more content words in the text. Participants read constraining stories whose precise message rendered a particular word highly predictable ("The manager thought that the board of directors should assemble to discuss the issue. He planned a...[meeting]") as well as non-constraining control stories that were only biasing in virtue of the scenario suggested by some of the words ("The manager thought that the board of directors need not assemble to discuss the issue. He planned a..."). Coherent words that were inconsistent with the message-level expectation raised in a constraining discourse (e.g., "session" instead of "meeting") elicited a classic centroparietal N400 effect. However, when the same words were only inconsistent with the scenario loosely suggested by earlier words in the text, they elicited a different negativity around 400 ms, with a more anterior, left-lateralized maximum. The fact that the discourse-dependent N400 effect cannot be reduced to scenario-mediated priming reveals that it reflects the rapid use of precise message-level constraints in comprehension. At the same time, the left-lateralized negativity in non-constraining stories suggests that, at least in the absence of strong message-level constraints, scenario-mediated priming also rapidly affects comprehension.
  • Otten, M., Nieuwland, M. S., & Van Berkum, J. J. A. (2007). Great expectations: Specific lexical anticipation influences the processing of spoken language. BMC Neuroscience, 8: 89. doi:10.1186/1471-2202-8-89.

    Abstract

    Background: Recently, several studies have shown that people use contextual information to make predictions about the rest of the sentence or story as the text unfolds. Using event-related potentials (ERPs), we tested whether these on-line predictions are based on a message-based representation of the discourse or on simple automatic activation by individual words. Subjects heard short stories that were highly constraining for one specific noun, or stories that were not specifically predictive but contained the same prime words as the predictive stories. To test whether listeners make specific predictions, critical nouns were preceded by an adjective that was inflected according to, or in contrast with, the gender of the expected noun. Results: When the message of the preceding discourse was predictive, adjectives with an unexpected gender inflection evoked a negative deflection over right-frontal electrodes between 300 and 600 ms. This effect was not present in the prime control context, indicating that the prediction mismatch does not hinge on word-based priming but is based on the actual message of the discourse. Conclusions: When listening to a constraining discourse, people rapidly make very specific predictions about the remainder of the story as it unfolds. These predictions are not simply based on word-based automatic activation, but take into account the actual message of the discourse.
  • Ozyurek, A., Willems, R. M., Kita, S., & Hagoort, P. (2007). On-line integration of semantic information from speech and gesture: Insights from event-related brain potentials. Journal of Cognitive Neuroscience, 19(4), 605-616. doi:10.1162/jocn.2007.19.4.605.

    Abstract

    During language comprehension, listeners use the global semantic representation from previous sentence or discourse context to immediately integrate the meaning of each upcoming word into the unfolding message-level representation. Here we investigate whether communicative gestures that often spontaneously co-occur with speech are processed in a similar fashion and integrated into the previous sentence context in the same way as lexical meaning. Event-related potentials were measured while subjects listened to spoken sentences with a critical verb (e.g., knock), which was accompanied by an iconic co-speech gesture (i.e., KNOCK). Verbal and/or gestural semantic content matched or mismatched the content of the preceding part of the sentence. Despite the difference in the modality and in the specificity of meaning conveyed by spoken words and gestures, the latency, amplitude, and topographical distribution of both word and gesture mismatches are found to be similar, indicating that the brain integrates both types of information simultaneously. This provides evidence for the claim that neural processing in language comprehension involves the simultaneous incorporation of information coming from a broader domain of cognition than only verbal semantics. The neural evidence for similar integration of information from speech and gesture emphasizes the tight interconnection between speech and co-speech gestures.
  • Ozyurek, A., & Kelly, S. D. (2007). Gesture, language, and brain. Brain and Language, 101(3), 181-185. doi:10.1016/j.bandl.2007.03.006.
