Publications

  • Aldosimani, M., Verdonschot, R. G., Iwamoto, Y., Nakazawa, M., Mallya, S. M., Kakimoto, N., Toyosawa, S., Kreiborg, S., & Murakami, S. (2022). Prognostic factors for lymph node metastasis from upper gingival carcinomas. Oral Radiology, 38(3), 389-396. doi:10.1007/s11282-021-00568-w.

    Abstract

    This study sought to identify tumor characteristics that associate with regional lymph node metastases in squamous cell carcinomas originating in the upper gingiva.
  • Arana, S. (2022). Abstract neural representations of language during sentence comprehension: Evidence from MEG and Behaviour. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Bauer, B. L. M. (2022). Counting systems. In A. Ledgeway, & M. Maiden (Eds.), The Cambridge Handbook of Romance Linguistics (pp. 459-488). Cambridge: Cambridge University Press.

    Abstract

    The Romance counting system is numerical – with residues of earlier systems whereby each commodity had its own unit of quantification – and decimal. Numeral formations beyond ‘10’ are compounds, combining two or more numerals that are in an arithmetical relation, typically that of addition and multiplication. Formal variation across the (standard) Romance languages and dialects and across historical stages involves the relative sequence of the composing elements, absence or presence of connectors, their synthetic vs. analytic nature, and the degree of grammatical marking. A number of ‘deviant’ numeral formations raise the question of borrowing vs. independent development, such as vigesimals (featuring a base ‘20’ instead of ‘10’) in certain Romance varieties and the teen and decad formations in Romanian. The other types of numeral in Romance, which derive from the unmarked and consistent cardinals, feature a significantly higher degree of formal complexity and variation involving Latin formants and tend toward analyticity. While Latin features prominently in the Romance counting system as a source of numeral formations and suffixes, it is only in Romance that the inherited decimal system reached its full potential, illustrating its increasing prominence, reflected not only in numerals, but also in language acquisition, sign language, and post-Revolution measuring systems.
  • Bauer, B. L. M. (2022). Finite verb + infinite + object in later Latin: Early brace constructions? In G. V. M. Haverling (Ed.), Studies on Late and Vulgar Latin in the Early 21st Century: Acts of the 12th International Colloquium "Latin vulgaire – Latin tardif" (pp. 166-181). Uppsala: Acta Universitatis Upsaliensis.
  • Bocanegra, B. R., Poletiek, F. H., & Zwaan, R. A. (2022). Language concatenates perceptual features into representations during comprehension. Journal of Memory and Language, 127: 104355. doi:10.1016/j.jml.2022.104355.

    Abstract

    Although many studies have investigated the activation of perceptual representations during language comprehension, to our knowledge only one previous study has directly tested how perceptual features are combined into representations during comprehension. In their classic study, Potter and Faulconer [(1979). Understanding noun phrases. Journal of Verbal Learning and Verbal Behavior, 18, 509–521.] investigated the perceptual representation of adjective-noun combinations. However, their non-orthogonal design did not allow the differentiation between conjunctive vs. disjunctive representations. Using randomized orthogonal designs, we observe evidence for disjunctive perceptual representations when participants represent feature combinations simultaneously (in several experiments; N = 469), and we observe evidence for conjunctive perceptual representations when participants represent feature combinations sequentially (in several experiments; N = 628). Our findings show that the generation of conjunctive representations during comprehension depends on the concatenation of linguistic cues, and thus suggest that the construction of elaborate perceptual representations may critically depend on language.
  • Carota, F., Schoffelen, J.-M., Oostenveld, R., & Indefrey, P. (2022). The time course of language production as revealed by pattern classification of MEG sensor data. The Journal of Neuroscience, 42(29), 5745-5754. doi:10.1523/JNEUROSCI.1923-21.2022.

    Abstract

    Language production involves a complex set of computations, from conceptualization to articulation, which are thought to engage cascading neural events in the language network. However, recent neuromagnetic evidence suggests simultaneous meaning-to-speech mapping in picture naming tasks, as indexed by early parallel activation of frontotemporal regions to lexical semantic, phonological, and articulatory information. Here we investigate the time course of word production, asking to what extent such “earliness” is a distinctive property of the associated spatiotemporal dynamics. Using MEG, we recorded the neural signals of 34 human subjects (26 males) overtly naming 134 images from four semantic object categories (animals, foods, tools, clothes). Within each category, we covaried word length, as quantified by the number of syllables contained in a word, and phonological neighborhood density to target lexical and post-lexical phonological/phonetic processes. Multivariate pattern analyses searchlights in sensor space distinguished the stimulus-locked spatiotemporal responses to object categories early on, from 150 to 250 ms after picture onset, whereas word length was decoded in left frontotemporal sensors at 250-350 ms, followed by the latency of phonological neighborhood density (350-450 ms). Our results suggest a progression of neural activity from posterior to anterior language regions for the semantic and phonological/phonetic computations preparing overt speech, thus supporting serial cascading models of word production.
  • Carter, G., & Nieuwland, M. S. (2022). Predicting definite and indefinite referents during discourse comprehension: Evidence from event‐related potentials. Cognitive Science, 46(2): e13092. doi:10.1111/cogs.13092.

    Abstract

    Linguistic predictions may be generated from and evaluated against a representation of events and referents described in the discourse. Compatible with this idea, recent work shows that predictions about novel noun phrases include their definiteness. In the current follow-up study, we ask whether people engage similar prediction-related processes for definite and indefinite referents. This question is relevant for linguistic theories that imply a processing difference between definite and indefinite noun phrases, typically because definiteness is thought to require a uniquely identifiable referent in the discourse. We addressed this question in an event-related potential (ERP) study (N = 48) with preregistration of data acquisition, preprocessing, and Bayesian analysis. Participants read Dutch mini-stories with a definite or indefinite novel noun phrase (e.g., “het/een huis,” the/a house), wherein (in)definiteness of the article was either expected or unexpected and the noun was always strongly expected. Unexpected articles elicited enhanced N400s, but unexpectedly indefinite articles also elicited a positive ERP effect at frontal channels compared to expectedly indefinite articles. We tentatively link this effect to an antiuniqueness violation, which may force people to introduce a new referent over and above the already anticipated one. Interestingly, expectedly definite nouns elicited larger N400s than unexpectedly definite nouns (replicating a previous surprising finding) and indefinite nouns. Although the exact nature of these noun effects remains unknown, expectedly definite nouns may have triggered the strongest semantic activation because they alone refer to specific and concrete referents. In sum, results from both the articles and nouns clearly demonstrate that definiteness marking has a rapid effect on processing, counter to recent claims regarding definiteness processing.
  • Coopmans, C. W., De Hoop, H., Kaushik, K., Hagoort, P., & Martin, A. E. (2022). Hierarchy in language interpretation: Evidence from behavioural experiments and computational modelling. Language, Cognition and Neuroscience, 37(4), 420-439. doi:10.1080/23273798.2021.1980595.

    Abstract

    It has long been recognised that phrases and sentences are organised hierarchically, but many computational models of language treat them as sequences of words without computing constituent structure. Against this background, we conducted two experiments which showed that participants interpret ambiguous noun phrases, such as second blue ball, in terms of their abstract hierarchical structure rather than their linear surface order. When a neural network model was tested on this task, it could simulate such “hierarchical” behaviour. However, when we changed the training data such that they were not entirely unambiguous anymore, the model stopped generalising in a human-like way. It did not systematically generalise to novel items, and when it was trained on ambiguous trials, it strongly favoured the linear interpretation. We argue that these models should be endowed with a bias to make generalisations over hierarchical structure in order to be cognitively adequate models of human language.
  • Coopmans, C. W., De Hoop, H., Hagoort, P., & Martin, A. E. (2022). Effects of structure and meaning on cortical tracking of linguistic units in naturalistic speech. Neurobiology of Language, 3(3), 386-412. doi:10.1162/nol_a_00070.

    Abstract

    Recent research has established that cortical activity “tracks” the presentation rate of syntactic phrases in continuous speech, even though phrases are abstract units that do not have direct correlates in the acoustic signal. We investigated whether cortical tracking of phrase structures is modulated by the extent to which these structures compositionally determine meaning. To this end, we recorded electroencephalography (EEG) of 38 native speakers who listened to naturally spoken Dutch stimuli in different conditions, which parametrically modulated the degree to which syntactic structure and lexical semantics determine sentence meaning. Tracking was quantified through mutual information between the EEG data and either the speech envelopes or abstract annotations of syntax, all of which were filtered in the frequency band corresponding to the presentation rate of phrases (1.1–2.1 Hz). Overall, these mutual information analyses showed stronger tracking of phrases in regular sentences than in stimuli whose lexical-syntactic content is reduced, but no consistent differences in tracking between sentences and stimuli that contain a combination of syntactic structure and lexical content. While there were no effects of compositional meaning on the degree of phrase-structure tracking, analyses of event-related potentials elicited by sentence-final words did reveal meaning-induced differences between conditions. Our findings suggest that cortical tracking of structure in sentences indexes the internal generation of this structure, a process that is modulated by the properties of its input, but not by the compositional interpretation of its output.

  • Coopmans, C. W., & Cohn, N. (2022). An electrophysiological investigation of co-referential processes in visual narrative comprehension. Neuropsychologia, 172: 108253. doi:10.1016/j.neuropsychologia.2022.108253.

    Abstract

    Visual narratives make use of various means to convey referential and co-referential meaning, so comprehenders must recognize that different depictions across sequential images represent the same character(s). In this study, we investigated how the order in which different types of panels in visual sequences are presented affects how the unfolding narrative is comprehended. Participants viewed short comic strips while their electroencephalogram (EEG) was recorded. We analyzed evoked and induced EEG activity elicited by both full panels (showing a full character) and refiner panels (showing only a zoom of that full panel), and took into account whether they preceded or followed the panel to which they were co-referentially related (i.e., were cataphoric or anaphoric). We found that full panels elicited both larger N300 amplitude and increased gamma-band power compared to refiner panels. Anaphoric panels elicited a sustained negativity compared to cataphoric panels, which appeared to be sensitive to the referential status of the anaphoric panel. In the time-frequency domain, anaphoric panels elicited reduced 8–12 Hz alpha power and increased 45–65 Hz gamma-band power compared to cataphoric panels. These findings are consistent with models in which the processes involved in visual narrative comprehension partially overlap with those in language comprehension.
  • Corps, R. E., Knudsen, B., & Meyer, A. S. (2022). Overrated gaps: Inter-speaker gaps provide limited information about the timing of turns in conversation. Cognition, 223: 105037. doi:10.1016/j.cognition.2022.105037.

    Abstract

    Corpus analyses have shown that turn-taking in conversation is much faster than laboratory studies of speech planning would predict. To explain fast turn-taking, Levinson and Torreira (2015) proposed that speakers are highly proactive: They begin to plan a response to their interlocutor's turn as soon as they have understood its gist, and launch this planned response when the turn-end is imminent. Thus, fast turn-taking is possible because speakers use the time while their partner is talking to plan their own utterance. In the present study, we asked how much time upcoming speakers actually have to plan their utterances. Following earlier psycholinguistic work, we used transcripts of spoken conversations in Dutch, German, and English. These transcripts consisted of segments, which are continuous stretches of speech by one speaker. In the psycholinguistic and phonetic literature, such segments have often been used as proxies for turns. We found that in all three corpora, large proportions of the segments consisted of only one or two words, which on our estimate does not give the next speaker enough time to fully plan a response. Further analyses showed that speakers indeed often did not respond to the immediately preceding segment of their partner, but continued an earlier segment of their own. More generally, our findings suggest that speech segments derived from transcribed corpora do not necessarily correspond to turns, and the gaps between speech segments therefore only provide limited information about the planning and timing of turns.
  • Dai, B., McQueen, J. M., Terporten, R., Hagoort, P., & Kösem, A. (2022). Distracting linguistic information impairs neural tracking of attended speech. Current Research in Neurobiology, 3: 100043. doi:10.1016/j.crneur.2022.100043.

    Abstract

    Listening to speech is difficult in noisy environments, and is even harder when the interfering noise consists of intelligible speech as compared to unintelligible sounds. This suggests that the competing linguistic information interferes with the neural processing of target speech. Interference could either arise from a degradation of the neural representation of the target speech, or from increased representation of distracting speech that enters in competition with the target speech. We tested these alternative hypotheses using magnetoencephalography (MEG) while participants listened to a target clear speech in the presence of distracting noise-vocoded speech. Crucially, the distractors were initially unintelligible but became more intelligible after a short training session. Results showed that the comprehension of the target speech was poorer after training than before training. The neural tracking of target speech in the delta range (1–4 Hz) reduced in strength in the presence of a more intelligible distractor. In contrast, the neural tracking of distracting signals was not significantly modulated by intelligibility. These results suggest that the presence of distracting speech signals degrades the linguistic representation of target speech carried by delta oscillations.
  • Dijkstra, T., Peeters, D., Hieselaar, W., & van Geffen, A. (2022). Orthographic and semantic priming effects in neighbour cognates: Experiments and simulations. Bilingualism: Language and Cognition, 26(2), 371-383. doi:10.1017/S1366728922000591.

    Abstract

    To investigate how orthography and semantics interact during bilingual visual word recognition, Dutch–English bilinguals made lexical decisions in two masked priming experiments. Dutch primes and English targets were presented that were either neighbour cognates (boek – BOOK), noncognate translations (kooi – CAGE), orthographically related neighbours (neus – NEWS), or unrelated words (huid – COAT). Prime durations of 50 ms (Experiment 1) and 83 ms (Experiment 2) led to similar result patterns. Both experiments reported a large cognate facilitation effect, a smaller facilitatory noncognate translation effect, and the absence of inhibitory orthographic neighbour effects. These results indicate that cognate facilitation is in large part due to orthographic-semantic resonance. Priming results for each condition were simulated well (all r's > .50) by Multilink+, a recent computational model for word retrieval. Limitations to the role of lateral inhibition in bilingual word recognition are discussed.
  • Drijvers, L., & Holler, J. (2022). Face-to-face spatial orientation fine-tunes the brain for neurocognitive processing in conversation. iScience, 25(11): 105413. doi:10.1016/j.isci.2022.105413.

    Abstract

    We here demonstrate that face-to-face spatial orientation induces a special ‘social mode’ for neurocognitive processing during conversation, even in the absence of visibility. Participants conversed face-to-face, face-to-face but visually occluded, and back-to-back to tease apart effects caused by seeing visual communicative signals and by spatial orientation. Using dual-EEG, we found that 1) listeners’ brains engaged more strongly while conversing in face-to-face than back-to-back, irrespective of the visibility of communicative signals, 2) listeners attended to speech more strongly in a back-to-back compared to a face-to-face spatial orientation without visibility; visual signals further reduced the attention needed; 3) the brains of interlocutors were more in sync in a face-to-face compared to a back-to-back spatial orientation, even when they could not see each other; visual signals further enhanced this pattern. Communicating in face-to-face spatial orientation is thus sufficient to induce a special ‘social mode’ which fine-tunes the brain for neurocognitive processing in conversation.
  • Eekhof, L. S., Van Krieken, K., & Willems, R. M. (2022). Reading about minds: The social-cognitive potential of narratives. Psychonomic Bulletin & Review, 29, 1703-1718. doi:10.3758/s13423-022-02079-z.

    Abstract

    It is often argued that narratives improve social cognition, either by appealing to social-cognitive abilities as we engage with the story world and its characters, or by conveying social knowledge. Empirical studies have found support for both a correlational and a causal link between exposure to (literary, fictional) narratives and social cognition. However, a series of failed replications has cast doubt on the robustness of these claims. Here, we review the existing empirical literature and identify open questions and challenges. An important conclusion of the review is that previous research has given too little consideration to the diversity of narratives, readers, and social-cognitive processes involved in the social-cognitive potential of narratives. We therefore establish a research agenda, proposing that future research should focus on (1) the specific text characteristics that drive the social-cognitive potential of narratives, (2) the individual differences between readers with respect to their sensitivity to this potential, and (3) the various aspects of social cognition that are potentially affected by reading narratives. Our recommendations can guide the design of future studies that will help us understand how, for whom, and in what respect exposure to narratives can advantage social cognition.
  • Ferrari, A., Richter, D., & De Lange, F. (2022). Updating contextual sensory expectations for adaptive behaviour. The Journal of Neuroscience, 42(47), 8855-8869. doi:10.1523/JNEUROSCI.1107-22.2022.

    Abstract

    The brain has the extraordinary capacity to construct predictive models of the environment by internalizing statistical regularities in the sensory inputs. The resulting sensory expectations shape how we perceive and react to the world; at the neural level, this relates to decreased neural responses to expected compared with unexpected stimuli (‘expectation suppression’). Crucially, expectations may need revision as context changes. However, existing research has often neglected this issue. Further, it is unclear whether contextual revisions apply selectively to expectations relevant to the task at hand, hence serving adaptive behaviour. The present fMRI study examined how contextual visual expectations spread throughout the cortical hierarchy as participants update their beliefs. We created a volatile environment with two state spaces presented over separate contexts and controlled by an independent contextualizing signal. Participants attended a training session before scanning to learn contextual temporal associations among pairs of object images. The fMRI experiment then tested for the emergence of contextual expectation suppression in two separate tasks, respectively with task-relevant and task-irrelevant expectations. Behavioural and neural effects of contextual expectation emerged progressively across the cortical hierarchy as participants attuned themselves to the context: expectation suppression appeared first in the insula, inferior frontal gyrus and posterior parietal cortex, followed by the ventral visual stream, up to early visual cortex. This applied selectively to task-relevant expectations. Taken together, the present results suggest that an insular and frontoparietal executive control network may guide the flexible deployment of contextual sensory expectations for adaptive behaviour in our complex and dynamic world.
  • Gao, Y., Meng, X., Bai, Z., Liu, X., Zhang, M., Li, H., Ding, G., Liu, L., & Booth, J. R. (2022). Left and right arcuate fasciculi are uniquely related to word reading skills in Chinese-English bilingual children. Neurobiology of Language, 3(1), 109-131. doi:10.1162/nol_a_00051.

    Abstract

    Whether reading in different writing systems recruits language-unique or language-universal neural processes is a long-standing debate. Many studies have shown the left arcuate fasciculus (AF) to be involved in phonological and reading processes. In contrast, little is known about the role of the right AF in reading, but some have suggested that it may play a role in visual spatial aspects of reading or the prosodic components of language. The right AF may be more important for reading in Chinese due to its logographic and tonal properties, but this hypothesis has yet to be tested. We recruited a group of Chinese-English bilingual children (8.2 to 12.0 years old) to explore the common and unique relation of reading skill in English and Chinese to fractional anisotropy (FA) in the bilateral AF. We found that both English and Chinese reading skills were positively correlated with FA in the rostral part of the left AF-direct segment. Additionally, English reading skill was positively correlated with FA in the caudal part of the left AF-direct segment, which was also positively correlated with phonological awareness. In contrast, Chinese reading skill was positively correlated with FA in certain segments of the right AF, which was positively correlated with visual spatial ability, but not tone discrimination ability. Our results suggest that there are language universal substrates of reading across languages, but that certain left AF nodes support phonological mechanisms important for reading in English, whereas certain right AF nodes support visual spatial mechanisms important for reading in Chinese.

  • Giglio, L., Ostarek, M., Weber, K., & Hagoort, P. (2022). Commonalities and asymmetries in the neurobiological infrastructure for language production and comprehension. Cerebral Cortex, 32(7), 1405-1418. doi:10.1093/cercor/bhab287.

    Abstract

    The neurobiology of sentence production has been largely understudied compared to the neurobiology of sentence comprehension, due to difficulties with experimental control and motion-related artifacts in neuroimaging. We studied the neural response to constituents of increasing size and specifically focused on the similarities and differences in the production and comprehension of the same stimuli. Participants had to either produce or listen to stimuli in a gradient of constituent size based on a visual prompt. Larger constituent sizes engaged the left inferior frontal gyrus (LIFG) and middle temporal gyrus (LMTG) extending to inferior parietal areas in both production and comprehension, confirming that the neural resources for syntactic encoding and decoding are largely overlapping. An ROI analysis in LIFG and LMTG also showed that production elicited larger responses to constituent size than comprehension and that the LMTG was more engaged in comprehension than production, while the LIFG was more engaged in production than comprehension. Finally, increasing constituent size was characterized by later BOLD peaks in comprehension but earlier peaks in production. These results show that syntactic encoding and parsing engage overlapping areas, but there are asymmetries in the engagement of the language network due to the specific requirements of production and comprehension.

  • Gussenhoven, C., Lu, Y.-A., Lee-Kim, S.-I., Liu, C., Rahmani, H., Riad, T., & Zora, H. (2022). The sequence recall task and lexicality of tone: Exploring tone “deafness”. Frontiers in Psychology, 13: 902569. doi:10.3389/fpsyg.2022.902569.

    Abstract

    Many perception and processing effects of the lexical status of tone have been found in behavioral, psycholinguistic, and neuroscientific research, often pitting varieties of tonal Chinese against non-tonal Germanic languages. While the linguistic and cognitive evidence for lexical tone is therefore beyond dispute, the word prosodic systems of many languages continue to escape the categorizations of typologists. One controversy concerns the existence of a typological class of “pitch accent languages,” another the underlying phonological nature of surface tone contrasts, which in some cases have been claimed to be metrical rather than tonal. We address the question whether the Sequence Recall Task (SRT), which has been shown to discriminate between languages with and without word stress, can distinguish languages with and without lexical tone. Using participants from non-tonal Indonesian, semi-tonal Swedish, and two varieties of tonal Mandarin, we ran SRTs with monosyllabic tonal contrasts to test the hypothesis that high performance in a tonal SRT indicates the lexical status of tone. An additional question concerned the extent to which accuracy scores depended on phonological and phonetic properties of a language’s tone system, like its complexity, the existence of an experimental contrast in a language’s phonology, and the phonetic salience of a contrast. The results suggest that a tonal SRT is not likely to discriminate between tonal and non-tonal languages within a typologically varied group, because of the effects of specific properties of their tone systems. Future research should therefore address the first hypothesis with participants from otherwise similar tonal and non-tonal varieties of the same language, where results from a tonal SRT may make a useful contribution to the typological debate on word prosody.

    Additional information

    also published as book chapter (2023)
  • Hagoort, P. (2022). Reasoning and the brain. In M. Stokhof, & K. Stenning (Eds.), Rules, regularities, randomness. Festschrift for Michiel van Lambalgen (pp. 83-85). Amsterdam: Institute for Logic, Language and Computation.
  • Heilbron, M., Armeni, K., Schoffelen, J.-M., Hagoort, P., & De Lange, F. P. (2022). A hierarchy of linguistic predictions during natural language comprehension. Proceedings of the National Academy of Sciences of the United States of America, 119(32): e2201968119. doi:10.1073/pnas.2201968119.

    Abstract

    Understanding spoken language requires transforming ambiguous acoustic streams into a hierarchy of representations, from phonemes to meaning. It has been suggested that the brain uses prediction to guide the interpretation of incoming input. However, the role of prediction in language processing remains disputed, with disagreement about both the ubiquity and representational nature of predictions. Here, we address both issues by analyzing brain recordings of participants listening to audiobooks, and using a deep neural network (GPT-2) to precisely quantify contextual predictions. First, we establish that brain responses to words are modulated by ubiquitous predictions. Next, we disentangle model-based predictions into distinct dimensions, revealing dissociable neural signatures of predictions about syntactic category (parts of speech), phonemes, and semantics. Finally, we show that high-level (word) predictions inform low-level (phoneme) predictions, supporting hierarchical predictive processing. Together, these results underscore the ubiquity of prediction in language processing, showing that the brain spontaneously predicts upcoming language at multiple levels of abstraction.

  • Heilbron, M. (2022). Getting ahead: Prediction as a window into language, and language as a window into the predictive brain. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Hervais-Adelman, A., Kumar, U., Mishra, R., Tripathi, V., Guleria, A., Singh, J. P., & Huettig, F. (2022). How does literacy affect speech processing? Not by enhancing cortical responses to speech, but by promoting connectivity of acoustic-phonetic and graphomotor cortices. The Journal of Neuroscience, 42(47), 8826-8841. doi:10.1523/JNEUROSCI.1125-21.2022.

    Abstract

    Previous research suggests that literacy, specifically learning alphabetic letter-to-phoneme mappings, modifies online speech processing, and enhances brain responses, as indexed by the blood-oxygenation level dependent signal (BOLD), to speech in auditory areas associated with phonological processing (Dehaene et al., 2010). However, alphabets are not the only orthographic systems in use in the world, and hundreds of millions of individuals speak languages that are not written using alphabets. In order to make claims that literacy per se has broad and general consequences for brain responses to speech, one must seek confirmatory evidence from non-alphabetic literacy. To this end, we conducted a longitudinal fMRI study in India probing the effect of literacy in Devanagari, an abugida, on functional connectivity and cerebral responses to speech in 91 variously literate Hindi-speaking male and female human participants. Twenty-two completely illiterate participants underwent six months of reading and writing training. Devanagari literacy increases functional connectivity between acoustic-phonetic and graphomotor brain areas, but we find no evidence that literacy changes brain responses to speech, either in cross-sectional or longitudinal analyses. These findings show that a dramatic reconfiguration of the neurofunctional substrates of online speech processing may not be a universal result of learning to read, and suggest that the influence of writing on speech processing should also be investigated.
  • Hoeksema, N., Hagoort, P., & Vernes, S. C. (2022). Piecing together the building blocks of the vocal learning bat brain. In A. Ravignani, R. Asano, D. Valente, F. Ferretti, S. Hartmann, M. Hayashi, Y. Jadoul, M. Martins, Y. Oseki, E. D. Rodrigues, O. Vasileva, & S. Wacewicz (Eds.), The evolution of language: Proceedings of the Joint Conference on Language Evolution (JCoLE) (pp. 294-296). Nijmegen: Joint Conference on Language Evolution (JCoLE).
  • Holler, J., Drijvers, L., Rafiee, A., & Majid, A. (2022). Embodied space-pitch associations are shaped by language. Cognitive Science, 46(2): e13083. doi:10.1111/cogs.13083.

    Abstract

    Height-pitch associations are claimed to be universal and independent of language, but this claim remains controversial. The present study sheds new light on this debate with a multimodal analysis of individual sound and melody descriptions obtained in an interactive communication paradigm with speakers of Dutch and Farsi. The findings reveal that, in contrast to Dutch speakers, Farsi speakers do not use a height-pitch metaphor consistently in speech. Both Dutch and Farsi speakers’ co-speech gestures did reveal a mapping of higher pitches to higher space and lower pitches to lower space, and this gesture space-pitch mapping tended to co-occur with corresponding spatial words (high-low). However, this mapping was much weaker in Farsi speakers than Dutch speakers. This suggests that cross-linguistic differences shape the conceptualization of pitch and further calls into question the universality of height-pitch associations.

    Additional information

    supporting information
  • Huizeling, E., Arana, S., Hagoort, P., & Schoffelen, J.-M. (2022). Lexical frequency and sentence context influence the brain’s response to single words. Neurobiology of Language, 3(1), 149-179. doi:10.1162/nol_a_00054.

    Abstract

    Typical adults read remarkably quickly. Such fast reading is facilitated by brain processes that are sensitive to both word frequency and contextual constraints. It is debated as to whether these attributes have additive or interactive effects on language processing in the brain. We investigated this issue by analysing existing magnetoencephalography data from 99 participants reading intact and scrambled sentences. Using a cross-validated model comparison scheme, we found that lexical frequency predicted the word-by-word elicited MEG signal in a widespread cortical network, irrespective of sentential context. In contrast, index (ordinal word position) was more strongly encoded in sentence words, in left frontotemporal areas. This confirms that frequency influences word processing independently of predictability, and that contextual constraints affect word-by-word brain responses. With a conservative multiple comparisons correction, only the interaction between lexical frequency and surprisal survived, in anterior temporal and frontal cortex, and not between lexical frequency and entropy, nor between lexical frequency and index. However, interestingly, the uncorrected index*frequency interaction revealed an effect in left frontal and temporal cortex that reversed in time and space for intact compared to scrambled sentences. Finally, we provide evidence to suggest that, in sentences, lexical frequency and predictability may independently influence early (<150 ms) and late stages of word processing, but interact during later stages of word processing (150–250 ms), thus helping to reconcile previously contradictory eye-tracking and electrophysiological literature. Current neuro-cognitive models of reading would benefit from accounting for these differing effects of lexical frequency and predictability on different stages of word processing.
  • Huizeling, E., Peeters, D., & Hagoort, P. (2022). Prediction of upcoming speech under fluent and disfluent conditions: Eye tracking evidence from immersive virtual reality. Language, Cognition and Neuroscience, 37(4), 481-508. doi:10.1080/23273798.2021.1994621.

    Abstract

    Traditional experiments indicate that prediction is important for efficient speech processing. In three virtual reality visual world paradigm experiments, we tested whether such findings hold in naturalistic settings (Experiment 1) and provided novel insights into whether disfluencies in speech (repairs/hesitations) inform one’s predictions in rich environments (Experiments 2–3). Experiment 1 supports the claim that listeners predict upcoming speech in naturalistic environments, with higher proportions of anticipatory target fixations in predictable compared to unpredictable trials. In Experiments 2–3, disfluencies reduced anticipatory fixations towards predicted referents, compared to conjunction (Experiment 2) and fluent (Experiment 3) sentences. Unexpectedly, Experiment 2 provided no evidence that participants made new predictions from a repaired verb. Experiment 3 provided novel findings that fixations towards the speaker increase upon hearing a hesitation, supporting current theories of how hesitations influence sentence processing. Together, these findings unpack listeners’ use of visual (objects/speaker) and auditory (speech/disfluencies) information when predicting upcoming words.
  • Lai, V. T., Van Berkum, J. J. A., & Hagoort, P. (2022). Negative affect increases reanalysis of conflicts between discourse context and world knowledge. Frontiers in Communication, 7: 910482. doi:10.3389/fcomm.2022.910482.

    Abstract

    Introduction: Mood is a constant in our daily life and can permeate all levels of cognition. We examined whether and how mood influences the processing of discourse content that is relatively neutral and not loaded with emotion. During discourse processing, readers have to constantly strike a balance between what they know in long term memory and what the current discourse is about. Our general hypothesis is that mood states would affect this balance. We hypothesized that readers in a positive mood would rely more on default world knowledge, whereas readers in a negative mood would be more inclined to analyze the details in the current discourse.

    Methods: Participants were put in a positive and a negative mood via film clips, one week apart. In each session, after mood manipulation, they were presented with sentences in discourse materials. We created sentences such as “With the lights on you can see...” that end with critical words (CWs) “more” or “less”, where general knowledge supports “more”, not “less”. We then embedded each of these sentences in a wider discourse that does/does not support the CWs (a story about driving in the night vs. stargazing). EEG was recorded throughout.

    Results: The results showed that first, mood manipulation was successful in that there was a significant mood difference between sessions. Second, mood did not modulate the N400 effects. Participants in both moods detected outright semantic violations and allowed world knowledge to be overridden by discourse context. Third, mood modulated the LPC (Late Positive Component) effects, distributed in the frontal region. In negative moods, the LPC was sensitive to one-level violations. That is, CWs that were supported by only world knowledge, only discourse, and neither, elicited larger frontal LPCs, in comparison to the condition where CWs were supported by both world knowledge and discourse.

    Discussion: These results suggest that mood does not influence all processes involved in discourse processing. Specifically, mood does not influence lexical-semantic retrieval (N400), but it does influence elaborative processes for sensemaking (P600) during discourse processing. These results advance our understanding of the impact and time course of mood on discourse.

    Additional information

    Table 1.XLSX
  • Levshina, N. (2022). Frequency, informativity and word length: Insights from typologically diverse corpora. Entropy, 24(2): 280. doi:10.3390/e24020280.

    Abstract

    Zipf’s law of abbreviation, which posits a negative correlation between word frequency and length, is one of the most famous and robust cross-linguistic generalizations. At the same time, it has been shown that contextual informativity (average surprisal given previous context) is more strongly correlated with word length, although this tendency is not observed consistently, depending on several methodological choices. The present study examines a more diverse sample of languages than the previous studies (Arabic, Finnish, Hungarian, Indonesian, Russian, Spanish and Turkish). I use large web-based corpora from the Leipzig Corpora Collection to estimate word lengths in UTF-8 characters and in phonemes (for some of the languages), as well as word frequency, informativity given previous word and informativity given next word, applying different methods of bigram processing. The results show different correlations between word length and the corpus-based measure for different languages. I argue that these differences can be explained by the properties of noun phrases in a language, most importantly, by the order of heads and modifiers and their relative morphological complexity, as well as by orthographic conventions.

    Additional information

    datasets
  • Levshina, N., & Hawkins, J. A. (2022). Verb-argument lability and its correlations with other typological parameters. A quantitative corpus-based study. Linguistic Typology at the Crossroads, 2(1), 94-120. doi:10.6092/issn.2785-0943/13861.

    Abstract

    We investigate the correlations between A- and P-lability for verbal arguments with other typological parameters using large, syntactically annotated corpora of online news in 28 languages. To estimate how much lability is observed in a language, we measure associations between Verbs or Verb + Noun combinations and the alternating constructions in which they occur. Our correlational analyses show that high P-lability scores correlate strongly with the following parameters: little or no case marking; weaker associations between lexemes and the grammatical roles A and P; rigid order of Subject and Object; and a high proportion of verb-medial clauses (SVO). Low P-lability correlates with the presence of case marking, stronger associations between nouns and grammatical roles, relatively flexible ordering of Subject and Object, and verb-final order. As for A-lability, it is not correlated with any other parameters. A possible reason is that A-lability is a result of more universal discourse processes, such as deprofiling of the object, and also exhibits numerous lexical and semantic idiosyncrasies. The fact that P-lability is strongly correlated with other parameters can be interpreted as evidence for a more general typology of languages, in which some tend to have highly informative morphosyntactic and lexical cues, whereas others rely predominantly on contextual environment, which is possibly due to fixed word order. We also find that P-lability is more strongly correlated with the other parameters than any of these parameters are with each other, which means that it can be a very useful typological variable.
  • Levshina, N., & Lorenz, D. (2022). Communicative efficiency and the Principle of No Synonymy: Predictability effects and the variation of want to and wanna. Language and Cognition, 14(2), 249-274. doi:10.1017/langcog.2022.7.

    Abstract

    There is ample psycholinguistic evidence that speakers behave efficiently, using shorter and less effortful constructions when the meaning is more predictable, and longer and more effortful ones when it is less predictable. However, the Principle of No Synonymy requires that all formally distinct variants should also be functionally different. The question is how much two related constructions should overlap semantically and pragmatically in order to be used for the purposes of efficient communication. The case study focuses on want to + Infinitive and its reduced variant with wanna, which have different stylistic and sociolinguistic connotations. Bayesian mixed-effects regression modelling based on the spoken part of the British National Corpus reveals a very limited effect of efficiency: predictability increases the chances of the reduced variant only in fast speech. We conclude that efficient use of more and less effortful variants is restricted when two variants are associated with different registers or styles. This paper also pursues a methodological goal regarding missing values in speech corpora. We impute missing data based on the existing values. A comparison of regression models with and without imputed values reveals similar tendencies. This means that imputation is useful for dealing with missing values in corpora.

    Additional information

    supplementary materials
  • Levshina, N. (2022). Semantic maps of causation: New hybrid approaches based on corpora and grammar descriptions. Zeitschrift für Sprachwissenschaft, 41(1), 179-205. doi:10.1515/zfs-2021-2043.

    Abstract

    The present paper discusses connectivity and proximity maps of causative constructions and combines them with different types of typological data. In the first case study, I show how one can create a connectivity map based on a parallel corpus. This allows us to solve many problems, such as incomplete descriptions, inconsistent terminology and the problem of determining the semantic nodes. The second part focuses on proximity maps based on Multidimensional Scaling and compares the most important semantic distinctions, which are inferred from a parallel corpus of film subtitles and from grammar descriptions. The results suggest that corpus-based maps of tokens are more sensitive to cultural and genre-related differences in the prominence of specific causation scenarios than maps based on constructional types, which are described in reference grammars. The grammar-based maps also reveal a less clear structure, which can be due to incomplete semantic descriptions in grammars. Therefore, each approach has its shortcomings, which researchers need to be aware of.
  • Levshina, N. (2022). Corpus-based typology: Applications, challenges and some solutions. Linguistic Typology, 26(1), 129-160. doi:10.1515/lingty-2020-0118.

    Abstract

    Over the last few years, the number of corpora that can be used for language comparison has dramatically increased. The corpora are so diverse in their structure, size and annotation style, that a novice might not know where to start. The present paper charts this new and changing territory, providing a few landmarks, warning signs and safe paths. Although no corpus at present can replace the traditional type of typological data based on language description in reference grammars, corpora can help with diverse tasks, being particularly well suited for investigating probabilistic and gradient properties of languages and for discovering and interpreting cross-linguistic generalizations based on processing and communicative mechanisms. At the same time, the use of corpora for typological purposes has not only advantages and opportunities, but also numerous challenges. This paper also contains an empirical case study addressing two pertinent problems: the role of text types in language comparison and the problem of the word as a comparative concept.
  • Levshina, N. (2022). Comparing Bayesian and frequentist models of language variation: The case of help + (to) Infinitive. In O. Schützler, & J. Schlüter (Eds.), Data and methods in corpus linguistics – Comparative Approaches (pp. 224-258). Cambridge: Cambridge University Press.
  • Mak, M., Faber, M., & Willems, R. M. (2022). Different routes to liking: How readers arrive at narrative evaluations. Cognitive Research: Principles and implications, 7: 72. doi:10.1186/s41235-022-00419-0.

    Abstract

    When two people read the same story, they might both end up liking it very much. However, this does not necessarily mean that their reasons for liking it were identical. We therefore ask what factors contribute to “liking” a story, and—most importantly—how people vary in this respect. We found that readers like stories because they find them interesting, amusing, suspenseful and/or beautiful. However, the degree to which these components of appreciation were related to how much readers liked stories differed between individuals. Interestingly, the individual slopes of the relationships between many of the components and liking were (positively or negatively) correlated. This indicated, for instance, that individuals displaying a relatively strong relationship between interest and liking generally displayed a relatively weak relationship between sadness and liking. The individual differences in the strengths of the relationships between the components and liking were not related to individual differences in expertise, a characteristic strongly associated with aesthetic appreciation of visual art. Our work illustrates that it is important to take into consideration the fact that individuals differ in how they arrive at their evaluation of literary stories, and that it is possible to quantify these differences in empirical experiments. Our work suggests that future research should be careful about “overfitting” theories of aesthetic appreciation to an “idealized reader,” but rather take into consideration variations across individuals in the reason for liking a particular story.
  • Misersky, J. (2022). About time: Exploring the role of grammatical aspect in event cognition. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Misersky, J., Peeters, D., & Flecken, M. (2022). The potential of immersive virtual reality for the study of event perception. Frontiers in Virtual Reality, 3: 697934. doi:10.3389/frvir.2022.697934.

    Abstract

    In everyday life, we actively engage in different activities from a first-person perspective. However, experimental psychological research in the field of event perception is often limited to relatively passive, third-person computer-based paradigms. In the present study, we tested the feasibility of using immersive virtual reality in combination with eye tracking with participants in active motion. Behavioral research has shown that speakers of aspectual and non-aspectual languages attend to goals (endpoints) in motion events differently, with speakers of non-aspectual languages showing relatively more attention to goals (endpoint bias). In the current study, native speakers of German (non-aspectual) and English (aspectual) walked on a treadmill across 3-D terrains in VR, while their eye gaze was continuously tracked. Participants encountered landmark objects on the side of the road, and potential endpoint objects at the end of it. Using growth curve analysis to analyze fixation patterns over time, we found no differences in eye gaze behavior between German and English speakers. This absence of cross-linguistic differences was also observed in behavioral tasks with the same participants. Methodologically, based on the quality of the data, we conclude that our dynamic eye-tracking setup can be reliably used to study what people look at while moving through rich and dynamic environments that resemble the real world.
  • Montero-Melis, G., Van Paridon, J., Ostarek, M., & Bylund, E. (2022). No evidence for embodiment: The motor system is not needed to keep action words in working memory. Cortex, 150, 108-125. doi:10.1016/j.cortex.2022.02.006.

    Abstract

    Increasing evidence implicates the sensorimotor systems with high-level cognition, but the extent to which these systems play a functional role remains debated. Using an elegant design, Shebani and Pulvermüller (2013) reported that carrying out a demanding rhythmic task with the hands led to selective impairment of working memory for hand-related words (e.g., clap), while carrying out the same task with the feet led to selective memory impairment for foot-related words (e.g., kick). Such a striking double dissociation is acknowledged even by critics to constitute strong evidence for an embodied account of working memory. Here, we report on an attempt at a direct replication of this important finding. We followed a sequential sampling design and stopped data collection at N=77 (more than five times the original sample size), at which point the evidence for the lack of the critical selective interference effect was very strong (BF01 = 91). This finding constitutes strong evidence against a functional contribution of the motor system to keeping action words in working memory. Our finding fits into the larger emerging picture in the field of embodied cognition that sensorimotor simulations are neither required nor automatic in high-level cognitive processes, but that they may play a role depending on the task. Importantly, we urge researchers to engage in transparent, high-powered, and fully pre-registered experiments like the present one to ensure the field advances on a solid basis.
  • Morey, R. D., Kaschak, M. P., Díez-Álamo, A. M., Glenberg, A. M., Zwaan, R. A., Lakens, D., Ibáñez, A., García, A., Gianelli, C., Jones, J. L., Madden, J., Alifano, F., Bergen, B., Bloxsom, N. G., Bub, D. N., Cai, Z. G., Chartier, C. R., Chatterjee, A., Conwell, E., Cook, S. W., Davis, J. D., Evers, E., Girard, S., Harter, D., Hartung, F., Herrera, E., Huettig, F., Humphries, S., Juanchich, M., Kühne, K., Lu, S., Lynes, T., Masson, M. E. J., Ostarek, M., Pessers, S., Reglin, R., Steegen, S., Thiessen, E. D., Thomas, L. E., Trott, S., Vandekerckhove, J., Vanpaemel, W., Vlachou, M., Williams, K., & Ziv-Crispel, N. (2022). A pre-registered, multi-lab non-replication of the Action-sentence Compatibility Effect (ACE). Psychonomic Bulletin & Review, 29, 613-626. doi:10.3758/s13423-021-01927-8.

    Abstract

    The Action-sentence Compatibility Effect (ACE) is a well-known demonstration of the role of motor activity in the comprehension of language. Participants are asked to make sensibility judgments on sentences by producing movements toward the body or away from the body. The ACE is the finding that movements are faster when the direction of the movement (e.g., toward) matches the direction of the action in the to-be-judged sentence (e.g., Art gave you the pen describes action toward you). We report on a pre-registered, multi-lab replication of one version of the ACE. The results show that none of the 18 labs involved in the study observed a reliable ACE, and that the meta-analytic estimate of the size of the ACE was essentially zero.
  • Murphy, E., Woolnough, O., Rollo, P. S., Roccaforte, Z., Segaert, K., Hagoort, P., & Tandon, N. (2022). Minimal phrase composition revealed by intracranial recordings. The Journal of Neuroscience, 42(15), 3216-3227. doi:10.1523/JNEUROSCI.1575-21.2022.

    Abstract

    The ability to comprehend phrases is an essential integrative property of the brain. Here we evaluate the neural processes that enable the transition from single word processing to a minimal compositional scheme. Previous research has reported conflicting timing effects of composition, and disagreement persists with respect to inferior frontal and posterior temporal contributions. To address these issues, 19 patients (10 male, 9 female) implanted with penetrating depth or surface subdural intracranial electrodes heard auditory recordings of adjective-noun, pseudoword-noun and adjective-pseudoword phrases and judged whether the phrase matched a picture. Stimulus-dependent alterations in broadband gamma activity, low frequency power and phase-locking values across the language-dominant left hemisphere were derived. This revealed a mosaic located on the lower bank of the posterior superior temporal sulcus (pSTS), in which closely neighboring cortical sites displayed exclusive sensitivity to either lexicality or phrase structure, but not both. Distinct timings were found for effects of phrase composition (210–300 ms) and pseudoword processing (approximately 300–700 ms), and these were localized to neighboring electrodes in pSTS. The pars triangularis and temporal pole encoded anticipation of composition in broadband low frequencies, and both regions exhibited greater functional connectivity with pSTS during phrase composition. Our results suggest that the pSTS is a highly specialized region comprised of sparsely interwoven heterogeneous constituents that encodes both lower and higher level linguistic features. This hub in pSTS for minimal phrase processing may form the neural basis for the human-specific computational capacity for forming hierarchically organized linguistic structures.
  • Poort, E. D., & Rodd, J. M. (2022). Cross-lingual priming of cognates and interlingual homographs from L2 to L1. Glossa Psycholinguistics, 1(1): 11. doi:10.5070/G601147.

    Abstract

    Many word forms exist in multiple languages, and can have either the same meaning (cognates) or a different meaning (interlingual homographs). Previous experiments have shown that processing of interlingual homographs in a bilingual’s second language is slowed down by recent experience with these words in the bilingual’s native language, while processing of cognates can be speeded up (Poort et al., 2016; Poort & Rodd, 2019a). The current experiment replicated Poort and Rodd’s (2019a) Experiment 2 but switched the direction of priming: Dutch–English bilinguals (n = 106) made Dutch semantic relatedness judgements to probes related to cognates (n = 50), interlingual homographs (n = 50) and translation equivalents (n = 50) they had seen 15 minutes previously embedded in English sentences. The current experiment is the first to show that a single encounter with an interlingual homograph in one’s second language can also affect subsequent processing in one’s native language. Cross-lingual priming did not affect the cognates. The experiment also extended Poort and Rodd’s (2019a) finding of a large interlingual homograph inhibition effect in a semantic relatedness task in the participants’ L2 to their L1, but again found no evidence for a cognate facilitation effect in a semantic relatedness task. These findings extend the growing literature that emphasises the high level of interaction in a bilingual’s mental lexicon, by demonstrating the influence of L2 experience on the processing of L1 words. Data, scripts, materials and pre-registration available via https://osf.io/2swyg/?view_only=b2ba2e627f6f4eaeac87edab2b59b236.
  • Poulton, V. R., & Nieuwland, M. S. (2022). Can you hear what’s coming? Failure to replicate ERP evidence for phonological prediction. Neurobiology of Language, 3(4), 556-574. doi:10.1162/nol_a_00078.

    Abstract

    Prediction-based theories of language comprehension assume that listeners predict both the meaning and phonological form of likely upcoming words. In alleged event-related potential (ERP) demonstrations of phonological prediction, prediction-mismatching words elicit a phonological mismatch negativity (PMN), a frontocentral negativity that precedes the centroparietal N400 component. However, classification and replicability of the PMN has proven controversial, with ongoing debate on whether the PMN is a distinct component or merely an early part of the N400. In this electroencephalography (EEG) study, we therefore attempted to replicate the PMN effect and its separability from the N400, using a participant sample size (N = 48) that was more than double that of previous studies. Participants listened to sentences containing either a predictable word or an unpredictable word with/without phonological overlap with the predictable word. Preregistered analyses revealed a widely distributed negative-going ERP in response to unpredictable words in both the early (150–250 ms) and the N400 (300–500 ms) time windows. Bayes factor analysis yielded moderate evidence against a different scalp distribution of the effects in the two time windows. Although our findings do not speak against phonological prediction during sentence comprehension, they do speak against the PMN effect specifically as a marker of phonological prediction mismatch. Instead of a PMN effect, our results demonstrate the early onset of the auditory N400 effect associated with unpredictable words. Our failure to replicate further highlights the risk associated with commonly employed data-contingent analyses (e.g., analyses involving time windows or electrodes that were selected based on visual inspection) and small sample sizes in the cognitive neuroscience of language.
  • Preisig, B., & Hervais-Adelman, A. (2022). The predictive value of individual electric field modeling for transcranial alternating current stimulation induced brain modulation. Frontiers in Cellular Neuroscience, 16: 818703. doi:10.3389/fncel.2022.818703.

    Abstract

    There is considerable individual variability in the reported effectiveness of non-invasive brain stimulation. This variability has often been ascribed to differences in neuroanatomy and resulting differences in the induced electric field inside the brain. In this study, we addressed the question of whether individual differences in the induced electric field can predict the neurophysiological and behavioral consequences of gamma band tACS. In a within-subject experiment, bi-hemispheric gamma band tACS and sham stimulation were applied in alternating blocks to the participants’ superior temporal lobe, while task-evoked auditory brain activity was measured with concurrent functional magnetic resonance imaging (fMRI) and a dichotic listening task. Gamma tACS was applied with different interhemispheric phase lags. In a recent study, we showed that anti-phase tACS (180° interhemispheric phase lag), but not in-phase tACS (0° interhemispheric phase lag), selectively modulates interhemispheric brain connectivity. Using a T1 structural image of each participant’s brain, an individual simulation of the induced electric field was computed. From these simulations, we derived two predictor variables: maximal strength (average of the 10,000 voxels with the largest electric field values) and precision of the electric field (spatial correlation between the electric field and the task-evoked brain activity during sham stimulation). We found considerable variability in the individual strength and precision of the electric fields. Importantly, the strength of the electric field over the right hemisphere predicted individual differences in tACS-induced brain connectivity changes. Moreover, we found in both hemispheres a statistical trend for the effect of electric field strength on tACS-induced BOLD signal changes. In contrast, the precision of the electric field did not predict any neurophysiological measure. Further, neither strength nor precision predicted interhemispheric integration. In conclusion, we found evidence for a dose-response relationship between individual differences in electric fields and tACS-induced activity and connectivity changes in concurrent fMRI. However, the fact that this relationship was stronger in the right hemisphere suggests that the relationship between the electric field parameters, neurophysiology, and behavior may be more complex for bi-hemispheric tACS.
  • Preisig, B., Riecke, L., & Hervais-Adelman, A. (2022). Speech sound categorization: The contribution of non-auditory and auditory cortical regions. NeuroImage, 258: 119375. doi:10.1016/j.neuroimage.2022.119375.

    Abstract

    Which processes in the human brain lead to the categorical perception of speech sounds? Investigation of this question is hampered by the fact that categorical speech perception is normally confounded by acoustic differences in the stimulus. By using ambiguous sounds, however, it is possible to dissociate acoustic from perceptual stimulus representations. Twenty-seven normally hearing individuals took part in an fMRI study in which they were presented with an ambiguous syllable (intermediate between /da/ and /ga/) in one ear and with a disambiguating acoustic feature (third formant, F3) in the other ear. Multi-voxel pattern searchlight analysis was used to identify brain areas that consistently differentiated between response patterns associated with different syllable reports. By comparing responses to different stimuli with identical syllable reports and identical stimuli with different syllable reports, we disambiguated whether these regions primarily differentiated the acoustics of the stimuli or the syllable report. We found that BOLD activity patterns in left perisylvian regions (STG, SMG), left inferior frontal regions (vMC, IFG, AI), left supplementary motor cortex (SMA/pre-SMA), and right motor and somatosensory regions (M1/S1) represent listeners’ syllable report irrespective of stimulus acoustics. Most of these regions are outside of what is traditionally regarded as auditory or phonological processing areas. Our results indicate that the process of speech sound categorization implicates decision-making mechanisms and auditory-motor transformations.

    Additional information

    figures and table
  • Slivac, K. (2022). The enlanguaged brain: Cognitive and neural mechanisms of linguistic influence on perception. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Uddén, J., Hultén, A., Schoffelen, J.-M., Lam, N. H. L., Harbusch, K., Van den Bosch, A., Kempen, G., Petersson, K. M., & Hagoort, P. (2022). Supramodal sentence processing in the human brain: fMRI evidence for the influence of syntactic complexity in more than 200 participants. Neurobiology of Language, 3(4), 575-598. doi:10.1162/nol_a_00076.

    Abstract

    This study investigated two questions. One is: To what degree is sentence processing beyond single words independent of the input modality (speech vs. reading)? The second question is: Which parts of the network recruited by both modalities are sensitive to syntactic complexity? These questions were investigated by having more than 200 participants read or listen to well-formed sentences or series of unconnected words. A largely left-hemisphere frontotemporoparietal network was found to be supramodal in nature, i.e., independent of input modality. In addition, the left inferior frontal gyrus (LIFG) and the left posterior middle temporal gyrus (LpMTG) were most clearly associated with left-branching complexity. The left anterior temporal lobe (LaTL) showed the greatest sensitivity to sentences that differed in right-branching complexity. Moreover, activity in LIFG and LpMTG increased from sentence onset to end, in parallel with an increase in left-branching complexity. While LIFG, bilateral anterior temporal lobe, posterior MTG, and left inferior parietal lobe (LIPL) all contribute to the supramodal unification processes, the results suggest that these regions differ in their respective contributions to processing related to syntactic complexity. The consequences of these findings for neurobiological models of language processing are discussed.

    Additional information

    supporting information
  • Verdonschot, R. G., Phu'o'ng, H. T. L., & Tamaoka, K. (2022). Phonological encoding in Vietnamese: An experimental investigation. Quarterly Journal of Experimental Psychology, 75(7), 1355-1366. doi:10.1177/17470218211053244.

    Abstract

    In English, Dutch, and other Germanic languages the initial phonological unit used in word production has been shown to be the phoneme; conversely, others have revealed that in Chinese this is the atonal syllable and in Japanese the mora. The current paper is, to our knowledge, the first to report chronometric data on Vietnamese phonological encoding. Vietnamese, a tonal language, is of interest as, despite its Austroasiatic roots, it has clear similarities with Chinese through extended contact over a prolonged period. Four experiments (i.e., masked priming, phonological Stroop, picture naming with written distractors, picture naming with auditory distractors) have been conducted to investigate Vietnamese phonological encoding. Results show that in all four experiments both onset effects as well as whole syllable effects emerge. This indicates that the fundamental phonological encoding unit during Vietnamese language production is the phoneme despite its apparent similarities to Chinese. This result might have emerged due to tone assignment being a qualitatively different process in Vietnamese compared to Chinese.
  • Vernes, S. C., Devanna, P., Hörpel, S. G., Alvarez van Tussenbroek, I., Firzlaff, U., Hagoort, P., Hiller, M., Hoeksema, N., Hughes, G. M., Lavrichenko, K., Mengede, J., Morales, A. E., & Wiesmann, M. (2022). The pale spear‐nosed bat: A neuromolecular and transgenic model for vocal learning. Annals of the New York Academy of Sciences, 1517, 125-142. doi:10.1111/nyas.14884.

    Abstract

    Vocal learning, the ability to produce modified vocalizations via learning from acoustic signals, is a key trait in the evolution of speech. While extensively studied in songbirds, mammalian models for vocal learning are rare. Bats present a promising study system given their gregarious natures, small size, and the ability of some species to be maintained in captive colonies. We utilize the pale spear-nosed bat (Phyllostomus discolor) and report advances in establishing this species as a tractable model for understanding vocal learning. We have taken an interdisciplinary approach, aiming to provide an integrated understanding across genomics (Part I), neurobiology (Part II), and transgenics (Part III). In Part I, we generated new, high-quality genome annotations of coding genes and noncoding microRNAs to facilitate functional and evolutionary studies. In Part II, we traced connections between auditory-related brain regions and reported neuroimaging to explore the structure of the brain and gene expression patterns to highlight brain regions. In Part III, we created the first successful transgenic bats by manipulating the expression of FoxP2, a speech-related gene. These interdisciplinary approaches are facilitating a mechanistic and evolutionary understanding of mammalian vocal learning and can also contribute to other areas of investigation that utilize P. discolor or bats as study species.

    Additional information

    supplementary materials
  • Wanner-Kawahara, J., Yoshihara, M., Lupker, S. J., Verdonschot, R. G., & Nakayama, M. (2022). Morphological priming effects in L2 English verbs for Japanese-English bilinguals. Frontiers in Psychology, 13: 742965. doi:10.3389/fpsyg.2022.742965.

    Abstract

    For native (L1) English readers, masked presentations of past-tense verb primes (e.g., fell and looked) produce faster lexical decision latencies to their present-tense targets (e.g., FALL and LOOK) than orthographically related (e.g., fill and loose) or unrelated (e.g., master and bank) primes. This facilitation observed with morphologically related prime-target pairs (morphological priming) is generally taken as evidence for strong connections based on morphological relationships in the L1 lexicon. It is unclear, however, if similar, morphologically based, connections develop in non-native (L2) lexicons. Several earlier studies with L2 English readers have reported mixed results. The present experiments examine whether past-tense verb primes (both regular and irregular verbs) significantly facilitate target lexical decisions for Japanese-English bilinguals beyond any facilitation provided by prime-target orthographic similarity. Overall, past-tense verb primes facilitated lexical decisions to their present-tense targets relative to both orthographically related and unrelated primes. Replicating previous masked priming experiments with L2 readers, orthographically related primes also facilitated target recognition relative to unrelated primes, confirming that orthographic similarity facilitates L2 target recognition. The additional facilitation from past-tense verb primes beyond that provided by orthographic primes suggests that, in the L2 English lexicon, connections based on morphological relationships develop in a way that is similar to how they develop in the L1 English lexicon even though the connections and processing of lower level, lexical/orthographic information may differ. Further analyses involving L2 proficiency revealed that as L2 proficiency increased, orthographic facilitation was reduced, indicating that there is a decrease in the fuzziness in orthographic representations in the L2 lexicon with increased proficiency.

    Additional information

    supplementary material
  • Wilms, V., Drijvers, L., & Brouwer, S. (2022). The effects of iconic gestures and babble language on word intelligibility in sentence context. Journal of Speech, Language, and Hearing Research, 65, 1822-1838. doi:10.1044/2022_JSLHR-21-00387.

    Abstract

    Purpose: This study investigated to what extent iconic co-speech gestures help word intelligibility in sentence context in two different linguistic maskers (native vs. foreign). It was hypothesized that sentence recognition improves with the presence of iconic co-speech gestures and with foreign compared to native babble. Method: Thirty-two native Dutch participants performed a Dutch word recognition task in context in which they were presented with videos in which an actress uttered short Dutch sentences (e.g., Ze begint te openen, “She starts to open”). Participants were presented with a total of six audiovisual conditions: no background noise (i.e., clear condition) without gesture, no background noise with gesture, French babble without gesture, French babble with gesture, Dutch babble without gesture, and Dutch babble with gesture; and they were asked to type down what was said by the Dutch actress. The accurate identification of the action verbs at the end of the target sentences was measured. Results: The results demonstrated that performance on the task was better in the gesture compared to the nongesture conditions (i.e., gesture enhancement effect). In addition, performance was better in French babble than in Dutch babble. Conclusions: Listeners benefit from iconic co-speech gestures during communication and from foreign background speech compared to native. These insights into multimodal communication may be valuable to everyone who engages in multimodal communication and especially to a public who often works in public places where competing speech is present in the background.
  • Yang, J. (2022). Discovering the units in language cognition: From empirical evidence to a computational model. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Yang, J., Van den Bosch, A., & Frank, S. L. (2022). Unsupervised text segmentation predicts eye fixations during reading. Frontiers in Artificial Intelligence, 5: 731615. doi:10.3389/frai.2022.731615.

    Abstract

    Words typically form the basis of psycholinguistic and computational linguistic studies about sentence processing. However, recent evidence shows the basic units during reading, i.e., the items in the mental lexicon, are not always words, but could also be sub-word and supra-word units. To recognize these units, human readers require a cognitive mechanism to learn and detect them. In this paper, we assume eye fixations during reading reveal the locations of the cognitive units, and that the cognitive units are analogous to the text units discovered by unsupervised segmentation models. We predict eye fixations by model-segmented units on both English and Dutch text. The results show the model-segmented units predict eye fixations better than word units. This finding suggests that the predictive performance of model-segmented units indicates their plausibility as cognitive units. The Less-is-Better (LiB) model, which finds the units that minimize both long-term and working memory load, offers advantages both in terms of prediction score and efficiency among alternative models. Our results also suggest that modeling the least-effort principle for the management of long-term and working memory can lead to inferring cognitive units. Overall, the study supports the theory that the mental lexicon stores not only words but also smaller and larger units, suggests that fixation locations during reading depend on these units, and shows that unsupervised segmentation models can discover these units.
  • Zora, H., Gussenhoven, C., Tremblay, A., & Liu, F. (2022). Editorial: Crosstalk between intonation and lexical tones: Linguistic, cognitive and neuroscience perspectives. Frontiers in Psychology, 13: 1101499. doi:10.3389/fpsyg.2022.1101499.

    Abstract

    The interplay between categorical and continuous aspects of the speech signal remains central and yet controversial in the fields of phonetics and phonology. The division between phonological abstractions and phonetic variations has been particularly relevant to the unraveling of diverse communicative functions of pitch in the domain of prosody. Pitch influences vocal communication in two major but fundamentally different ways, and lexical and intonational tones exquisitely capture these functions. Lexical tone contrasts convey lexical meanings as well as derivational meanings at the word level and are grammatically encoded as discrete structures. Intonational tones, on the other hand, signal post-lexical meanings at the phrasal level and typically allow gradient pragmatic variations. Since categorical and gradient uses of pitch are ubiquitous and closely intertwined in their physiological and psychological processes, further research is warranted for a more detailed understanding of their structural and functional characterisations. This Research Topic addresses this matter from a wide range of perspectives, including first and second language acquisition, speech production and perception, and structural and functional diversity, working with distinct languages and experimental measures. In the following, we provide a short overview of the contributions submitted to this Research Topic.

    Additional information

    also published as book chapter (2023)
  • Araújo, S., Faísca, L., Bramão, I., Reis, A., & Petersson, K. M. (2015). Lexical and sublexical orthographic processing: An ERP study with skilled and dyslexic adult readers. Brain and Language, 141, 16-27. doi:10.1016/j.bandl.2014.11.007.

    Abstract

    This ERP study investigated the cognitive nature of the P1–N1 components during orthographic processing. We used an implicit reading task with various types of stimuli involving different amounts of sublexical or lexical orthographic processing (words, pseudohomophones, pseudowords, nonwords, and symbols), and tested average and dyslexic readers. An orthographic regularity effect (pseudowords–nonwords contrast) was observed in the average but not in the dyslexic group. This suggests an early sensitivity to the dependencies among letters in word-forms that reflect orthographic structure, while the dyslexic brain apparently fails to be appropriately sensitive to these complex features. Moreover, in the adults the N1-response may already reflect lexical access: (i) the N1 was sensitive to the familiar vs. less familiar orthographic sequence contrast; (ii) and early effects of the phonological form (words–pseudohomophones contrast) were also found. Finally, the later N320 component was attenuated in the dyslexics, suggesting suboptimal processing in later stages of phonological analysis.
  • Araújo, S., Reis, A., Petersson, K. M., & Faísca, L. (2015). Rapid automatized naming and reading performance: A meta-analysis. Journal of Educational Psychology, 107(3), 868-883. doi:10.1037/edu0000006.

    Abstract

    Evidence that rapid naming skill is associated with reading ability has become increasingly prevalent in recent years. However, there is considerable variation in the literature concerning the magnitude of this relationship. The objective of the present study was to provide a comprehensive analysis of the evidence on the relationship between rapid automatized naming (RAN) and reading performance. To this end, we conducted a meta-analysis of the correlational relationship between these 2 constructs to (a) determine the overall strength of the RAN–reading association and (b) identify variables that systematically moderate this relationship. A random-effects model analysis of data from 137 studies (857 effect sizes; 28,826 participants) indicated a moderate-to-strong relationship between RAN and reading performance (r = .43, I2 = 68.40). Further analyses revealed that RAN contributes to the 4 measures of reading (word reading, text reading, non-word reading, and reading comprehension), but higher coefficients emerged in favor of real word reading and text reading. RAN stimulus type and type of reading score were the factors with the greatest moderator effect on the magnitude of the RAN–reading relationship. The consistency of orthography and the subjects’ grade level were also found to impact this relationship, although the effect was contingent on reading outcome. It was less evident whether the subjects’ reading proficiency played a role in the relationship. Implications for future studies are discussed.
  • Asaridou, S. S., Hagoort, P., & McQueen, J. M. (2015). Effects of early bilingual experience with a tone and a non-tone language on speech-music. PLoS One, 10(12): e0144225. doi:10.1371/journal.pone.0144225.

    Abstract

    We investigated music and language processing in a group of early bilinguals who spoke a tone language and a non-tone language (Cantonese and Dutch). We assessed online speech-music processing interactions, that is, interactions that occur when speech and music are processed simultaneously in songs, with a speeded classification task. In this task, participants judged sung pseudowords either musically (based on the direction of the musical interval) or phonologically (based on the identity of the sung vowel). We also assessed longer-term effects of linguistic experience on musical ability, that is, the influence of extensive prior experience with language when processing music. These effects were assessed with a task in which participants had to learn to identify musical intervals and with four pitch-perception tasks. Our hypothesis was that due to their experience in two different languages using lexical versus intonational tone, the early Cantonese-Dutch bilinguals would outperform the Dutch control participants. In online processing, the Cantonese-Dutch bilinguals processed speech and music more holistically than controls. This effect seems to be driven by experience with a tone language, in which integration of segmental and pitch information is fundamental. Regarding longer-term effects of linguistic experience, we found no evidence for a bilingual advantage in either the music-interval learning task or the pitch-perception tasks. Together, these results suggest that being a Cantonese-Dutch bilingual does not have any measurable longer-term effects on pitch and music processing, but does have consequences for how speech and music are processed jointly.

    Additional information

    Data Availability
  • Asaridou, S. S. (2015). An ear for pitch: On the effects of experience and aptitude in processing pitch in language and music. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Baggio, G., van Lambalgen, M., & Hagoort, P. (2015). Logic as Marr's computational level: Four case studies. Topics in Cognitive Science, 7, 287-298. doi:10.1111/tops.12125.

    Abstract

    We sketch four applications of Marr's levels-of-analysis methodology to the relations between logic and experimental data in the cognitive neuroscience of language and reasoning. The first part of the paper illustrates the explanatory power of computational level theories based on logic. We show that a Bayesian treatment of the suppression task in reasoning with conditionals is ruled out by EEG data, supporting instead an analysis based on defeasible logic. Further, we describe how results from an EEG study on temporal prepositions can be reanalyzed using formal semantics, addressing a potential confound. The second part of the article demonstrates the predictive power of logical theories drawing on EEG data on processing progressive constructions and on behavioral data on conditional reasoning in people with autism. Logical theories can constrain processing hypotheses all the way down to neurophysiology, and conversely neuroscience data can guide the selection of alternative computational level models of cognition.
  • Bakker, I., Takashima, A., van Hell, J. G., Janzen, G., & McQueen, J. M. (2015). Tracking lexical consolidation with ERPs: Lexical and semantic-priming effects on N400 and LPC responses to newly-learned words. Neuropsychologia, 79, 33-41. doi:10.1016/j.neuropsychologia.2015.10.020.
  • Bašnáková, J., Van Berkum, J. J. A., Weber, K., & Hagoort, P. (2015). A job interview in the MRI scanner: How does indirectness affect addressees and overhearers? Neuropsychologia, 76, 79-91. doi:10.1016/j.neuropsychologia.2015.03.030.

    Abstract

    In using language, people not only exchange information, but also navigate their social world – for example, they can express themselves indirectly to avoid losing face. In this functional magnetic resonance imaging study, we investigated the neural correlates of interpreting face-saving indirect replies, in a situation where participants only overheard the replies as part of a conversation between two other people, as well as in a situation where the participants were directly addressed themselves. We created a fictional job interview context where indirect replies serve as a natural communicative strategy to attenuate one’s shortcomings, and asked fMRI participants to either pose scripted questions and receive answers from three putative job candidates (addressee condition) or to listen to someone else interview the same candidates (overhearer condition). In both cases, the need to evaluate the candidate ensured that participants had an active interest in comprehending the replies. Relative to direct replies, face-saving indirect replies increased activation in medial prefrontal cortex, bilateral temporo-parietal junction (TPJ), bilateral inferior frontal gyrus and bilateral middle temporal gyrus, in active overhearers and active addressees alike, with similar effect size, and comparable to findings obtained in an earlier passive listening study (Bašnáková et al., 2013). In contrast, indirectness effects in bilateral anterior insula and pregenual ACC, two regions implicated in emotional salience and empathy, were reliably stronger in addressees than in active overhearers. Our findings indicate that understanding face-saving indirect language requires additional cognitive perspective-taking and other discourse-relevant cognitive processing, to a comparable extent in active overhearers and addressees. 
Furthermore, they indicate that face-saving indirect language draws upon affective systems more in addressees than in overhearers, presumably because the addressee is the one being managed by a face-saving reply. In all, face-saving indirectness provides a window on the cognitive as well as affect-related neural systems involved in human communication.
  • Bastiaansen, M. C. M., & Hagoort, P. (2015). Frequency-based segregation of syntactic and semantic unification during online sentence level language comprehension. Journal of Cognitive Neuroscience, 27(11), 2095-2107. doi:10.1162/jocn_a_00829.

    Abstract

    During sentence level language comprehension, semantic and syntactic unification are functionally distinct operations. Nevertheless, both recruit roughly the same brain areas (spatially overlapping networks in the left frontotemporal cortex) and happen at the same time (in the first few hundred milliseconds after word onset). We tested the hypothesis that semantic and syntactic unification are segregated by means of neuronal synchronization of the functionally relevant networks in different frequency ranges: gamma (40 Hz and up) for semantic unification and lower beta (10–20 Hz) for syntactic unification. EEG power changes were quantified as participants read either correct sentences, syntactically correct though meaningless sentences (syntactic prose), or sentences that did not contain any syntactic structure (random word lists). Other sentences contained either a semantic anomaly or a syntactic violation at a critical word in the sentence. Larger EEG gamma-band power was observed for semantically coherent than for semantically anomalous sentences. Similarly, beta-band power was larger for syntactically correct sentences than for incorrect ones. These results confirm the existence of a functional dissociation in EEG oscillatory dynamics during sentence level language comprehension that is compatible with the notion of a frequency-based segregation of syntactic and semantic unification.
  • Bastos, A. M., Vezoli, J., Bosman, C. A., Schoffelen, J.-M., Oostenveld, R., Dowdall, J. R., De Weerd, P., Kennedy, H., & Fries, P. (2015). Visual areas exert feedforward and feedback influences through distinct frequency channels. Neuron, 85(2), 390-401. doi:10.1016/j.neuron.2014.12.018.

    Abstract

    Visual cortical areas subserve cognitive functions by interacting in both feedforward and feedback directions. While feedforward influences convey sensory signals, feedback influences modulate feedforward signaling according to the current behavioral context. We investigated whether these interareal influences are subserved differentially by rhythmic synchronization. We correlated frequency-specific directed influences among 28 pairs of visual areas with anatomical metrics of the feedforward or feedback character of the respective interareal projections. This revealed that in the primate visual system, feedforward influences are carried by theta-band (approximately 4 Hz) and gamma-band (approximately 60–80 Hz) synchronization, and feedback influences by beta-band (approximately 14–18 Hz) synchronization. The functional directed influences constrain a functional hierarchy similar to the anatomical hierarchy, but exhibiting task-dependent dynamic changes, in particular with regard to the hierarchical positions of frontal areas. Our results demonstrate that feedforward and feedback signaling use distinct frequency channels, suggesting that they subserve differential communication requirements.
  • Chang, F., Bauman, M., Pappert, S., & Fitz, H. (2015). Do lemmas speak German?: A verb position effect in German structural priming. Cognitive Science, 39(5), 1113-1130. doi:10.1111/cogs.12184.

    Abstract

    Lexicalized theories of syntax often assume that verb-structure regularities are mediated by lemmas, which abstract over variation in verb tense and aspect. German syntax seems to challenge this assumption, because verb position depends on tense and aspect. To examine how German speakers link these elements, a structural priming study was performed which varied syntactic structure, verb position (encoded by tense and aspect), and verb overlap. Abstract structural priming was found, both within and across verb position, but priming was larger when the verb position was the same between prime and target. Priming was boosted by verb overlap, but there was no interaction with verb position. The results can be explained by a lemma model where tense and aspect are linked to structural choices in German. Since the architecture of this lemma model is not consistent with results from English, a connectionist model was developed which could explain the cross-linguistic variation in the production system. Together, these findings support the view that language learning plays an important role in determining the nature of structural priming in different languages.
  • Cronin, K. A., Acheson, D. J., Hernández, P., & Sánchez, A. (2015). Hierarchy is Detrimental for Human Cooperation. Scientific Reports, 5: 18634. doi:10.1038/srep18634.

    Abstract

    Studies of animal behavior consistently demonstrate that the social environment impacts cooperation, yet the effect of social dynamics has been largely excluded from studies of human cooperation. Here, we introduce a novel approach inspired by nonhuman primate research to address how social hierarchies impact human cooperation. Participants competed to earn hierarchy positions and then could cooperate with another individual in the hierarchy by investing in a common effort. Cooperation was achieved if the combined investments exceeded a threshold, and the higher ranked individual distributed the spoils unless control was contested by the partner. Compared to a condition lacking hierarchy, cooperation declined in the presence of a hierarchy due to a decrease in investment by lower ranked individuals. Furthermore, hierarchy was detrimental to cooperation regardless of whether it was earned or arbitrary. These findings mirror results from nonhuman primates and demonstrate that hierarchies are detrimental to cooperation. However, these results deviate from nonhuman primate findings by demonstrating that human behavior is responsive to changing hierarchical structures and suggests partnership dynamics that may improve cooperation. This work introduces a controlled way to investigate the social influences on human behavior, and demonstrates the evolutionary continuity of human behavior with other primate species.
  • Flecken, M., Carroll, M., Weimar, K., & Von Stutterheim, C. (2015). Driving along the road or heading for the village? Conceptual differences underlying motion event encoding in French, German, and French-German L2 users. Modern Language Journal, 99(S1), 100-122. doi:10.1111/j.1540-4781.2015.12181.x.

    Abstract

    The typological contrast between verb- and satellite-framed languages (Talmy, 1985) has set the basis for many empirical studies on L2 acquisition. The current analysis goes beyond this typology by looking in detail at the conceptualization of the path of motion in a motion event. We take as a starting point the cognitive salience of specific elements of motion events that are relevant when conceptualizing space. When expressing direction in French, specific spatial relations involving the entity in motion (its alignment and its distance toward a [potential] endpoint) are relevant, given a variety of path verbs in the lexicon expressing this information (e.g., se diriger vers, avancer ‘to direct oneself toward,’ ‘to advance’). This is not the case in German (mainly manner verbs in the lexicon). In German, spatial information is packaged in adjuncts and particles and the path of motion is typically structured via features of the ground (entlanglaufen/fahren ‘to walk/drive along’) or endpoints (‘to walk/drive to/toward’). We investigate those fundamental differences in spatial conceptualization in French and German, as reflected in pre-articulatory patterns of attention allocation (measured with eye tracking) to moving entities and endpoints in motion scenes in an event description task. Our focus is on spatial conceptualization in an L2 (French L2 users of German), analyzing the extent to which these L2 users display target-like patterns or traces of L1 conceptualization transfer. Findings show that, in line with directional concepts expressed in verbs, L1 French speakers allocate more attention to entities in motion and endpoints, before utterance onset, than L1 German speakers do. The L2 German speakers pattern with L1 German speakers in the use of manner verbs, but they have not fully acquired the spatial concepts and means that structure the path of motion in the L2. This is reflected in pre-articulatory attention allocation patterns, according to which the L2 speakers pattern with native speakers of their L1 (French). The findings show a continued deep entrenchment of L1-based processing patterns and spatial frames of reference when speakers prepare for speech in an L2.
  • Flecken, M., Walbert, K., & Dijkstra, T. (2015). ‘Right now, Sophie ∗swims in the pool?!’: Brain potentials of grammatical aspect processing. Frontiers in Psychology, 6: 1764. doi:10.3389/fpsyg.2015.01764.

    Abstract

    We investigated whether brain potentials of grammatical aspect processing resemble semantic or morpho-syntactic processing, or whether they instead are characterized by an entirely distinct pattern in the same individuals. We studied aspect from the perspective of agreement between the temporal information in the context (temporal adverbials, e.g., Right now) and a morpho-syntactic marker of grammatical aspect (e.g., progressive is swimming). Participants read questions providing a temporal context that was progressive (What is Sophie doing in the pool right now?) or habitual (What does Sophie do in the pool every Monday?). Following a lead-in sentence context such as Right now, Sophie…, we measured event-related brain potentials (ERPs) time-locked to verb phrases in four different conditions, e.g., (a) is swimming (control); (b) ∗is cooking (semantic violation); (c) ∗are swimming (morpho-syntactic violation); or (d) ?swims (aspect mismatch); …in the pool. The collected ERPs show typical N400 and P600 effects for semantics and morpho-syntax, while aspect processing elicited an Early Negativity (250–350 ms). The aspect-related Negativity was short-lived and had a central scalp distribution with an anterior onset. This differentiates it not only from the semantic N400 effect, but also from the typical LAN (Left Anterior Negativity) that is frequently reported for various types of agreement processing. Moreover, aspect processing did not show a clear P600 modulation. We argue that the specific context for each item in this experiment provided a trigger for agreement checking with temporal information encoded on the verb, i.e., morphological aspect marking. The aspect-related Negativity obtained for aspect agreement mismatches reflects a violated expectation concerning verbal inflection (in the example above, the expected verb phrase was Sophie is X-ing rather than Sophie X-s in condition d). The absence of an additional P600 for aspect processing suggests that the mismatch did not require additional reintegration or processing costs. This is consistent with participants’ post hoc grammaticality judgements of the same sentences, which overall show a high acceptability of aspect mismatch sentences.

    Additional information

    data sheet 1.docx
  • Flecken, M., Athanasopoulos, P., Kuipers, J. R., & Thierry, G. (2015). On the road to somewhere: Brain potentials reflect language effects on motion event perception. Cognition, 141, 41-51. doi:10.1016/j.cognition.2015.04.006.

    Abstract

    Recent studies have identified neural correlates of language effects on perception in static domains of experience such as colour and objects. The generalization of such effects to dynamic domains like motion events remains elusive. Here, we focus on grammatical differences between languages relevant for the description of motion events and their impact on visual scene perception. Two groups of native speakers of German or English were presented with animated videos featuring a dot travelling along a trajectory towards a geometrical shape (endpoint). English is a language with grammatical aspect in which attention is drawn to trajectory and endpoint of motion events equally. German, in contrast, is a non-aspect language which highlights endpoints. We tested the comparative perceptual saliency of trajectory and endpoint of motion events by presenting motion event animations (primes) followed by a picture symbolising the event (target): In 75% of trials, the animation was followed by a mismatching picture (both trajectory and endpoint were different); in 10% of trials, only the trajectory depicted in the picture matched the prime; in 10% of trials, only the endpoint matched the prime; and in 5% of trials both trajectory and endpoint were matching, which was the condition requiring a response from the participant. In Experiment 1 we recorded event-related brain potentials elicited by the picture in native speakers of German and native speakers of English. German participants exhibited a larger P3 wave in the endpoint match than the trajectory match condition, whereas English speakers showed no P3 amplitude difference between conditions. In Experiment 2 participants performed a behavioural motion matching task using the same stimuli as those used in Experiment 1. German and English participants did not differ in response times, showing that motion event verbalisation cannot readily account for the difference in P3 amplitude found in the first experiment. We argue that, even in a non-verbal context, the grammatical properties of the native language and associated sentence-level patterns of event encoding influence motion event perception, such that attention is automatically drawn towards aspects highlighted by the grammar.
  • Francken, J. C., Meijs, E. L., Ridderinkhof, O. M., Hagoort, P., de Lange, F. P., & van Gaal, S. (2015). Manipulating word awareness dissociates feed-forward from feedback models of language-perception interactions. Neuroscience of Consciousness, 1. doi:10.1093/nc/niv003.

    Abstract

    Previous studies suggest that linguistic material can modulate visual perception, but it is unclear at which level of processing these interactions occur. Here we aim to dissociate between two competing models of language–perception interactions: a feed-forward and a feedback model. We capitalized on the fact that the models make different predictions on the role of feedback. We presented unmasked (aware) or masked (unaware) words implying motion (e.g. “rise,” “fall”), directly preceding an upward or downward visual motion stimulus. Crucially, masking leaves intact feed-forward information processing from low- to high-level regions, whereas it abolishes subsequent feedback. Under this condition, participants remained faster and more accurate when the direction implied by the motion word was congruent with the direction of the visual motion stimulus. This suggests that language–perception interactions are driven by the feed-forward convergence of linguistic and perceptual information at higher-level conceptual and decision stages.
  • Francken, J. C., Meijs, E. L., Hagoort, P., van Gaal, S., & de Lange, F. P. (2015). Exploring the automaticity of language-perception interactions: Effects of attention and awareness. Scientific Reports, 5: 17725. doi:10.1038/srep17725.

    Abstract

    Previous studies have shown that language can modulate visual perception, by biasing and/or enhancing perceptual performance. However, it is still debated where in the brain visual and linguistic information are integrated, and whether the effects of language on perception are automatic and persist even in the absence of awareness of the linguistic material. Here, we aimed to explore the automaticity of language-perception interactions and the neural loci of these interactions in an fMRI study. Participants engaged in a visual motion discrimination task (upward or downward moving dots). Before each trial, a word prime was briefly presented that implied upward or downward motion (e.g., “rise”, “fall”). These word primes strongly influenced behavior: congruent motion words sped up reaction times and improved performance relative to incongruent motion words. Neural congruency effects were only observed in the left middle temporal gyrus, showing higher activity for congruent compared to incongruent conditions. This suggests that higher-level conceptual areas rather than sensory areas are the locus of language-perception interactions. When motion words were rendered unaware by means of masking, they still affected visual motion perception, suggesting that language-perception interactions may rely on automatic feed-forward integration of perceptual and semantic material in language areas of the brain.
  • Francken, J. C., Kok, P., Hagoort, P., & De Lange, F. P. (2015). The behavioral and neural effects of language on motion perception. Journal of Cognitive Neuroscience, 27(1), 175-184. doi:10.1162/jocn_a_00682.

    Abstract

    Perception does not function as an isolated module but is tightly linked with other cognitive functions. Several studies have demonstrated an influence of language on motion perception, but it remains debated at which level of processing this modulation takes place. Some studies argue for an interaction in perceptual areas, but it is also possible that the interaction is mediated by "language areas" that integrate linguistic and visual information. Here, we investigated whether language-perception interactions were specific to the language-dominant left hemisphere by comparing the effects of language on visual material presented in the right (RVF) and left visual fields (LVF). Furthermore, we determined the neural locus of the interaction using fMRI. Participants performed a visual motion detection task. On each trial, the visual motion stimulus was presented in either the LVF or in the RVF, preceded by a centrally presented word (e.g., "rise"). The word could be congruent, incongruent, or neutral with regard to the direction of the visual motion stimulus that was presented subsequently. Participants were faster and more accurate when the direction implied by the motion word was congruent with the direction of the visual motion stimulus. Interestingly, the speed benefit was present only for motion stimuli that were presented in the RVF. We observed a neural counterpart of the behavioral facilitation effects in the left middle temporal gyrus, an area involved in semantic processing of verbal material. Together, our results suggest that semantic information about motion retrieved in language regions may automatically modulate perceptual decisions about motion.
  • Franken, M. K., McQueen, J. M., Hagoort, P., & Acheson, D. J. (2015). Assessing the link between speech perception and production through individual differences. In Proceedings of the 18th International Congress of Phonetic Sciences. Glasgow: the University of Glasgow.

    Abstract

    This study aims to test a prediction of recent theoretical frameworks in speech motor control: if speech production targets are specified in auditory terms, people with better auditory acuity should have more precise speech targets. To investigate this, we had participants perform speech perception and production tasks in a counterbalanced order. To assess speech perception acuity, we used an adaptive speech discrimination task. To assess variability in speech production, participants performed a pseudo-word reading task; formant values were measured for each recording. We predicted that speech production variability would correlate inversely with discrimination performance. The results suggest that people do vary in their production and perceptual abilities, and that better discriminators have more distinctive vowel production targets, confirming our prediction. This study highlights the importance of individual differences in the study of speech motor control, and sheds light on speech production-perception interaction.
  • Franken, M. K., Hagoort, P., & Acheson, D. J. (2015). Modulations of the auditory M100 in an Imitation Task. Brain and Language, 142, 18-23. doi:10.1016/j.bandl.2015.01.001.

    Abstract

    Models of speech production explain event-related suppression of the auditory cortical response as reflecting a comparison between auditory predictions and feedback. The present MEG study was designed to test two predictions from this framework: 1) whether the reduced auditory response varies as a function of the mismatch between prediction and feedback; 2) whether individual variation in this response is predictive of speech-motor adaptation. Participants alternated between online imitation and listening tasks. In the imitation task, participants began each trial producing the same vowel (/e/) and subsequently listened to and imitated auditorily-presented vowels varying in acoustic distance from /e/. Results replicated suppression, with a smaller M100 during speaking than listening. Although we did not find unequivocal support for the first prediction, participants with less M100 suppression were better at the imitation task. These results are consistent with the enhancement of M100 serving as an error signal to drive subsequent speech-motor adaptation.
  • Guadalupe, T., Zwiers, M. P., Wittfeld, K., Teumer, A., Vasquez, A. A., Hoogman, M., Hagoort, P., Fernandez, G., Buitelaar, J., van Bokhoven, H., Hegenscheid, K., Völzke, H., Franke, B., Fisher, S. E., Grabe, H. J., & Francks, C. (2015). Asymmetry within and around the human planum temporale is sexually dimorphic and influenced by genes involved in steroid hormone receptor activity. Cortex, 62, 41-55. doi:10.1016/j.cortex.2014.07.015.

    Abstract

    The genetic determinants of cerebral asymmetries are unknown. Sex differences in asymmetry of the planum temporale, which overlaps Wernicke’s classical language area, have been inconsistently reported. Meta-analysis of previous studies has suggested that publication bias established this sex difference in the literature. Using probabilistic definitions of cortical regions, we screened over the cerebral cortex for sexual dimorphisms of asymmetry in 2337 healthy subjects, and found the planum temporale to show the strongest sex-linked asymmetry of all regions, which was supported by two further datasets, and also by analysis with the Freesurfer package that performs automated parcellation of cerebral cortical regions. We performed a genome-wide association scan meta-analysis of planum temporale asymmetry in a pooled sample of 3095 subjects, followed by a candidate-driven approach which measured a significant enrichment of association in genes of the ‘steroid hormone receptor activity’ and ‘steroid metabolic process’ pathways. Variants in the genes and pathways identified may affect the role of the planum temporale in language cognition.
  • Hagoort, P. (2015). Het talige brein. In A. Aleman, & H. E. Hulshoff Pol (Eds.), Beeldvorming van het brein: Imaging voor psychiaters en psychologen (pp. 169-176). Utrecht: De Tijdstroom.
  • Hagoort, P. (2015). Spiegelneuronen. In J. Brockmann (Ed.), Wetenschappelijk onkruid: 179 hardnekkige ideeën die vooruitgang blokkeren (pp. 455-457). Amsterdam: Maven Publishing.
  • Holler, J., Kokal, I., Toni, I., Hagoort, P., Kelly, S. D., & Ozyurek, A. (2015). Eye’m talking to you: Speakers’ gaze direction modulates co-speech gesture processing in the right MTG. Social Cognitive & Affective Neuroscience, 10, 255-261. doi:10.1093/scan/nsu047.

    Abstract

    Recipients process information from speech and co-speech gestures, but it is currently unknown how this processing is influenced by the presence of other important social cues, especially gaze direction, a marker of communicative intent. Such cues may modulate neural activity in regions associated either with the processing of ostensive cues, such as eye gaze, or with the processing of semantic information, provided by speech and gesture.
    Participants were scanned (fMRI) while taking part in triadic communication involving two recipients and a speaker. The speaker uttered sentences that were and were not accompanied by complementary iconic gestures. Crucially, the speaker alternated her gaze direction, thus creating two recipient roles: addressed (direct gaze) vs unaddressed (averted gaze) recipient. The comprehension of Speech&Gesture relative to SpeechOnly utterances recruited middle occipital, middle temporal and inferior frontal gyri, bilaterally. The calcarine sulcus and posterior cingulate cortex were sensitive to differences between direct and averted gaze. Most importantly, Speech&Gesture utterances, but not SpeechOnly utterances, produced additional activity in the right middle temporal gyrus when participants were addressed. Marking communicative intent with gaze direction modulates the processing of speech–gesture utterances in cerebral areas typically associated with the semantic processing of multi-modal communicative acts.
  • Horschig, J. M., Smolders, R., Bonnefond, M., Schoffelen, J.-M., Van den Munckhof, P., Schuurman, P. R., Cools, R., Denys, D., & Jensen, O. (2015). Directed communication between nucleus accumbens and neocortex in humans is differentially supported by synchronization in the theta and alpha band. PLoS One, 10(9): e0138685. doi:10.1371/journal.pone.0138685.

    Abstract

    Here, we report evidence for oscillatory bi-directional interactions between the nucleus accumbens and the neocortex in humans. Six patients performed a demanding covert visual attention task while we simultaneously recorded brain activity from deep-brain electrodes implanted in the nucleus accumbens and the surface electroencephalogram (EEG). Both theta and alpha oscillations were strongly coherent with the frontal and parietal EEG during the task. Theta-band coherence increased during processing of the visual stimuli. Granger causality analysis revealed that the nucleus accumbens was communicating with the neocortex primarily in the theta-band, while the cortex was communicating with the nucleus accumbens in the alpha-band. These data are consistent with a model in which theta- and alpha-band oscillations serve dissociable roles: Prior to stimulus processing, the cortex might suppress ongoing processing in the nucleus accumbens by modulating alpha-band activity. Subsequently, upon stimulus presentation, theta oscillations might facilitate the active exchange of stimulus information from the nucleus accumbens to the cortex.
  • Jiang, J., Chen, C., Dai, B., Shi, G., Liu, L., & Lu, C. (2015). Leader emergence through interpersonal neural synchronization. Proceedings of the National Academy of Sciences of the United States of America, 112(14), 4274-4279. doi:10.1073/pnas.1422930112.

    Abstract

    The neural mechanism of leader emergence is not well understood. This study investigated (i) whether interpersonal neural synchronization (INS) plays an important role in leader emergence, and (ii) whether INS and leader emergence are associated with the frequency or the quality of communications. Eleven three-member groups were asked to perform a leaderless group discussion (LGD) task, and their brain activities were recorded via functional near infrared spectroscopy (fNIRS)-based hyperscanning. Video recordings of the discussions were coded for leadership and communication. Results showed that the INS for the leader–follower (LF) pairs was higher than that for the follower–follower (FF) pairs in the left temporo-parietal junction (TPJ), an area important for social mentalizing. Although communication frequency was higher for the LF pairs than for the FF pairs, the frequency of leader-initiated and follower-initiated communication did not differ significantly. Moreover, INS for the LF pairs was significantly higher during leader-initiated communication than during follower-initiated communications. In addition, INS for the LF pairs during leader-initiated communication was significantly correlated with the leaders’ communication skills and competence, but not their communication frequency. Finally, leadership could be successfully predicted based on INS as well as communication frequency early during the LGD (before half a minute into the task). In sum, this study found that leader emergence was characterized by high-level neural synchronization between the leader and followers and that the quality, rather than the frequency, of communications was associated with synchronization. These results suggest that leaders emerge because they are able to say the right things at the right time.
  • Kunert, R., & Slevc, L. R. (2015). A commentary on: “Neural overlap in processing music and speech”. Frontiers in Human Neuroscience, 9: 330. doi:10.3389/fnhum.2015.00330.
  • Kunert, R., Willems, R. M., Casasanto, D., Patel, A. D., & Hagoort, P. (2015). Music and language syntax interact in Broca’s Area: An fMRI study. PLoS One, 10(11): e0141069. doi:10.1371/journal.pone.0141069.

    Abstract

    Instrumental music and language are both syntactic systems, employing complex, hierarchically-structured sequences built using implicit structural norms. This organization allows listeners to understand the role of individual words or tones in the context of an unfolding sentence or melody. Previous studies suggest that the brain mechanisms of syntactic processing may be partly shared between music and language. However, functional neuroimaging evidence for anatomical overlap of brain activity involved in linguistic and musical syntactic processing has been lacking. In the present study we used functional magnetic resonance imaging (fMRI) in conjunction with an interference paradigm based on sung sentences. We show that the processing demands of musical syntax (harmony) and language syntax interact in Broca’s area in the left inferior frontal gyrus (without leading to music and language main effects). A language main effect in Broca’s area only emerged in the complex music harmony condition, suggesting that (with our stimuli and tasks) a language effect only becomes visible under conditions of increased demands on shared neural resources. In contrast to previous studies, our design allows us to rule out that the observed neural interaction is due to: (1) general attention mechanisms, as a psychoacoustic auditory anomaly behaved unlike the harmonic manipulation, (2) error processing, as the language and the music stimuli contained no structural errors. The current results thus suggest that two different cognitive domains—music and language—might draw on the same high level syntactic integration resources in Broca’s area.
  • Lai, V. T., & Curran, T. (2015). Erratum to “ERP evidence for conceptual mappings and comparison processes during the comprehension of conventional and novel metaphors” [Brain Lang. 127 (3) (2013) 484–496]. Brain and Language, 149, 148-150. doi:10.1016/j.bandl.2014.11.001.
  • Lai, V. T., Willems, R. M., & Hagoort, P. (2015). Feel between the Lines: Implied emotion from combinatorial semantics. Journal of Cognitive Neuroscience, 27(8), 1528-1541. doi:10.1162/jocn_a_00798.

    Abstract

    This study investigated the brain regions for the comprehension of implied emotion in sentences. Participants read negative sentences without negative words, for example, “The boy fell asleep and never woke up again,” and their neutral counterparts “The boy stood up and grabbed his bag.” This kind of negative sentence allows us to examine implied emotion derived at the sentence level, without associative emotion coming from word retrieval. We found that implied emotion in sentences, relative to neutral sentences, led to activation in some emotion-related areas, including the medial prefrontal cortex, the amygdala, and the insula, as well as certain language-related areas, including the inferior frontal gyrus, which has been implicated in combinatorial processing. These results suggest that the emotional network involved in implied emotion is intricately related to the network for combinatorial processing in language, supporting the view that sentence meaning is more than simply concatenating the meanings of its lexical building blocks.
  • Lai, V. T., & Narasimhan, B. (2015). Verb representation and thinking-for-speaking effects in Spanish-English bilinguals. In R. G. De Almeida, & C. Manouilidou (Eds.), Cognitive science perspectives on verb representation and processing (pp. 235-256). Cham: Springer.

    Abstract

    Speakers of English habitually encode motion events using manner-of-motion verbs (e.g., spin, roll, slide) whereas Spanish speakers rely on path-of-motion verbs (e.g., enter, exit, approach). Here, we ask whether the language-specific verb representations used in encoding motion events induce different modes of “thinking-for-speaking” in Spanish–English bilinguals. That is, assuming that the verb encodes the most salient information in the clause, do bilinguals find the path of motion to be more salient than manner of motion if they had previously described the motion event using Spanish versus English? In our study, Spanish–English bilinguals described a set of target motion events in either English or Spanish and then participated in a nonlinguistic similarity judgment task in which they viewed the target motion events individually (e.g., a ball rolling into a cave) followed by two variants: a “same-path” variant such as a ball sliding into a cave, or a “same-manner” variant such as a ball rolling away from a cave. Participants had to select one of the two variants that they judged to be more similar to the target event: The event that shared the same path of motion as the target versus the one that shared the same manner of motion. Our findings show that bilingual speakers were more likely to classify two motion events as being similar if they shared the same path of motion and if they had previously described the target motion events in Spanish versus in English. Our study provides further evidence for the “thinking-for-speaking” hypothesis by demonstrating that bilingual speakers can flexibly shift between language-specific construals of the same event “on-the-fly.”
  • Lartseva, A., Dijkstra, T., & Buitelaar, J. (2015). Emotional language processing in Autism Spectrum Disorders: A systematic review. Frontiers in Human Neuroscience, 8: 991. doi:10.3389/fnhum.2014.00991.

    Abstract

    In his first description of Autism Spectrum Disorders (ASD), Kanner emphasized emotional impairments by characterizing children with ASD as indifferent to other people, self-absorbed, emotionally cold, distanced, and retracted. Thereafter, emotional impairments became regarded as part of the social impairments of ASD, and research mostly focused on understanding how individuals with ASD recognize visual expressions of emotions from faces and body postures. However, it still remains unclear how emotions are processed outside of the visual domain. This systematic review aims to fill this gap by focusing on impairments of emotional language processing in ASD.
    We systematically searched PubMed for papers published between 1990 and 2013 using standardized search terms. Studies show that people with ASD are able to correctly classify emotional language stimuli as emotionally positive or negative. However, processing of emotional language stimuli in ASD is associated with atypical patterns of attention and memory performance, as well as abnormal physiological and neural activity. Particularly, younger children with ASD have difficulties in acquiring and developing emotional concepts, and avoid using these in discourse. These emotional language impairments were not consistently associated with age, IQ, or level of development of language skills.
    We discuss how emotional language impairments fit with existing cognitive theories of ASD, such as central coherence, executive dysfunction, and weak Theory of Mind. We conclude that emotional impairments in ASD may be broader than just a mere consequence of social impairments, and should receive more attention in future research.
  • Lewis, A. G., & Bastiaansen, M. C. M. (2015). A predictive coding framework for rapid neural dynamics during sentence-level language comprehension. Cortex, 68, 155-168. doi:10.1016/j.cortex.2015.02.014.

    Abstract

    There is a growing literature investigating the relationship between oscillatory neural dynamics measured using EEG and/or MEG, and sentence-level language comprehension. Recent proposals have suggested a strong link between predictive coding accounts of the hierarchical flow of information in the brain, and oscillatory neural dynamics in the beta and gamma frequency ranges. We propose that findings relating beta and gamma oscillations to sentence-level language comprehension might be unified under such a predictive coding account. Our suggestion is that oscillatory activity in the beta frequency range may reflect both the active maintenance of the current network configuration responsible for representing the sentence-level meaning under construction, and the top-down propagation of predictions to hierarchically lower processing levels based on that representation. In addition, we suggest that oscillatory activity in the low and middle gamma range reflects the matching of top-down predictions with bottom-up linguistic input, while evoked high gamma might reflect the propagation of bottom-up prediction errors to higher levels of the processing hierarchy. We also discuss some of the implications of this predictive coding framework, and we outline ideas for how these might be tested experimentally.
  • Lewis, A. G., Wang, L., & Bastiaansen, M. C. M. (2015). Fast oscillatory dynamics during language comprehension: Unification versus maintenance and prediction? Brain and Language, 148, 51-63. doi:10.1016/j.bandl.2015.01.003.

    Abstract

    The role of neuronal oscillations during language comprehension is not yet well understood. In this paper we review and reinterpret the functional roles of beta- and gamma-band oscillatory activity during language comprehension at the sentence and discourse level. We discuss the evidence in favor of a role for beta and gamma in unification (the unification hypothesis), and in light of mounting evidence that cannot be accounted for under this hypothesis, we explore an alternative proposal linking beta and gamma oscillations to maintenance and prediction (respectively) during language comprehension. Our maintenance/prediction hypothesis is able to account for most of the findings that are currently available relating beta and gamma oscillations to language comprehension, and is in good agreement with other proposals about the roles of beta and gamma in domain-general cognitive processing. In conclusion we discuss proposals for further testing and comparing the prediction and unification hypotheses.
  • Lockwood, G., & Dingemanse, M. (2015). Iconicity in the lab: A review of behavioural, developmental, and neuroimaging research into sound-symbolism. Frontiers in Psychology, 6: 1246. doi:10.3389/fpsyg.2015.01246.

    Abstract

    This review covers experimental approaches to sound-symbolism—from infants to adults, and from Sapir’s foundational studies to twenty-first century product naming. It synthesizes recent behavioral, developmental, and neuroimaging work into a systematic overview of the cross-modal correspondences that underpin iconic links between form and meaning. It also identifies open questions and opportunities, showing how the future course of experimental iconicity research can benefit from an integrated interdisciplinary perspective. Combining insights from psychology and neuroscience with evidence from natural languages provides us with opportunities for the experimental investigation of the role of sound-symbolism in language learning, language processing, and communication. The review finishes by describing how hypothesis-testing and model-building will help contribute to a cumulative science of sound-symbolism in human language.
  • Lockwood, G., & Tuomainen, J. (2015). Ideophones in Japanese modulate the P2 and late positive complex responses. Frontiers in Psychology, 6: 933. doi:10.3389/fpsyg.2015.00933.

    Abstract

    Sound-symbolism, or the direct link between sound and meaning, is typologically and behaviorally attested across languages. However, neuroimaging research has mostly focused on artificial non-words or individual segments, which do not represent sound-symbolism in natural language. We used EEG to compare Japanese ideophones, which are phonologically distinctive sound-symbolic lexical words, and arbitrary adverbs during a sentence reading task. Ideophones elicit a larger visual P2 response and a sustained late positive complex in comparison to arbitrary adverbs. These results and previous literature suggest that the larger P2 may indicate the integration of sound and sensory information by association in response to the distinctive phonology of ideophones. The late positive complex may reflect the facilitated lexical retrieval of ideophones in comparison to arbitrary words. This account provides new evidence that ideophones exhibit similar cross-modal correspondences to those which have been proposed for non-words and individual sounds, and that these effects are detectable in natural language.
  • Moreno, I., De Vega, M., León, I., Bastiaansen, M. C. M., Lewis, A. G., & Magyari, L. (2015). Brain dynamics in the comprehension of action-related language. A time-frequency analysis of mu rhythms. Neuroimage, 109, 50-62. doi:10.1016/j.neuroimage.2015.01.018.

    Abstract

    EEG mu rhythms (8–13 Hz) recorded at fronto-central electrodes are generally considered as markers of motor cortical activity in humans, because they are modulated when participants perform an action, when they observe another’s action, or even when they imagine performing an action. In this study, we analyzed the time-frequency (TF) modulation of mu rhythms while participants read action language (“You will cut the strawberry cake”), abstract language (“You will doubt the patient’s argument”), and perceptive language (“You will notice the bright day”). The results indicated that mu suppression at fronto-central sites is associated with action language rather than with abstract or perceptive language. Also, the largest difference between conditions occurred quite late in the sentence, while reading the first noun (contrast Action vs. Abstract) or the second noun following the action verb (contrast Action vs. Perceptive). This suggests that motor activation is associated with the integration of words across the sentence beyond the lexical processing of the action verb. Source reconstruction localized mu suppression associated with action sentences in premotor cortex (BA 6). The present study suggests (1) that the understanding of action language activates motor networks in the human brain, and (2) that this activation occurs online based on semantic integration across multiple words in the sentence.
  • Nijhoff, A. D., & Willems, R. M. (2015). Simulating fiction: Individual differences in literature comprehension revealed with fMRI. PLoS One, 10(2): e0116492. doi:10.1371/journal.pone.0116492.

    Abstract

    When we read literary fiction, we are transported to fictional places, and we feel and think along with the characters. Despite the importance of narrative in adult life and during development, the neurocognitive mechanisms underlying fiction comprehension are unclear. We used functional magnetic resonance imaging (fMRI) to investigate how individuals differently employ neural networks important for understanding others’ beliefs and intentions (mentalizing), and for sensori-motor simulation while listening to excerpts from literary novels. Localizer tasks were used to localize both the cortical motor network and the mentalizing network in participants after they listened to excerpts from literary novels. Results show that participants who had high activation in anterior medial prefrontal cortex (aMPFC; part of the mentalizing network) when listening to mentalizing content of literary fiction, had lower motor cortex activity when they listened to action-related content of the story, and vice versa. This qualifies how people differ in their engagement with fiction: some people are mostly drawn into a story by mentalizing about the thoughts and beliefs of others, whereas others engage in literature by simulating more concrete events such as actions. This study provides on-line neural evidence for the existence of qualitatively different styles of moving into literary worlds, and adds to a growing body of literature showing the potential to study narrative comprehension with neuroimaging methods.
  • Peeters, D. (2015). A social and neurobiological approach to pointing in speech and gesture. PhD Thesis, Radboud University, Nijmegen.
  • Peeters, D., Chu, M., Holler, J., Hagoort, P., & Ozyurek, A. (2015). Electrophysiological and kinematic correlates of communicative intent in the planning and production of pointing gestures and speech. Journal of Cognitive Neuroscience, 27(12), 2352-2368. doi:10.1162/jocn_a_00865.

    Abstract

    In everyday human communication, we often express our communicative intentions by manually pointing out referents in the material world around us to an addressee, often in tight synchronization with referential speech. This study investigated whether and how the kinematic form of index finger pointing gestures is shaped by the gesturer's communicative intentions and how this is modulated by the presence of concurrently produced speech. Furthermore, we explored the neural mechanisms underpinning the planning of communicative pointing gestures and speech. Two experiments were carried out in which participants pointed at referents for an addressee while the informativeness of their gestures and speech was varied. Kinematic and electrophysiological data were recorded online. It was found that participants prolonged the duration of the stroke and poststroke hold phase of their gesture to be more communicative, in particular when the gesture was carrying the main informational burden in their multimodal utterance. Frontal and P300 effects in the ERPs suggested the importance of intentional and modality-independent attentional mechanisms during the planning phase of informative pointing gestures. These findings contribute to a better understanding of the complex interplay between action, attention, intention, and language in the production of pointing gestures, a communicative act core to human interaction.
  • Peeters, D., Hagoort, P., & Ozyurek, A. (2015). Electrophysiological evidence for the role of shared space in online comprehension of spatial demonstratives. Cognition, 136, 64-84. doi:10.1016/j.cognition.2014.10.010.

    Abstract

    A fundamental property of language is that it can be used to refer to entities in the extra-linguistic physical context of a conversation in order to establish a joint focus of attention on a referent. Typological and psycholinguistic work across a wide range of languages has put forward at least two different theoretical views on demonstrative reference. Here we contrasted and tested these two accounts by investigating the electrophysiological brain activity underlying the construction of indexical meaning in comprehension. In two EEG experiments, participants watched pictures of a speaker who referred to one of two objects using speech and an index-finger pointing gesture. In contrast with separately collected native speakers’ linguistic intuitions, N400 effects showed a preference for a proximal demonstrative when speaker and addressee were in a face-to-face orientation and all possible referents were located in the shared space between them, irrespective of the physical proximity of the referent to the speaker. These findings reject egocentric proximity-based accounts of demonstrative reference, support a sociocentric approach to deixis, suggest that interlocutors construe a shared space during conversation, and imply that the psychological proximity of a referent may be more important than its physical proximity.
  • Peeters, D., Snijders, T. M., Hagoort, P., & Ozyurek, A. (2015). The role of left inferior frontal gyrus in the integration of pointing gestures and speech. In G. Ferré, & M. Tutton (Eds.), Proceedings of the 4th GESPIN - Gesture & Speech in Interaction Conference. Nantes: Université de Nantes.

    Abstract

    Comprehension of pointing gestures is fundamental to human communication. However, the neural mechanisms that subserve the integration of pointing gestures and speech in visual contexts in comprehension are unclear. Here we present the results of an fMRI study in which participants watched images of an actor pointing at an object while they listened to her referential speech. The use of a mismatch paradigm revealed that the semantic unification of pointing gesture and speech in a triadic context recruits left inferior frontal gyrus. Complementing previous findings, this suggests that left inferior frontal gyrus semantically integrates information across modalities and semiotic domains.
  • Samur, D., Lai, V. T., Hagoort, P., & Willems, R. M. (2015). Emotional context modulates embodied metaphor comprehension. Neuropsychologia, 78, 108-114. doi:10.1016/j.neuropsychologia.2015.10.003.

    Abstract

    Emotions are often expressed metaphorically, and both emotion and metaphor are ways through which abstract meaning can be grounded in language. Here we investigate specifically whether motion-related verbs when used metaphorically are differentially sensitive to a preceding emotional context, as compared to when they are used in a literal manner. Participants read stories that ended with ambiguous action/motion sentences (e.g., he got it), in which the action/motion could be interpreted metaphorically (he understood the idea) or literally (he caught the ball) depending on the preceding story. Orthogonal to the metaphorical manipulation, the stories were high or low in emotional content. The results showed that emotional context modulated the neural response in visual motion areas to the metaphorical interpretation of the sentences, but not to their literal interpretations. In addition, literal interpretations of the target sentences led to stronger activation in the visual motion areas as compared to metaphorical readings of the sentences. We interpret our results as suggesting that emotional context specifically modulates mental simulation during metaphor processing.
  • Simanova, I., Van Gerven, M. A., Oostenveld, R., & Hagoort, P. (2015). Predicting the semantic category of internally generated words from neuromagnetic recordings. Journal of Cognitive Neuroscience, 27(1), 35-45. doi:10.1162/jocn_a_00690.

    Abstract

    In this study, we explore the possibility of predicting the semantic category of words from brain signals in a free word generation task. Participants produced single words from different semantic categories in a modified semantic fluency task. A Bayesian logistic regression classifier was trained to predict the semantic category of words from single-trial MEG data. Significant classification accuracies were achieved using sensor-level MEG time series at the time interval of conceptual preparation. Semantic category prediction was also possible using source-reconstructed time series, based on minimum norm estimates of cortical activity. Brain regions that contributed most to classification on the source level were identified. These were the left inferior frontal gyrus, left middle frontal gyrus, and left posterior middle temporal gyrus. Additionally, the temporal dynamics of brain activity underlying the semantic preparation during word generation were explored. These results provide important insights about central aspects of language production.
  • Todorovic, A., Schoffelen, J.-M., van Ede, F., Maris, E., & de Lange, F. P. (2015). Temporal expectation and attention jointly modulate auditory oscillatory activity in the beta band. PLoS One, 10(3): e0120288. doi:10.1371/journal.pone.0120288.

    Abstract

    The neural response to a stimulus is influenced by endogenous factors such as expectation and attention. Current research suggests that expectation and attention exert their effects in opposite directions, where expectation decreases neural activity in sensory areas, while attention increases it. However, expectation and attention are usually studied either in isolation or confounded with each other. A recent study suggests that expectation and attention may act jointly on sensory processing, by increasing the neural response to expected events when they are attended, but decreasing it when they are unattended. Here we test this hypothesis in an auditory temporal cueing paradigm using magnetoencephalography in humans. In our study participants attended to, or away from, tones that could arrive at expected or unexpected moments. We found a decrease in auditory beta band synchrony to expected (versus unexpected) tones if they were unattended, but no difference if they were attended. Modulations in beta power were already evident prior to the expected onset times of the tones. These findings suggest that expectation and attention jointly modulate sensory processing.
  • Udden, J., & Schoffelen, J.-M. (2015). Mother of all Unification Studies (MOUS). In A. E. Konopka (Ed.), Research Report 2013 | 2014 (pp. 21-22). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.2236748.
  • Van den Bos, E., & Poletiek, F. H. (2015). Learning simple and complex artificial grammars in the presence of a semantic reference field: Effects on performance and awareness. Frontiers in Psychology, 6: 158. doi:10.3389/fpsyg.2015.00158.

    Abstract

    This study investigated whether the negative effect of complexity on artificial grammar learning could be compensated by adding semantics. Participants were exposed to exemplars from a simple or a complex finite state grammar presented with or without a semantic reference field. As expected, performance on a grammaticality judgment test was higher for the simple grammar than for the complex grammar. For the simple grammar, the results also showed that participants presented with a reference field and instructed to decode the meaning of each exemplar (decoding condition) did better than participants who memorized the exemplars without semantic referents (memorize condition). Contrary to expectations, however, there was no significant difference between the decoding condition and the memorize condition for the complex grammar. These findings indicated that the negative effect of complexity remained, despite the addition of semantics. To clarify how the presence of a reference field influenced the learning process, its effects on the acquisition of two types of knowledge (first- and second-order dependencies) and on participants’ awareness of their knowledge were examined. The results tentatively suggested that the reference field enhanced the learning of second-order dependencies. In addition, participants in the decoding condition realized when they had knowledge relevant to making a grammaticality judgment, whereas participants in the memorize condition demonstrated some knowledge of which they were unaware. These results are in line with the view that the reference field enhanced structure learning by making certain dependencies more salient. Moreover, our findings stress the influence of complexity on artificial grammar learning.

    Additional information

    data sheet 1.pdf
  • Veenstra, A., Meyer, A. S., & Acheson, D. J. (2015). Effects of parallel planning on agreement production. Acta Psychologica, 162, 29-39. doi:10.1016/j.actpsy.2015.09.011.

    Abstract

    An important issue in current psycholinguistics is how the time course of utterance planning affects the generation of grammatical structures. The current study investigated the influence of parallel activation of the components of complex noun phrases on the generation of subject-verb agreement. Specifically, the lexical interference account (Gillespie & Pearlmutter, 2011b; Solomon & Pearlmutter, 2004) predicts more agreement errors (i.e., attraction) for subject phrases in which the head and local noun mismatch in number (e.g., the apple next to the pears) when nouns are planned in parallel than when they are planned in sequence. We used a speeded picture description task that yielded sentences such as the apple next to the pears is red. The objects mentioned in the noun phrase were either semantically related or unrelated. To induce agreement errors, pictures sometimes mismatched in number. In order to manipulate the likelihood of parallel processing of the objects and to test the hypothesized relationship between parallel processing and the rate of agreement errors, the pictures were either placed close together or far apart. Analyses of the participants' eye movements and speech onset latencies indicated slower processing of the first object and stronger interference from the related (compared to the unrelated) second object in the close than in the far condition. Analyses of the agreement errors yielded an attraction effect, with more errors in mismatching than in matching conditions. However, the magnitude of the attraction effect did not differ across the close and far conditions. Thus, spatial proximity encouraged parallel processing of the pictures, which led to interference of the associated conceptual and/or lexical representation, but, contrary to the prediction, it did not lead to more attraction errors.