Publications

  • Goncharova, M. V., & Klenova, A. V. (2019). Siberian crane chick calls reflect their thermal state. Bioacoustics, 28, 115-128. doi:10.1080/09524622.2017.1399827.

    Abstract

    Chicks can convey information about their needs with calls, but it is still unknown whether there are any universal indicators of need in chick vocalizations. Previous studies have shown that in some species vocal activity and/or temporal-frequency variables of calls are related to chick state, whereas other studies did not confirm this. Here, we tested experimentally whether vocal activity and temporal-frequency variables of calls change with cooling. We studied 10 human-raised Siberian crane (Grus leucogeranus) chicks at 3–15 days of age. We found that the cooled chicks produced calls higher in fundamental frequency and power variables, longer in duration, and at a higher calling rate than the control chicks. However, we did not find significant changes in the level of entropy or the occurrence of non-linear phenomena in chick calls recorded during the experimental cooling. We suggest that the level of vocal activity is a universal indicator of need for warmth in precocial and semi-precocial birds (e.g. cranes), but not in altricial ones. We also assume that coding of needs via temporal-frequency variables of calls is typical of species whose adults cannot confuse their chicks with other chicks. Siberian cranes stay on separate territories during their breeding season, so parents do not need to check the individuality of their offspring in the home area. In this case, all call characteristics are available for other purposes and serve to communicate chicks’ vital needs.
  • Goodhew, S. C., Reynolds, K., Edwards, M., & Kidd, E. (2022). The content of gender stereotypes embedded in language use. Journal of Language and Social Psychology, 41(2), 219-231. doi:10.1177/0261927X211033930.

    Abstract

    Gender stereotypes have endured despite substantial change in gender roles. Previous work has assessed how gender stereotypes affect language production in particular interactional contexts. Here, we assessed communication biases where context was less specified: written texts to diffuse audiences. We used Latent Semantic Analysis (LSA) to computationally quantify the similarity in meaning between gendered names and stereotype-linked terms in these communications. This revealed that female names were more similar in meaning to the proscriptive (undesirable) masculine terms, such as emotional.
  • Gordon, J. K., & Clough, S. (2022). How do clinicians judge fluency in aphasia? Journal of Speech, Language, and Hearing Research, 65(4), 1521-1542. doi:10.1044/2021_JSLHR-21-00484.

    Abstract

    Purpose: Aphasia fluency is multiply determined by underlying impairments in lexical retrieval, grammatical formulation, and speech production. This poses challenges for establishing a reliable and feasible tool to measure fluency in the clinic. We examine the reliability and validity of perceptual ratings and clinical perspectives on the utility and relevance of methods used to assess fluency.
    Method: In an online survey, 112 speech-language pathologists rated spontaneous speech samples from 181 people with aphasia (PwA) on eight perceptual rating scales (overall fluency, speech rate, pausing, effort, melody, phrase length, grammaticality, and lexical retrieval) and answered questions about their current practices for assessing fluency in the clinic.
    Results: Interrater reliability for the eight perceptual rating scales ranged from fair to good. The most reliable scales were speech rate, pausing, and phrase length. Similarly, clinicians' perceived fluency ratings were most strongly correlated to objective measures of speech rate and utterance length but were also related to grammatical complexity, lexical diversity, and phonological errors. Clinicians' ratings reflected expected aphasia subtype patterns: Individuals with Broca's and transcortical motor aphasia were rated below average on fluency, whereas those with anomic, conduction, and Wernicke's aphasia were rated above average. Most respondents reported using multiple methods in the clinic to measure fluency but relying most frequently on subjective judgments.
    Conclusions: This study lends support for the use of perceptual rating scales as valid assessments of speech-language production but highlights the need for a more reliable method for clinical use. We describe next steps for developing such a tool that is clinically feasible and helps to identify the underlying deficits disrupting fluency to inform treatment targets.
  • Goriot, C. (2019). Early-English education works no miracles: Cognitive and linguistic development in mainstream, early-English, and bilingual primary-school pupils in the Netherlands. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Gretsch, P. (2003). Omission impossible?: Topic and Focus in Focal Ellipsis. In K. Schwabe, & S. Winkler (Eds.), The Interfaces: Deriving and interpreting omitted structures (pp. 341-365). Amsterdam: John Benjamins.
  • Grey, S., Schubel, L. C., McQueen, J. M., & Van Hell, J. G. (2019). Processing foreign-accented speech in a second language: Evidence from ERPs during sentence comprehension in bilinguals. Bilingualism: Language and Cognition, 22(5), 912-929. doi:10.1017/S1366728918000937.

    Abstract

    This study examined electrophysiological correlates of sentence comprehension of native-accented and foreign-accented speech in a second language (L2), for sentences produced in a foreign accent different from that associated with the listeners’ L1. Bilingual speaker-listeners process different accents in their L2 conversations, but the effects on real-time L2 sentence comprehension are unknown. Dutch–English bilinguals listened to native American-English accented sentences and foreign (and for them unfamiliarly accented) Chinese-English accented sentences while EEG was recorded. Behavioral sentence comprehension was highly accurate for both native-accented and foreign-accented sentences. ERPs showed different patterns for L2 grammar and semantic processing of native- and foreign-accented speech. For grammar, only native-accented speech elicited an Nref. For semantics, both native- and foreign-accented speech elicited an N400 effect, but with a delayed onset across both accent conditions. These findings suggest that the way listeners comprehend native- and foreign-accented sentences in their L2 depends on their familiarity with the accent.
  • Grove, J., Ripke, S., Als, T. D., Mattheisen, M., Walters, R., Won, H., Pallesen, J., Agerbo, E., Andreassen, O. A., Anney, R., Belliveau, R., Bettella, F., Buxbaum, J. D., Bybjerg-Grauholm, J., Bækved-Hansen, M., Cerrato, F., Chambert, K., Christensen, J. H., Churchhouse, C., Dellenvall, K., Demontis, D., De Rubeis, S., Devlin, B., Djurovic, S., Dumont, A., Goldstein, J., Hansen, C. S., Hauberg, M. E., Hollegaard, M. V., Hope, S., Howrigan, D. P., Huang, H., Hultman, C., Klei, L., Maller, J., Martin, J., Martin, A. R., Moran, J., Nyegaard, M., Nærland, T., Palmer, D. S., Palotie, A., Pedersen, C. B., Pedersen, M. G., Poterba, T., Poulsen, J. B., St Pourcain, B., Qvist, P., Rehnström, K., Reichenberg, A., Reichert, J., Robinson, E. B., Roeder, K., Roussos, P., Saemundsen, E., Sandin, S., Satterstrom, F. K., Smith, G. D., Stefansson, H., Stefansson, K., Steinberg, S., Stevens, C., Sullivan, P. F., Turley, P., Walters, G. B., Xu, X., Autism Spectrum Disorders Working Group of The Psychiatric Genomics Consortium, BUPGEN, Major Depressive Disorder Working Group of the Psychiatric Genomics Consortium, Me Research Team, Geschwind, D., Nordentoft, M., Hougaard, D. M., Werge, T., Mors, O., Mortensen, P. B., Neale, B. M., Daly, M. J., & Børglum, A. D. (2019). Identification of common genetic risk variants for autism spectrum disorder. Nature Genetics, 51, 431-444. doi:10.1038/s41588-019-0344-8.

    Abstract

    Autism spectrum disorder (ASD) is a highly heritable and heterogeneous group of neurodevelopmental phenotypes diagnosed in more than 1% of children. Common genetic variants contribute substantially to ASD susceptibility, but to date no individual variants have been robustly associated with ASD. With a marked sample-size increase from a unique Danish population resource, we report a genome-wide association meta-analysis of 18,381 individuals with ASD and 27,969 controls that identified five genome-wide-significant loci. Leveraging GWAS results from three phenotypes with significantly overlapping genetic architectures (schizophrenia, major depression, and educational attainment), we identified seven additional loci shared with other traits at equally strict significance levels. Dissecting the polygenic architecture, we found both quantitative and qualitative polygenic heterogeneity across ASD subtypes. These results highlight biological insights, particularly relating to neuronal function and corticogenesis, and establish that GWAS performed at scale will be much more productive in the near term in ASD.

  • Guadalupe, T., Kong, X., Akkermans, S. E. A., Fisher, S. E., & Francks, C. (2022). Relations between hemispheric asymmetries of grey matter and auditory processing of spoken syllables in 281 healthy adults. Brain Structure & Function, 227, 561-572. doi:10.1007/s00429-021-02220-z.

    Abstract

    Most people have a right-ear advantage for the perception of spoken syllables, consistent with left hemisphere dominance for speech processing. However, there is considerable variation, with some people showing left-ear advantage. The extent to which this variation is reflected in brain structure remains unclear. We tested for relations between hemispheric asymmetries of auditory processing and of grey matter in 281 adults, using dichotic listening and voxel-based morphometry. This was the largest study of this issue to date. Per-voxel asymmetry indexes were derived for each participant following registration of brain magnetic resonance images to a template that was symmetrized. The asymmetry index derived from dichotic listening was related to grey matter asymmetry in clusters of voxels corresponding to the amygdala and cerebellum lobule VI. There was also a smaller, non-significant cluster in the posterior superior temporal gyrus, a region of auditory cortex. These findings contribute to the mapping of asymmetrical structure–function links in the human brain and suggest that subcortical structures should be investigated in relation to hemispheric dominance for speech processing, in addition to auditory cortex.

  • Le Guen, O. (2003). Quand les morts reviennent, réflexion sur l'ancestralité chez les Mayas des Basses Terres. Journal de la Société des Américanistes, 89(2), 171-205.

    Abstract

    When the dead come home… Remarks on ancestor worship among the Lowland Mayas. In Amerindian ethnographic literature, ancestor worship is often mentioned, but evidence of its existence is lacking. This article tries to demonstrate that some Lowland Maya do worship ancestors, using precise criteria taken from ethnological studies of societies where ancestor worship is common and comparing them with Maya beliefs and practices. All Souls’ Day, or hanal pixan, seems to be the most significant manifestation of this cult. Our approach is comparative through time – using colonial and ethnographic data of the twentieth century – and through space – considering the uses and beliefs of two Maya groups, the Yucatec and the Lacandon Maya.
  • Guest, O., Kanayet, F. J., & Love, B. C. (2019). Gerrymandering and computational redistricting. Journal of Computational Social Science, 2, 119-131. doi:10.1007/s42001-019-00053-9.

    Abstract

    Partisan gerrymandering poses a threat to democracy. Moreover, the complexity of the districting task may exceed human capacities. One potential solution is using computational models to automate the districting process by optimizing objective and open criteria, such as how spatially compact districts are. We formulated one such model that minimised pairwise distance between voters within a district. Using US Census Bureau data, we confirmed our prediction that the difference in compactness between the computed and actual districts would be greatest for states that are large and, therefore, difficult for humans to properly district given their limited capacities. The computed solutions highlighted differences in how humans and machines solve this task with machine solutions more fully optimised and displaying emergent properties not evident in human solutions. These results suggest a division of labour in which humans debate and formulate districting criteria whereas machines optimise the criteria to draw the district boundaries. We discuss how criteria can be expanded beyond notions of compactness to include other factors, such as respecting municipal boundaries, historic communities, and relevant legislation.
  • Gullberg, M. (2003). Eye movements and gestures in human face-to-face interaction. In J. Hyönä, R. Radach, & H. Deubel (Eds.), The mind's eyes: Cognitive and applied aspects of eye movements (pp. 685-703). Oxford: Elsevier.

    Abstract

    Gestures are visuospatial events, meaning carriers, and social interactional phenomena. As such they constitute a particularly favourable area for investigating visual attention in a complex everyday situation under conditions of competitive processing. This chapter discusses visual attention to spontaneous gestures in human face-to-face interaction as explored with eye-tracking. Some basic fixation patterns are described, live and video-based settings are compared, and preliminary results on the relationship between fixations and information processing are outlined.
  • Gullberg, M., & Kita, S. (2003). Das Beachten von Gesten: Eine Studie zu Blickverhalten und Integration gestisch ausgedrückter Informationen. In Max-Planck-Gesellschaft (Ed.), Jahrbuch der Max Planck Gesellschaft 2003 (pp. 949-953). Göttingen: Vandenhoeck & Ruprecht.
  • Gullberg, M. (2003). Gestures, referents, and anaphoric linkage in learner varieties. In C. Dimroth, & M. Starren (Eds.), Information structure, linguistic structure and the dynamics of language acquisition. (pp. 311-328). Amsterdam: Benjamins.

    Abstract

    This paper discusses how the gestural modality can contribute to our understanding of anaphoric linkage in learner varieties, focusing on gestural anaphoric linkage marking the introduction, maintenance, and shift of reference in story retellings by learners of French and Swedish. The comparison of gestural anaphoric linkage in native and non-native varieties reveals what appears to be a particular learner variety of gestural cohesion, which closely reflects the characteristics of anaphoric linkage in learners' speech. Specifically, particular forms co-occur with anaphoric gestures depending on the information organisation in discourse. The typical nominal over-marking of maintained referents or topic elements in speech is mirrored by gestural (over-)marking of the same items. The paper discusses two ways in which this finding may further the understanding of anaphoric over-explicitness of learner varieties. An addressee-based communicative perspective on anaphoric linkage highlights how over-marking in gesture and speech may be related to issues of hyper-clarity and ambiguity. An alternative speaker-based perspective is also explored in which anaphoric over-marking is seen as related to L2 speech planning.
  • Gullberg, M., & Holmqvist, K. (1999). Keeping an eye on gestures: Visual perception of gestures in face-to-face communication. Pragmatics & Cognition, 7(1), 35-63. doi:10.1075/pc.7.1.04gul.

    Abstract

    Since listeners usually look at the speaker's face, gestural information has to be absorbed through peripheral visual perception. In the literature, it has been suggested that listeners look at gestures under certain circumstances: 1) when the articulation of the gesture is peripheral; 2) when the speech channel is insufficient for comprehension; and 3) when the speaker him- or herself indicates that the gesture is worthy of attention. The research reported here employs eye-tracking techniques to study the perception of gestures in face-to-face interaction. The improved control over the listener's visual channel allows us to test the validity of the above claims. We present preliminary findings substantiating claims 1 and 3, and relate them to theoretical proposals in the literature and to the issue of how visual and cognitive attention are related.
  • Gunz, P., Tilot, A. K., Wittfeld, K., Teumer, A., Shapland, C. Y., Van Erp, T. G. M., Dannemann, M., Vernot, B., Neubauer, S., Guadalupe, T., Fernandez, G., Brunner, H., Enard, W., Fallon, J., Hosten, N., Völker, U., Profico, A., Di Vincenzo, F., Manzi, G., Kelso, J., St Pourcain, B., Hublin, J.-J., Franke, B., Pääbo, S., Macciardi, F., Grabe, H. J., & Fisher, S. E. (2019). Neandertal introgression sheds light on modern human endocranial globularity. Current Biology, 29(1), 120-127. doi:10.1016/j.cub.2018.10.065.

    Abstract

    One of the features that distinguishes modern humans from our extinct relatives and ancestors is a globular shape of the braincase [1-4]. As the endocranium closely mirrors the outer shape of the brain, these differences might reflect altered neural architecture [4,5]. However, in the absence of fossil brain tissue, the underlying neuroanatomical changes as well as their genetic bases remain elusive. To better understand the biological foundations of modern human endocranial shape, we turn to our closest extinct relatives, the Neandertals. Interbreeding between modern humans and Neandertals has resulted in introgressed fragments of Neandertal DNA in the genomes of present-day non-Africans [6,7]. Based on shape analyses of fossil skull endocasts, we derive a measure of endocranial globularity from structural magnetic resonance imaging (MRI) scans of thousands of modern humans, and study the effects of introgressed fragments of Neandertal DNA on this phenotype. We find that Neandertal alleles on chromosomes 1 and 18 are associated with reduced endocranial globularity. These alleles influence expression of two nearby genes, UBR4 and PHLPP1, which are involved in neurogenesis and myelination, respectively. Our findings show how integration of fossil skull data with archaic genomics and neuroimaging can suggest developmental mechanisms that may contribute to the unique modern human endocranial shape.

  • Gur, C., & Sumer, B. (2022). Learning to introduce referents in narration is resilient to the effects of late sign language exposure. Sign Language & Linguistics, 25(2), 205-234. doi:10.1075/sll.21004.gur.

    Abstract

    The present study investigates the effects of late sign language exposure on narrative development in Turkish Sign Language (TİD) by focusing on the introductions of main characters and the linguistic strategies used in these introductions. We study these domains by comparing narrations produced by native and late signers in TİD. The results of our study reveal that late sign language exposure does not hinder the acquisition of linguistic devices to introduce main characters in narrations. Thus, their acquisition seems to be resilient to the effects of late language exposure. Our study further suggests that a two-year exposure to sign language facilitates the acquisition of these skills in signing children even in the case of late language exposure, thus providing further support for the importance of sign language exposure to develop linguistic skills for signing children.
  • Gussenhoven, C., Lu, Y.-A., Lee-Kim, S.-I., Liu, C., Rahmani, H., Riad, T., & Zora, H. (2022). The sequence recall task and lexicality of tone: Exploring tone “deafness”. Frontiers in Psychology, 13: 902569. doi:10.3389/fpsyg.2022.902569.

    Abstract

    Many perception and processing effects of the lexical status of tone have been found in behavioral, psycholinguistic, and neuroscientific research, often pitting varieties of tonal Chinese against non-tonal Germanic languages. While the linguistic and cognitive evidence for lexical tone is therefore beyond dispute, the word prosodic systems of many languages continue to escape the categorizations of typologists. One controversy concerns the existence of a typological class of “pitch accent languages,” another the underlying phonological nature of surface tone contrasts, which in some cases have been claimed to be metrical rather than tonal. We address the question whether the Sequence Recall Task (SRT), which has been shown to discriminate between languages with and without word stress, can distinguish languages with and without lexical tone. Using participants from non-tonal Indonesian, semi-tonal Swedish, and two varieties of tonal Mandarin, we ran SRTs with monosyllabic tonal contrasts to test the hypothesis that high performance in a tonal SRT indicates the lexical status of tone. An additional question concerned the extent to which accuracy scores depended on phonological and phonetic properties of a language’s tone system, like its complexity, the existence of an experimental contrast in a language’s phonology, and the phonetic salience of a contrast. The results suggest that a tonal SRT is not likely to discriminate between tonal and non-tonal languages within a typologically varied group, because of the effects of specific properties of their tone systems. Future research should therefore address the first hypothesis with participants from otherwise similar tonal and non-tonal varieties of the same language, where results from a tonal SRT may make a useful contribution to the typological debate on word prosody.

    Additional information

    also published as book chapter (2023)
  • Haagen, T., Dona, L., Bosscha, S., Zamith, B., Koetschruyter, R., & Wijnholds, G. (2022). Noun Phrase and Verb Phrase Ellipsis in Dutch: Identifying Subject-Verb Dependencies with BERTje. Computational Linguistics in the Netherlands Journal, 12, 49-63.

    Abstract

    Previous research has set out to quantify the syntactic capacity of BERTje (the Dutch equivalent of BERT) in the context of phenomena such as control verb nesting and verb raising in Dutch. Another complex language phenomenon is ellipsis, where a constituent is omitted from a sentence and can be recovered from context. Like verb raising and control verb nesting, ellipsis is suitable for evaluating BERTje’s linguistic capacity, since it requires the processing of syntactic and lexical cues to recover the elided phrases. This work outlines an approach to identifying subject-verb dependencies in Dutch sentences with verb phrase and noun phrase ellipsis using BERTje. The results inform about BERTje’s capacity to capture syntactic information, and ellipsis in particular. Understanding more about how computational models process ellipsis, and how this processing can be improved, is crucial for boosting the performance of language models, as natural language contains many instances of ellipsis. Using training data from Lassy, converted to contextualized embeddings with BERTje, a probe model is trained to identify subject-verb dependencies. The model is tested on sentences generated by a Context Free Grammar (CFG) designed to produce sentences containing ellipsis. These sentences are also converted to contextualized representations using BERTje. The results show that BERTje’s syntactic abilities are lacking, as evidenced by accuracy drops compared to baseline measures.

  • Hagoort, P., Wassenaar, M., & Brown, C. M. (2003). Syntax-related ERP-effects in Dutch. Cognitive Brain Research, 16(1), 38-50. doi:10.1016/S0926-6410(02)00208-2.

    Abstract

    In two studies, subjects were required to read Dutch sentences that in some cases contained a syntactic violation and in other cases a semantic violation. All syntactic violations were word category violations. The design excluded differential contributions of expectancy to the syntactic violation effects. The syntactic violations elicited an Anterior Negativity between 300 and 500 ms. This negativity was bilateral and had a frontal distribution. Over posterior sites the same violations elicited a P600/SPS starting at about 600 ms. The semantic violations elicited an N400 effect. The topographic distribution of the AN was more frontal than the distribution of the classical N400 effect, indicating that the underlying generators of the AN and the N400 are, at least to a certain extent, non-overlapping. Experiment 2 partly replicated the design of Experiment 1, but with differences in rate of presentation and in the distribution of items over subjects, and without semantic violations. The word category violations resulted in the same effects as were observed in Experiment 1, showing that they were independent of some of the specific parameters of Experiment 1. The discussion presents a tentative account of the functional differences in the triggering conditions of the AN and the P600/SPS.
  • Hagoort, P., Wassenaar, M., & Brown, C. M. (2003). Real-time semantic compensation in patients with agrammatic comprehension: Electrophysiological evidence for multiple-route plasticity. Proceedings of the National Academy of Sciences of the United States of America, 100(7), 4340-4345. doi:10.1073/pnas.0230613100.

    Abstract

    To understand spoken language requires that the brain provides rapid access to different kinds of knowledge, including the sounds and meanings of words, and syntax. Syntax specifies constraints on combining words in a grammatically well-formed manner. Agrammatic patients are deficient in their ability to use these constraints, due to a lesion in the perisylvian area of the language-dominant hemisphere. We report a study on real-time auditory sentence processing in agrammatic comprehenders, examining their ability to accommodate damage to the language system. We recorded event-related brain potentials (ERPs) in agrammatic comprehenders, nonagrammatic aphasics, and age-matched controls. When listening to sentences with grammatical violations, the agrammatic aphasics did not show the same syntax-related ERP effect as the two other subject groups. Instead, the waveforms of the agrammatic aphasics were dominated by a meaning-related ERP effect, presumably reflecting their attempts to achieve understanding by the use of semantic constraints. These data demonstrate that although agrammatic aphasics are impaired in their ability to exploit syntactic information in real time, they can reduce the consequences of a syntactic deficit by exploiting a semantic route. They thus provide evidence for the compensation of a syntactic deficit by a stronger reliance on another route in mapping sound onto meaning. This is a form of plasticity that we refer to as multiple-route plasticity.
  • Hagoort, P. (2022). Reasoning and the brain. In M. Stokhof, & K. Stenning (Eds.), Rules, regularities, randomness. Festschrift for Michiel van Lambalgen (pp. 83-85). Amsterdam: Institute for Logic, Language and Computation.
  • Hagoort, P. (1999). De toekomstige eeuw zonder psychologie. Psychologie Magazine, 18, 35-36.
  • Hagoort, P. (2003). De verloving tussen neurowetenschap en psychologie. In K. Hilberdink (Ed.), Interdisciplinariteit in de geesteswetenschappen (pp. 73-81). Amsterdam: KNAW.
  • Hagoort, P. (2003). Die einzigartige, grösstenteils aber unbewusste Fähigkeit der Menschen zu sprachlicher Kommunikation. In G. Kaiser (Ed.), Jahrbuch 2002-2003 / Wissenschaftszentrum Nordrhein-Westfalen (pp. 33-46). Düsseldorf: Wissenschaftszentrum Nordrhein-Westfalen.
  • Hagoort, P. (2003). Functional brain imaging. In W. J. Frawley (Ed.), International encyclopedia of linguistics (pp. 142-145). New York: Oxford University Press.
  • Hagoort, P. (2003). How the brain solves the binding problem for language: A neurocomputational model of syntactic processing. NeuroImage, 20(suppl. 1), S18-S29. doi:10.1016/j.neuroimage.2003.09.013.

    Abstract

    Syntax is one of the components in the architecture of language processing that allows the listener/reader to bind single-word information into a unified interpretation of multiword utterances. This paper discusses ERP effects that have been observed in relation to syntactic processing. The fact that these effects differ from the semantic N400 indicates that the brain honors the distinction between semantic and syntactic binding operations. Two models of syntactic processing attempt to account for syntax-related ERP effects. One type of model is serial, with a first phase that is purely syntactic in nature (syntax-first model). The other type of model is parallel and assumes that information immediately guides the interpretation process once it becomes available. This is referred to as the immediacy model. ERP evidence is presented in support of the latter model. Next, an explicit computational model is proposed to explain the ERP data. This Unification Model assumes that syntactic frames are stored in memory and retrieved on the basis of the spoken or written word form input. The syntactic frames associated with the individual lexical items are unified by a dynamic binding process into a structural representation that spans the whole utterance. On the basis of a meta-analysis of imaging studies on syntax, it is argued that the left posterior inferior frontal cortex is involved in binding syntactic frames together, whereas the left superior temporal cortex is involved in retrieval of the syntactic frames stored in memory. Lesion data that support the involvement of this left frontotemporal network in syntactic processing are discussed.
  • Hagoort, P. (2003). Interplay between syntax and semantics during sentence comprehension: ERP effects of combining syntactic and semantic violations. Journal of Cognitive Neuroscience, 15(6), 883-899. doi:10.1162/089892903322370807.

    Abstract

    This study investigated the effects of combined semantic and syntactic violations in relation to the effects of single semantic and single syntactic violations on language-related event-related brain potential (ERP) effects (N400 and P600/SPS). Syntactic violations consisted of a mismatch in grammatical gender or number features of the definite article and the noun in sentence-internal or sentence-final noun phrases (NPs). Semantic violations consisted of semantically implausible adjective–noun combinations in the same NPs. Combined syntactic and semantic violations were a summation of these two respective violation types. ERPs were recorded while subjects read the sentences with the different types of violations and the correct control sentences. ERP effects were computed relative to ERPs elicited by the sentence-internal or sentence-final nouns. The size of the N400 effect to the semantic violation was increased by an additional syntactic violation (the syntactic boost). In contrast, the size of the P600/SPS to the syntactic violation was not affected by an additional semantic violation. This suggests that in the absence of syntactic ambiguity, the assignment of syntactic structure is independent of semantic context. However, semantic integration is influenced by syntactic processing. In the sentence-final position, additional global processing consequences were obtained as a result of earlier violations in the sentence. The resulting increase in the N400 amplitude to sentence-final words was independent of the nature of the violation. A speeded anomaly detection task revealed that it takes substantially longer to detect semantic than syntactic anomalies. These results are discussed in relation to the latency and processing characteristics of the N400 and P600/SPS effects. Overall, the results reveal an asymmetry in the interplay between syntax and semantics during on-line sentence comprehension.
  • Hagoort, P., & Brown, C. M. (1999). Gender electrified: ERP evidence on the syntactic nature of gender processing. Journal of Psycholinguistic Research, 28(6), 715-728. doi:10.1023/A:1023277213129.

    Abstract

    The central issue of this study concerns the claim that the processing of gender agreement in online sentence comprehension is a syntactic rather than a conceptual/semantic process. This claim was tested for the grammatical gender agreement in Dutch between the definite article and the noun. Subjects read sentences in which the definite article and the noun had the same gender and sentences in which the gender agreement was violated. While subjects read these sentences, their electrophysiological activity was recorded via electrodes placed on the scalp. Earlier research has shown that semantic and syntactic processing events manifest themselves in different event-related brain potential (ERP) effects. Semantic integration modulates the amplitude of the so-called N400. The P600/SPS is an ERP effect that is more sensitive to syntactic processes. The violation of grammatical gender agreement was found to result in a P600/SPS. For violations in sentence-final position, an additional increase of the N400 amplitude was observed. This N400 effect is interpreted as resulting from the consequence of a syntactic violation for the sentence-final wrap-up. The overall pattern of results supports the claim that the on-line processing of gender agreement information is not a content-driven but a syntactic-form-driven process.
  • Hagoort, P. (Ed.). (2019). Human language: From genes and brains to behavior. Cambridge, MA: MIT Press.
  • Hagoort, P., & Beckmann, C. F. (2019). Key issues and future directions: The neural architecture for language. In P. Hagoort (Ed.), Human language: From genes and brains to behavior (pp. 527-532). Cambridge, MA: MIT Press.
  • Hagoort, P. (2019). Introduction. In P. Hagoort (Ed.), Human language: From genes and brains to behavior (pp. 1-6). Cambridge, MA: MIT Press.
  • Hagoort, P., & Brown, C. M. (1999). The consequences of the temporal interaction between syntactic and semantic processes for haemodynamic studies of language. NeuroImage, 9, S1024-S1024.
  • Hagoort, P., Brown, C. M., & Osterhout, L. (1999). The neurocognition of syntactic processing. In C. M. Brown, & P. Hagoort (Eds.), The neurocognition of language (pp. 273-317). Oxford: Oxford University Press.
  • Hagoort, P., Ramsey, N., Rutten, G.-J., & Van Rijen, P. (1999). The role of the left anterior temporal cortex in language processing. Brain and Language, 69, 322-325. doi:10.1006/brln.1999.2169.
  • Hagoort, P. (2019). The meaning making mechanism(s) behind the eyes and between the ears. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 375: 20190301. doi:10.1098/rstb.2019.0301.

    Abstract

    In this contribution, the following four questions are discussed: (i) where is meaning?; (ii) what is meaning?; (iii) what is the meaning of mechanism?; (iv) what are the mechanisms of meaning? I will argue that meanings are in the head. Meanings have multiple facets, but minimally one needs to make a distinction between single word meanings (lexical meaning) and the meanings of multi-word utterances. The latter ones cannot be retrieved from memory, but need to be constructed on the fly. A mechanistic account of the meaning-making mind requires an analysis at both a functional and a neural level, the reason being that these levels are causally interdependent. I will show that an analysis exclusively focusing on patterns of brain activation lacks explanatory power. Finally, I shall present an initial sketch of how the dynamic interaction between temporo-parietal areas and inferior frontal cortex might instantiate the interpretation of linguistic utterances in the context of a multimodal setting and ongoing discourse information.
  • Hagoort, P., Indefrey, P., Brown, C. M., Herzog, H., Steinmetz, H., & Seitz, R. J. (1999). The neural circuitry involved in the reading of german words and pseudowords: A PET study. Journal of Cognitive Neuroscience, 11(4), 383-398. doi:10.1162/089892999563490.

    Abstract

    Silent reading and reading aloud of German words and pseudowords were used in a PET study using (15O)butanol to examine the neural correlates of reading and of the phonological conversion of legal letter strings, with or without meaning. The results of 11 healthy, right-handed volunteers in the age range of 25 to 30 years showed activation of the lingual gyri during silent reading in comparison with viewing a fixation cross. Comparisons between the reading of words and pseudowords suggest the involvement of the middle temporal gyri in retrieving both the phonological and semantic code for words. The reading of pseudowords activates the left inferior frontal gyrus, including the ventral part of Broca’s area, to a larger extent than the reading of words. This suggests that this area might be involved in the sublexical conversion of orthographic input strings into phonological output codes. (Pre)motor areas were found to be activated during both silent reading and reading aloud. On the basis of the obtained activation patterns, it is hypothesized that the articulation of high-frequency syllables requires the retrieval of their concomitant articulatory gestures from the SMA and that the articulation of low-frequency syllables recruits the left medial premotor cortex.
  • Hagoort, P. (2019). The neurobiology of language beyond single word processing. Science, 366(6461), 55-58. doi:10.1126/science.aax0289.

    Abstract

    In this Review, I propose a multiple-network view for the neurobiological basis of distinctly human language skills. A much more complex picture of interacting brain areas emerges than in the classical neurobiological model of language. This is because using language is more than single-word processing, and much goes on beyond the information given in the acoustic or orthographic tokens that enter primary sensory cortices. This requires the involvement of multiple networks with functionally nonoverlapping contributions.

  • Hagoort, P. (1999). The uniquely human capacity for language communication: from 'pope' to [po:p] in half a second. In J. Russell, M. Murphy, T. Meyering, & M. Arbib (Eds.), Neuroscience and the person: Scientific perspectives on divine action (pp. 45-56). Berkeley, CA.
  • Hahn, L. E. (2022). Infants' perception of sound patterns in oral language play. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Hahn, L. E., Ten Buuren, M., De Nijs, M., Snijders, T. M., & Fikkert, P. (2019). Acquiring novel words in a second language through mutual play with child songs - The Noplica Energy Center. In L. Nijs, H. Van Regenmortel, & C. Arculus (Eds.), MERYC19 Counterpoints of the senses: Bodily experiences in musical learning (pp. 78-87). Ghent, Belgium: EuNet MERYC 2019.

    Abstract

    Child songs are a great source for linguistic learning. Here we explore whether children can acquire novel words in a second language by playing a game featuring child songs in a playhouse. We present data from three studies that serve as scientific proof for the functionality of one game of the playhouse: the Energy Center. For this game, three hand-bikes were mounted on a panel. When children start moving the hand-bikes, child songs start playing simultaneously. Once the children produce enough energy with the hand-bikes, the songs are additionally accompanied with the sounds of musical instruments. In our studies, children executed a picture-selection task to evaluate whether they acquired new vocabulary from the songs presented during the game. Two of our studies were run in the field, one at a Dutch and one at an Indian pre-school. The third study features data from a more controlled laboratory setting. Our results partly confirm that the Energy Center is a successful means to support vocabulary acquisition in a second language. More research with larger sample sizes and longer access to the Energy Center is needed to evaluate the overall functionality of the game. Based on informal observations at our test sites, however, we are certain that children do pick up linguistic content from the songs during play, as many of the children repeat words and phrases from songs they heard. We will follow up on these promising observations in future studies.
  • Hammarström, H. (2019). An inventory of Bantu languages. In M. Van de Velde, K. Bostoen, D. Nurse, & G. Philippson (Eds.), The Bantu languages (2nd ed.). London: Routledge.

    Abstract

    This chapter aims to provide an updated list of all Bantu languages known at present and to provide individual pointers to further information on the inventory. The area division has some correlation with what are perceived genealogical relations between Bantu languages, but they are not defined as such and do not change whenever there is an update in our understanding of genealogical relations. Given the popularity of Guthrie codes in Bantu linguistics, our listing also features a complete mapping to Guthrie codes. The language inventory listed excludes sign languages used in the Bantu area, speech registers, pidgins, drummed/whistled languages and urban youth languages. Pointers to such languages in the Bantu area are included in the continent-wide overview in Hammarström. The most important alternative names, subvarieties and spelling variants are given for each language, though such lists are necessarily incomplete and reflect some degree of arbitrary selection.
  • Han, J.-I., & Verdonschot, R. G. (2019). Spoken-word production in Korean: A non-word masked priming and phonological Stroop task investigation. Quarterly Journal of Experimental Psychology, 72(4), 901-912. doi:10.1177/1747021818770989.

    Abstract

    Speech production studies have shown that the phonological unit initially used to fill the metrical frame during phonological encoding is language-specific, that is, a phoneme for English and Dutch, an atonal syllable for Mandarin Chinese, and a mora for Japanese. However, only a few studies have chronometrically investigated speech production in Korean, and they obtained mixed results. Korean is particularly interesting as there might be both phonemic and syllabic influences during phonological encoding. The purpose of this study is to further examine the initial phonological preparation unit in Korean, employing a masked priming task (Experiment 1) and a phonological Stroop task (Experiment 2). The results showed that significant onset (and onset-plus, that is, consonant-vowel [CV]) effects were found in both experiments, but there was no compelling evidence for a prominent role for the syllable. When the prime words were presented in three different forms related to the targets, namely, without any change, with re-syllabified codas, and with nasalised codas, there were no significant differences in facilitation among the three forms. Alternatively, it is also possible that participants may not have had sufficient time to process the primes up to the point that re-syllabification or nasalisation could have been carried out. In addition, the results of a Stroop task demonstrated that the onset phoneme effect was not driven by any orthographic influence. These findings suggest that the onset segment and not the syllable is the initial (or proximate) phonological unit used in the segment-to-frame encoding process during speech planning in Korean.

  • Harmon, Z., Idemaru, K., & Kapatsinski, V. (2019). Learning mechanisms in cue reweighting. Cognition, 189, 76-88. doi:10.1016/j.cognition.2019.03.011.

    Abstract

    Feedback has been shown to be effective in shifting attention across perceptual cues to a phonological contrast in speech perception (Francis, Baldwin & Nusbaum, 2000). However, the learning mechanisms behind this process remain obscure. We compare the predictions of supervised error-driven learning (Rescorla & Wagner, 1972) and reinforcement learning (Sutton & Barto, 1998) using computational simulations. Supervised learning predicts downweighting of an informative cue when the learner receives evidence that it is no longer informative. In contrast, reinforcement learning suggests that a reduction in cue weight requires positive evidence for the informativeness of an alternative cue. Experimental evidence supports the latter prediction, implicating reinforcement learning as the mechanism behind the effect of feedback on cue weighting in speech perception. Native English listeners were exposed to either bimodal or unimodal VOT distributions spanning the unaspirated/aspirated boundary (bear/pear). VOT is the primary cue to initial stop voicing in English. However, lexical feedback in training indicated that VOT was no longer predictive of voicing. Reduction in the weight of VOT was observed only when participants could use an alternative cue, F0, to predict voicing. Frequency distributions had no effect on learning. Overall, the results suggest that attention shifting in learning the phonetic cues to phonological categories is accomplished using simple reinforcement learning principles that also guide the choice of actions in other domains.
  • Harneit, A., Braun, U., Geiger, L. S., Zang, Z., Hakobjan, M., Van Donkelaar, M. M. J., Schweiger, J. I., Schwarz, K., Gan, G., Erk, S., Heinz, A., Romanczuk‐Seiferth, N., Witt, S., Rietschel, M., Walter, H., Franke, B., Meyer‐Lindenberg, A., & Tost, H. (2019). MAOA-VNTR genotype affects structural and functional connectivity in distributed brain networks. Human Brain Mapping, 40(18), 5202-5212. doi:10.1002/hbm.24766.

    Abstract

    Previous studies have linked the low expression variant of a variable number of tandem repeat polymorphism in the monoamine oxidase A gene (MAOA‐L) to the risk for impulsivity and aggression, brain developmental abnormalities, altered cortico‐limbic circuit function, and an exaggerated neural serotonergic tone. However, the neurobiological effects of this variant on human brain network architecture are incompletely understood. We studied healthy individuals and used multimodal neuroimaging (sample size range: 219–284 across modalities) and network‐based statistics (NBS) to probe the specificity of MAOA‐L‐related connectomic alterations to cortical‐limbic circuits and the emotion processing domain. We assessed the spatial distribution of affected links across several neuroimaging tasks and data modalities to identify potential alterations in network architecture. Our results revealed a distributed network of node links with a significantly increased connectivity in MAOA‐L carriers compared to the carriers of the high expression (H) variant. The hyperconnectivity phenotype primarily consisted of between‐lobe (“anisocoupled”) network links and showed a pronounced involvement of frontal‐temporal connections. Hyperconnectivity was observed across functional magnetic resonance imaging (fMRI) of implicit emotion processing (pFWE = .037), resting‐state fMRI (pFWE = .022), and diffusion tensor imaging (pFWE = .044) data, while no effects were seen in fMRI data of another cognitive domain, that is, spatial working memory (pFWE = .540). These observations are in line with prior research on the MAOA‐L variant and complement these existing data by novel insights into the specificity and spatial distribution of the neurogenetic effects. Our work highlights the value of multimodal network connectomic approaches for imaging genetics.
  • Haun, D. B. M. (2003). What's so special about spatial cognition. De Psychonoom, 18, 3-4.
  • Haun, D. B. M., & Waller, D. (2003). Alignment task. In N. J. Enfield (Ed.), Field research manual 2003, part I: Multimodal interaction, space, event representation (pp. 39-48). Nijmegen: Max Planck Institute for Psycholinguistics.
  • Haun, D. B. M. (2003). Path integration. In N. J. Enfield (Ed.), Field research manual 2003, part I: Multimodal interaction, space, event representation (pp. 33-38). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.877644.
  • Haun, D. B. M. (2003). Spatial updating. In N. J. Enfield (Ed.), Field research manual 2003, part I: Multimodal interaction, space, event representation (pp. 49-56). Nijmegen: Max Planck Institute for Psycholinguistics.
  • Haworth, S., Shapland, C. Y., Hayward, C., Prins, B. P., Felix, J. F., Medina-Gomez, C., Rivadeneira, F., Wang, C., Ahluwalia, T. S., Vrijheid, M., Guxens, M., Sunyer, J., Tachmazidou, I., Walter, K., Iotchkova, V., Jackson, A., Cleal, L., Huffmann, J., Min, J. L., Sass, L., Timmers, P. R. H. J., UK10K consortium, Davey Smith, G., Fisher, S. E., Wilson, J. F., Cole, T. J., Fernandez-Orth, D., Bønnelykke, K., Bisgaard, H., Pennell, C. E., Jaddoe, V. W. V., Dedoussis, G., Timpson, N. J., Zeggini, E., Vitart, V., & St Pourcain, B. (2019). Low-frequency variation in TP53 has large effects on head circumference and intracranial volume. Nature Communications, 10: 357. doi:10.1038/s41467-018-07863-x.

    Abstract

    Cranial growth and development is a complex process which affects the closely related traits of head circumference (HC) and intracranial volume (ICV). The underlying genetic influences affecting these traits during the transition from childhood to adulthood are little understood, but might include both age-specific genetic influences and low-frequency genetic variation. To understand these influences, we model the developmental genetic architecture of HC, showing this is genetically stable and correlated with genetic determinants of ICV. Investigating up to 46,000 children and adults of European descent, we identify association with final HC and/or final ICV+HC at 9 novel common and low-frequency loci, illustrating that genetic variation from a wide allele frequency spectrum contributes to cranial growth. The largest effects are reported for low-frequency variants within TP53, with 0.5 cm wider heads in increaser-allele carriers versus non-carriers during mid-childhood, suggesting a previously unrecognized role of TP53 transcripts in human cranial development.

  • Hayano, K. (2003). Self-presentation as a face-threatening act: A comparative study of self-oriented topic introduction in English and Japanese. Veritas, 24, 45-58.
  • Heesen, R., Fröhlich, M., Sievers, C., Woensdregt, M., & Dingemanse, M. (2022). Coordinating social action: A primer for the cross-species investigation of communicative repair. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 377(1859): 20210110. doi:10.1098/rstb.2021.0110.

    Abstract

    Human joint action is inherently cooperative, manifested in the collaborative efforts of participants to minimize communicative trouble through interactive repair. Although interactive repair requires sophisticated cognitive abilities, it can be dissected into basic building blocks shared with non-human animal species. A review of the primate literature shows that interactionally contingent signal sequences are at least common among species of nonhuman great apes, suggesting a gradual evolution of repair. To pioneer a cross-species assessment of repair, this paper aims at (i) identifying necessary precursors of human interactive repair; (ii) proposing a coding framework for its comparative study in humans and non-human species; and (iii) using this framework to analyse examples of interactions of humans (adults/children) and non-human great apes. We hope this paper will serve as a primer for cross-species comparisons of communicative breakdowns and how they are repaired.
  • Heilbron, M., Ehinger, B., Hagoort, P., & De Lange, F. P. (2019). Tracking naturalistic linguistic predictions with deep neural language models. In Proceedings of the 2019 Conference on Cognitive Computational Neuroscience (pp. 424-427). doi:10.32470/CCN.2019.1096-0.

    Abstract

    Prediction in language has traditionally been studied using simple designs in which neural responses to expected and unexpected words are compared in a categorical fashion. However, these designs have been contested as being ‘prediction encouraging’, potentially exaggerating the importance of prediction in language understanding. A few recent studies have begun to address these worries by using model-based approaches to probe the effects of linguistic predictability in naturalistic stimuli (e.g. continuous narrative). However, these studies so far only looked at very local forms of prediction, using models that take no more than the prior two words into account when computing a word’s predictability. Here, we extend this approach using a state-of-the-art neural language model that can take roughly 500 times longer linguistic contexts into account. Predictability estimates from the neural network offer a much better fit to EEG data from subjects listening to naturalistic narrative than simpler models, and reveal strong surprise responses akin to the P200 and N400. These results show that predictability effects in language are not a side-effect of simple designs, and demonstrate the practical use of recent advances in AI for the cognitive neuroscience of language.
  • Heilbron, M., Armeni, K., Schoffelen, J.-M., Hagoort, P., & De Lange, F. P. (2022). A hierarchy of linguistic predictions during natural language comprehension. Proceedings of the National Academy of Sciences of the United States of America, 119(32): e2201968119. doi:10.1073/pnas.2201968119.

    Abstract

    Understanding spoken language requires transforming ambiguous acoustic streams into a hierarchy of representations, from phonemes to meaning. It has been suggested that the brain uses prediction to guide the interpretation of incoming input. However, the role of prediction in language processing remains disputed, with disagreement about both the ubiquity and representational nature of predictions. Here, we address both issues by analyzing brain recordings of participants listening to audiobooks, and using a deep neural network (GPT-2) to precisely quantify contextual predictions. First, we establish that brain responses to words are modulated by ubiquitous predictions. Next, we disentangle model-based predictions into distinct dimensions, revealing dissociable neural signatures of predictions about syntactic category (parts of speech), phonemes, and semantics. Finally, we show that high-level (word) predictions inform low-level (phoneme) predictions, supporting hierarchical predictive processing. Together, these results underscore the ubiquity of prediction in language processing, showing that the brain spontaneously predicts upcoming language at multiple levels of abstraction.

  • Heilbron, M. (2022). Getting ahead: Prediction as a window into language, and language as a window into the predictive brain. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Heim, F. (2022). Singing is silver, hearing is gold: Impacts of local FoxP1 knockdowns on auditory perception and gene expression in female zebra finches. PhD Thesis, Leiden University, Leiden.
  • Hellwig, B. (2003). The grammatical coding of postural semantics in Goemai (a West Chadic language of Nigeria). PhD Thesis, Radboud University Nijmegen, Nijmegen. doi:10.17617/2.58463.
  • Heritage, J., & Stivers, T. (1999). Online commentary in acute medical visits: A method of shaping patient expectations. Social Science and Medicine, 49(11), 1501-1517. doi:10.1016/S0277-9536(99)00219-1.
  • Hersh, T. A., Gero, S., Rendell, L., Cantor, M., Weilgart, L., Amano, M., Dawson, S. M., Slooten, E., Johnson, C. M., Kerr, I., Payne, R., Rogan, A., Antunes, R., Andrews, O., Ferguson, E. L., Hom-Weaver, C. A., Norris, T. F., Barkley, Y. M., Merkens, K. P., Oleson, E. M., Doniol-Valcroze, T., Pilkington, J. F., Gordon, J., Fernandes, M., Guerra, M., Hickmott, L., & Whitehead, H. (2022). Evidence from sperm whale clans of symbolic marking in non-human cultures. Proceedings of the National Academy of Sciences of the United States of America, 119(37): e2201692119. doi:10.1073/pnas.2201692119.

    Abstract

    Culture, a pillar of the remarkable ecological success of humans, is increasingly recognized as a powerful force structuring nonhuman animal populations. A key gap between these two types of culture is quantitative evidence of symbolic markers—seemingly arbitrary traits that function as reliable indicators of cultural group membership to conspecifics. Using acoustic data collected from 23 Pacific Ocean locations, we provide quantitative evidence that certain sperm whale acoustic signals exhibit spatial patterns consistent with a symbolic marker function. Culture segments sperm whale populations into behaviorally distinct clans, which are defined based on dialects of stereotyped click patterns (codas). We classified 23,429 codas into types using contaminated mixture models and hierarchically clustered coda repertoires into seven clans based on similarities in coda usage; then we evaluated whether coda usage varied with geographic distance within clans or with spatial overlap between clans. Similarities in within-clan usage of both “identity codas” (coda types diagnostic of clan identity) and “nonidentity codas” (coda types used by multiple clans) decrease as space between repertoire recording locations increases. However, between-clan similarity in identity, but not nonidentity, coda usage decreases as clan spatial overlap increases. This matches expectations if sympatry is related to a measurable pressure to diversify to make cultural divisions sharper, thereby providing evidence that identity codas function as symbolic markers of clan identity. Our study provides quantitative evidence of arbitrary traits, resembling human ethnic markers, conveying cultural identity outside of humans, and highlights remarkable similarities in the distributions of human ethnolinguistic groups and sperm whale clans.
  • Hervais-Adelman, A., Kumar, U., Mishra, R., Tripathi, V., Guleria, A., Singh, J. P., & Huettig, F. (2022). How does literacy affect speech processing? Not by enhancing cortical responses to speech, but by promoting connectivity of acoustic-phonetic and graphomotor cortices. Journal of Neuroscience, 42(47), 8826-8841. doi:10.1523/JNEUROSCI.1125-21.2022.

    Abstract

    Previous research suggests that literacy, specifically learning alphabetic letter-to-phoneme mappings, modifies online speech processing, and enhances brain responses, as indexed by the blood-oxygenation level dependent signal (BOLD), to speech in auditory areas associated with phonological processing (Dehaene et al., 2010). However, alphabets are not the only orthographic systems in use in the world, and hundreds of millions of individuals speak languages that are not written using alphabets. In order to make claims that literacy per se has broad and general consequences for brain responses to speech, one must seek confirmatory evidence from non-alphabetic literacy. To this end, we conducted a longitudinal fMRI study in India probing the effect of literacy in Devanagari, an abugida, on functional connectivity and cerebral responses to speech in 91 variously literate Hindi-speaking male and female human participants. Twenty-two completely illiterate participants underwent six months of reading and writing training. Devanagari literacy increases functional connectivity between acoustic-phonetic and graphomotor brain areas, but we find no evidence that literacy changes brain responses to speech, either in cross-sectional or longitudinal analyses. These findings show that a dramatic reconfiguration of the neurofunctional substrates of online speech processing may not be a universal result of learning to read, and suggest that the influence of writing on speech processing should also be investigated.
  • Hervais-Adelman, A., Kumar, U., Mishra, R. K., Tripathi, V. N., Guleria, A., Singh, J. P., Eisner, F., & Huettig, F. (2019). Learning to read recycles visual cortical networks without destruction. Science Advances, 5(9): eaax0262. doi:10.1126/sciadv.aax0262.

    Abstract

    Learning to read is associated with the appearance of an orthographically sensitive brain region known as the visual word form area. It has been claimed that development of this area proceeds by impinging upon territory otherwise available for the processing of culturally relevant stimuli such as faces and houses. In a large-scale functional magnetic resonance imaging study of a group of individuals of varying degrees of literacy (from completely illiterate to highly literate), we examined cortical responses to orthographic and nonorthographic visual stimuli. We found that literacy enhances responses to other visual input in early visual areas and enhances representational similarity between text and faces, without reducing the extent of response to nonorthographic input. Thus, acquisition of literacy in childhood recycles existing object representation mechanisms but without destructive competition.

  • Heyselaar, E., & Segaert, K. (2019). Memory encoding of syntactic information involves domain-general attentional resources. Evidence from dual-task studies. Quarterly Journal of Experimental Psychology, 72(6), 1285-1296. doi:10.1177/1747021818801249.

    Abstract

    We investigate the type of attention (domain-general or language-specific) used during syntactic processing. We focus on syntactic priming: In this task, participants listen to a sentence that describes a picture (prime sentence), followed by a picture the participants need to describe (target sentence). We measure the proportion of times participants use the syntactic structure they heard in the prime sentence to describe the current target sentence as a measure of syntactic processing. Participants simultaneously conducted a motion-object tracking (MOT) task, a task commonly used to tax domain-general attentional resources. We manipulated the number of objects the participant had to track; we thus measured participants’ ability to process syntax while their attention is not, slightly, or overly taxed. Performance in the MOT task was significantly worse when conducted as a dual task than as a single task. We observed an inverted U-shaped curve on priming magnitude when conducting the MOT task concurrently with prime sentences (i.e., memory encoding), but no effect when conducted with target sentences (i.e., memory retrieval). Our results illustrate how, during the encoding of syntactic information, domain-general attention differentially affects syntactic processing, whereas during the retrieval of syntactic information domain-general attention does not influence syntactic processing.
  • Hickman, L. J., Keating, C. T., Ferrari, A., & Cook, J. L. (2022). Skin conductance as an index of alexithymic traits in the general population. Psychological Reports, 125(3), 1363-1379. doi:10.1177/00332941211005118.

    Abstract

    Alexithymia concerns a difficulty identifying and communicating one’s own emotions, and a tendency towards externally-oriented thinking. Recent work argues that such alexithymic traits are due to altered arousal response and poor subjective awareness of “objective” arousal responses. Although there are individual differences within the general population in identifying and describing emotions, extant research has focused on highly alexithymic individuals. Here we investigated whether mean arousal and concordance between subjective and objective arousal underpin individual differences in alexithymic traits in a general population sample. Participants rated subjective arousal responses to 60 images from the International Affective Picture System whilst their skin conductance was recorded. The Autism Quotient was employed to control for autistic traits in the general population. Analysis using linear models demonstrated that mean arousal significantly predicted Toronto Alexithymia Scale scores above and beyond autistic traits, but concordance scores did not. This indicates that, whilst objective arousal is a useful predictor in populations that are both above and below the cut-off values for alexithymia, concordance scores between objective and subjective arousal do not predict variation in alexithymic traits in the general population.
  • Hintz, F., Voeten, C. C., McQueen, J. M., & Meyer, A. S. (2022). Quantifying the relationships between linguistic experience, general cognitive skills and linguistic processing skills. In J. Culbertson, A. Perfors, H. Rabagliati, & V. Ramenzoni (Eds.), Proceedings of the 44th Annual Conference of the Cognitive Science Society (CogSci 2022) (pp. 2491-2496). Toronto, Canada: Cognitive Science Society.

    Abstract

    Humans differ greatly in their ability to use language. Contemporary psycholinguistic theories assume that individual differences in language skills arise from variability in linguistic experience and in general cognitive skills. While much previous research has tested the involvement of select verbal and non-verbal variables in select domains of linguistic processing, comprehensive characterizations of the relationships among the skills underlying language use are rare. We contribute to such a research program by re-analyzing a publicly available set of data from 112 young adults tested on 35 behavioral tests. The tests assessed nine key constructs reflecting linguistic processing skills, linguistic experience and general cognitive skills. Correlation and hierarchical clustering analyses of the test scores showed that most of the tests assumed to measure the same construct correlated moderately to strongly and largely clustered together. Furthermore, the results suggest important roles of processing speed in comprehension, and of linguistic experience in production.
  • Hoedemaker, R. S., & Meyer, A. S. (2019). Planning and coordination of utterances in a joint naming task. Journal of Experimental Psychology: Learning, Memory, and Cognition, 45(4), 732-752. doi:10.1037/xlm0000603.

    Abstract

    Dialogue requires speakers to coordinate. According to the model of dialogue as joint action, interlocutors achieve this coordination by corepresenting their own and each other’s task share in a functionally equivalent manner. In two experiments, we investigated this corepresentation account using an interactive joint naming task in which pairs of participants took turns naming sets of objects on a shared display. Speaker A named the first, or the first and third object, and Speaker B named the second object. In control conditions, Speaker A named one, two, or all three objects and Speaker B remained silent. We recorded the timing of the speakers’ utterances and Speaker A’s eye movements. Interturn pause durations indicated that the speakers effectively coordinated their utterances in time. Speaker A’s speech onset latencies depended on the number of objects they named, but were unaffected by Speaker B’s naming task. This suggests speakers were not fully incorporating their partner’s task into their own speech planning. Moreover, Speaker A’s eye movements indicated that they were much less likely to attend to objects their partner named than to objects they named themselves. When speakers did inspect their partner’s objects, viewing times were too short to suggest that speakers were retrieving these object names as if they were planning to name the objects themselves. These results indicate that speakers prioritized planning their own responses over attending to their interlocutor’s task and suggest that effective coordination can be achieved without full corepresentation of the partner’s task.
  • Hoeksema, N., Hagoort, P., & Vernes, S. C. (2022). Piecing together the building blocks of the vocal learning bat brain. In A. Ravignani, R. Asano, D. Valente, F. Ferretti, S. Hartmann, M. Hayashi, Y. Jadoul, M. Martins, Y. Oseki, E. D. Rodrigues, O. Vasileva, & S. Wacewicz (Eds.), The evolution of language: Proceedings of the Joint Conference on Language Evolution (JCoLE) (pp. 294-296). Nijmegen: Joint Conference on Language Evolution (JCoLE).
  • Holler, J., Drijvers, L., Rafiee, A., & Majid, A. (2022). Embodied space-pitch associations are shaped by language. Cognitive Science, 46(2): e13083. doi:10.1111/cogs.13083.

    Abstract

    Height-pitch associations are claimed to be universal and independent of language, but this claim remains controversial. The present study sheds new light on this debate with a multimodal analysis of individual sound and melody descriptions obtained in an interactive communication paradigm with speakers of Dutch and Farsi. The findings reveal that, in contrast to Dutch speakers, Farsi speakers do not use a height-pitch metaphor consistently in speech. Both Dutch and Farsi speakers’ co-speech gestures did reveal a mapping of higher pitches to higher space and lower pitches to lower space, and this gesture space-pitch mapping tended to co-occur with corresponding spatial words (high-low). However, this mapping was much weaker in Farsi speakers than Dutch speakers. This suggests that cross-linguistic differences shape the conceptualization of pitch and further calls into question the universality of height-pitch associations.

    Additional information

    supporting information
  • Holler, J. (2022). Visual bodily signals as core devices for coordinating minds in interaction. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 377(1859): 20210094. doi:10.1098/rstb.2021.0094.

    Abstract

    The view put forward here is that visual bodily signals play a core role in human communication and the coordination of minds. Critically, this role goes far beyond referential and propositional meaning. The human communication system that we consider to be the explanandum in the evolution of language thus is not spoken language. It is, instead, a deeply multimodal, multilayered, multifunctional system that developed—and survived—owing to the extraordinary flexibility and adaptability that it endows us with. Beyond their undisputed iconic power, visual bodily signals (manual and head gestures, facial expressions, gaze, torso movements) fundamentally contribute to key pragmatic processes in modern human communication. This contribution becomes particularly evident with a focus that includes non-iconic manual signals, non-manual signals and signal combinations. Such a focus also needs to consider meaning encoded not just via iconic mappings, since kinematic modulations and interaction-bound meaning are additional properties equipping the body with striking pragmatic capacities. Some of these capacities, or their precursors, may have already been present in the last common ancestor we share with the great apes and may qualify as early versions of the components constituting the hypothesized interaction engine.
  • Holler, J., Bavelas, J., Woods, J., Geiger, M., & Simons, L. (2022). Given-new effects on the duration of gestures and of words in face-to-face dialogue. Discourse Processes, 59(8), 619-645. doi:10.1080/0163853X.2022.2107859.

    Abstract

    The given-new contract entails that speakers must distinguish for their addressee whether references are new or already part of their dialogue. Past research had found that, in a monologue to a listener, speakers shortened repeated words. However, the notion of the given-new contract is inherently dialogic, with an addressee and the availability of co-speech gestures. Here, two face-to-face dialogue experiments tested whether gesture duration also follows the given-new contract. In Experiment 1, four experimental sequences confirmed that when speakers repeated their gestures, they shortened the duration significantly. Experiment 2 replicated the effect with spontaneous gestures in a different task. This experiment also extended earlier results with words, confirming that speakers shortened their repeated words significantly in a multimodal dialogue setting, the basic form of language use. Because words and gestures were not necessarily redundant, these results offer another instance in which gestures and words independently serve pragmatic requirements of dialogue.
  • Holler, J., & Beattie, G. (2003). How iconic gestures and speech interact in the representation of meaning: are both aspects really integral to the process? Semiotica, 146, 81-116.
  • Holler, J., & Levinson, S. C. (2019). Multimodal language processing in human communication. Trends in Cognitive Sciences, 23(8), 639-652. doi:10.1016/j.tics.2019.05.006.

    Abstract

    Multiple layers of visual (and vocal) signals, plus their different onsets and offsets, represent a significant semantic and temporal binding problem during face-to-face conversation.

    Despite this complex unification process, multimodal messages appear to be processed faster than unimodal messages.

    Multimodal gestalt recognition and multilevel prediction are proposed to play a crucial role in facilitating multimodal language processing.

    The basis of the processing mechanisms involved in multimodal language comprehension is hypothesized to be domain general, coopted for communication, and refined with domain-specific characteristics.

    A new, situated framework for understanding human language processing is called for that takes into consideration the multilayered, multimodal nature of language and its production and comprehension in conversational interaction requiring fast processing.
  • Holler, J., & Beattie, G. (2003). Pragmatic aspects of representational gestures: Do speakers use them to clarify verbal ambiguity for the listener? Gesture, 3, 127-154.
  • Hömke, P. (2019). The face in face-to-face communication: Signals of understanding and non-understanding. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Hoogman, M., Van Rooij, D., Klein, M., Boedhoe, P., Ilioska, I., Li, T., Patel, Y., Postema, M., Zhang-James, Y., Anagnostou, E., Arango, C., Auzias, G., Banaschewski, T., Bau, C. H. D., Behrmann, M., Bellgrove, M. A., Brandeis, D., Brem, S., Busatto, G. F., Calderoni, S., Calvo, R., Castellanos, F. X., Coghill, D., Conzelmann, A., Daly, E., Deruelle, C., Dinstein, I., Durston, S., Ecker, C., Ehrlich, S., Epstein, J. N., Fair, D. A., Fitzgerald, J., Freitag, C. M., Frodl, T., Gallagher, L., Grevet, E. H., Haavik, J., Hoekstra, P. J., Janssen, J., Karkashadze, G., King, J. A., Konrad, K., Kuntsi, J., Lazaro, L., Lerch, J. P., Lesch, K.-P., Louza, M. R., Luna, B., Mattos, P., McGrath, J., Muratori, F., Murphy, C., Nigg, J. T., Oberwelland-Weiss, E., O'Gorman Tuura, R. L., O'Hearn, K., Oosterlaan, J., Parellada, M., Pauli, P., Plessen, K. J., Ramos-Quiroga, J. A., Reif, A., Reneman, L., Retico, A., Rosa, P. G. P., Rubia, K., Shaw, P., Silk, T. J., Tamm, L., Vilarroya, O., Walitza, S., Jahanshad, N., Faraone, S. V., Francks, C., Van den Heuvel, O. A., Paus, T., Thompson, P. M., Buitelaar, J. K., & Franke, B. (2022). Consortium neuroscience of attention deficit/hyperactivity disorder and autism spectrum disorder: The ENIGMA adventure. Human Brain Mapping, 43(1), 37-55. doi:10.1002/hbm.25029.

    Abstract

    Neuroimaging has been extensively used to study brain structure and function in individuals with attention deficit/hyperactivity disorder (ADHD) and autism spectrum disorder (ASD) over the past decades. Two of the main shortcomings of the neuroimaging literature of these disorders are the small sample sizes employed and the heterogeneity of methods used. In 2013 and 2014, respectively, the ENIGMA-ADHD and ENIGMA-ASD working groups were founded with a common goal to address these limitations. Here, we provide a narrative review of the thus far completed and still ongoing projects of these working groups. Due to an implicitly hierarchical psychiatric diagnostic classification system, the fields of ADHD and ASD have developed largely in isolation, despite the considerable overlap in the occurrence of the disorders. The collaboration between the ENIGMA-ADHD and -ASD working groups seeks to bring the neuroimaging efforts of the two disorders closer together. The outcomes of case–control studies of subcortical and cortical structures showed that subcortical volumes are similarly affected in ASD and ADHD, albeit with small effect sizes. Cortical analyses identified unique differences in each disorder, but also considerable overlap between the two, specifically in cortical thickness. Ongoing work is examining alternative research questions, such as brain laterality, prediction of case–control status, and anatomical heterogeneity. In brief, great strides have been made toward fulfilling the aims of the ENIGMA collaborations, while new ideas and follow-up analyses continue that include more imaging modalities (diffusion MRI and resting-state functional MRI), collaborations with other large databases, and samples with dual diagnoses.
  • Hörpel, S. G., & Firzlaff, U. (2019). Processing of fast amplitude modulations in bat auditory cortex matches communication call-specific sound features. Journal of Neurophysiology, 121(4), 1501-1512. doi:10.1152/jn.00748.2018.
  • Howe, L., Lawson, D. J., Davies, N. M., St Pourcain, B., Lewis, S. J., Smith, G. D., & Hemani, G. (2019). Genetic evidence for assortative mating on alcohol consumption in the UK Biobank. Nature Communications, 10: 5039. doi:10.1038/s41467-019-12424-x.

    Abstract

    Alcohol use is correlated within spouse-pairs, but it is difficult to disentangle effects of alcohol consumption on mate-selection from social factors or the shared spousal environment. We hypothesised that genetic variants related to alcohol consumption may, via their effect on alcohol behaviour, influence mate selection. Here, we find strong evidence that an individual’s self-reported alcohol consumption and their genotype at rs1229984, a missense variant in ADH1B, are associated with their partner’s self-reported alcohol use. Applying Mendelian randomization, we estimate that a unit increase in an individual’s weekly alcohol consumption increases partner’s alcohol consumption by 0.26 units (95% C.I. 0.15, 0.38; P = 8.20 × 10⁻⁶). Furthermore, we find evidence of spousal genotypic concordance for rs1229984, suggesting that spousal concordance for alcohol consumption existed prior to cohabitation. Although the SNP is strongly associated with ancestry, our results suggest some concordance independent of population stratification. Our findings suggest that alcohol behaviour directly influences mate selection.
  • Howe, L. J., Richardson, T. G., Arathimos, R., Alvizi, L., Passos-Bueno, M. R., Stanier, P., Nohr, E., Ludwig, K. U., Mangold, E., Knapp, M., Stergiakouli, E., St Pourcain, B., Smith, G. D., Sandy, J., Relton, C. L., Lewis, S. J., Hemani, G., & Sharp, G. C. (2019). Evidence for DNA methylation mediating genetic liability to non-syndromic cleft lip/palate. Epigenomics, 11(2), 133-145. doi:10.2217/epi-2018-0091.

    Abstract

    Aim: To determine if nonsyndromic cleft lip with or without cleft palate (nsCL/P) genetic risk variants influence liability to nsCL/P through gene regulation pathways, such as those involving DNA methylation. Materials & methods: nsCL/P genetic summary data and methylation data from four studies were used in conjunction with Mendelian randomization and joint likelihood mapping to investigate potential mediation of nsCL/P genetic variants. Results & conclusion: Evidence was found at VAX1 (10q25.3), LOC146880 (17q23.3) and NTN1 (17p13.1), that liability to nsCL/P and variation in DNA methylation might be driven by the same genetic variant, suggesting that genetic variation at these loci may increase liability to nsCL/P by influencing DNA methylation. Follow-up analyses using different tissues and gene expression data provided further insight into possible biological mechanisms.

    Additional information

    Supplementary material
  • Hubbard, R. J., Rommers, J., Jacobs, C. L., & Federmeier, K. D. (2019). Downstream behavioral and electrophysiological consequences of word prediction on recognition memory. Frontiers in Human Neuroscience, 13: 291. doi:10.3389/fnhum.2019.00291.

    Abstract

    When people process language, they can use context to predict upcoming information, influencing processing and comprehension as seen in both behavioral and neural measures. Although numerous studies have shown immediate facilitative effects of confirmed predictions, the downstream consequences of prediction have been less explored. In the current study, we examined those consequences by probing participants’ recognition memory for words after they read sets of sentences. Participants read strongly and weakly constraining sentences with expected or unexpected endings (“I added my name to the list/basket”), and later were tested on their memory for the sentence endings while EEG was recorded. Critically, the memory test contained words that were predictable (“list”) but were never read (participants saw “basket”). Behaviorally, participants showed successful discrimination between old and new items, but false alarmed to the expected-item lures more often than to new items, showing that predicted words or concepts can linger, even when predictions are disconfirmed. Although false alarm rates did not differ by constraint, event-related potentials (ERPs) differed between false alarms to strongly and weakly predictable words. Additionally, previously unexpected (compared to previously expected) endings that appeared on the memory test elicited larger N1 and LPC amplitudes, suggesting greater attention and episodic recollection. In contrast, highly predictable sentence endings that had been read elicited reduced LPC amplitudes during the memory test. Thus, prediction can facilitate processing in the moment, but can also lead to false memory and reduced recollection for predictable information.
  • Hubers, F., Cucchiarini, C., Strik, H., & Dijkstra, T. (2019). Normative data of Dutch idiomatic expressions: Subjective judgments you can bank on. Frontiers in Psychology, 10: 1075. doi:10.3389/fpsyg.2019.01075.

    Abstract

    The processing of idiomatic expressions is a topical issue in empirical research. Various factors have been found to influence idiom processing, such as idiom familiarity and idiom transparency. Information on these variables is usually obtained through norming studies. Studies investigating the effect of various properties on idiom processing have led to ambiguous results. This may be due to the variability of operationalizations of the idiom properties across norming studies, which in turn may affect the reliability of the subjective judgements. However, not all studies that collected normative data on idiomatic expressions investigated their reliability, and studies that did address the reliability of subjective ratings used various measures and produced mixed results. In this study, we investigated the reliability of subjective judgements, the relation between subjective and objective idiom frequency, and the impact of these dimensions on the participants’ idiom knowledge by collecting normative data of five subjective idiom properties (Frequency of Exposure, Meaning Familiarity, Frequency of Usage, Transparency, and Imageability) from 390 native speakers and objective corpus frequency for 374 Dutch idiomatic expressions. For reliability, we compared measures calculated in previous studies with the D-coefficient, a metric taken from Generalizability Theory. High reliability was found for all subjective dimensions. One reliability metric, Krippendorff’s alpha, generally produced lower values, while similar values were obtained for three other measures (Cronbach’s alpha, Intraclass Correlation Coefficient, and the D-coefficient). Advantages of the D-coefficient are that it can be applied to unbalanced research designs, and to estimate the minimum number of raters required to obtain reliable ratings. Slightly higher coefficients were observed for so-called experience-based dimensions (Frequency of Exposure, Meaning Familiarity, and Frequency of Usage) than for content-based dimensions (Transparency and Imageability). In addition, fewer raters were required to obtain reliable ratings for the experience-based dimensions. Subjective and objective frequency appeared to be poorly correlated, while all subjective idiom properties and objective frequency turned out to affect idiom knowledge. Meaning Familiarity, Subjective and Objective Frequency of Exposure, Frequency of Usage, and Transparency positively contributed to idiom knowledge, while a negative effect was found for Imageability. We discuss these relationships in more detail, and give methodological recommendations with respect to the procedures and the measure to calculate reliability.

    Additional information

    supplementary material
  • Huettig, F., & Pickering, M. (2019). Literacy advantages beyond reading: Prediction of spoken language. Trends in Cognitive Sciences, 23(6), 464-475. doi:10.1016/j.tics.2019.03.008.

    Abstract

    Literacy has many obvious benefits—it exposes the reader to a wealth of new information and enhances syntactic knowledge. However, we argue that literacy has an additional, often overlooked, benefit: it enhances people’s ability to predict spoken language thereby aiding comprehension. Readers are under pressure to process information more quickly than listeners, and reading provides excellent conditions, in particular a stable environment, for training the predictive system. It also leads to increased awareness of words as linguistic units, and more fine-grained phonological and additional orthographic representations, which sharpen lexical representations and facilitate predicted representations to be retrieved. Thus, reading trains core processes and representations involved in language prediction that are common to both reading and listening.
  • Huettig, F., Audring, J., & Jackendoff, R. (2022). A parallel architecture perspective on pre-activation and prediction in language processing. Cognition, 224: 105050. doi:10.1016/j.cognition.2022.105050.

    Abstract

    A recent trend in psycholinguistic research has been to posit prediction as an essential function of language processing. The present paper develops a linguistic perspective on viewing prediction in terms of pre-activation. We describe what predictions are and how they are produced. Our basic premises are that (a) no prediction can be made without knowledge to support it; and (b) it is therefore necessary to characterize the precise form of that knowledge, as revealed by a suitable theory of linguistic representations. We describe the Parallel Architecture (PA: Jackendoff, 2002; Jackendoff and Audring, 2020), which makes explicit our commitments about linguistic representations, and we develop an account of processing based on these representations. Crucial to our account is that what have been traditionally treated as derivational rules of grammar are formalized by the PA as lexical items, encoded in the same format as words. We then present a theory of prediction in these terms: linguistic input activates lexical items whose beginning (or incipit) corresponds to the input encountered so far; and prediction amounts to pre-activation of the as yet unheard parts of those lexical items (the remainder). Thus the generation of predictions is a natural byproduct of processing linguistic representations. We conclude that the PA perspective on pre-activation provides a plausible account of prediction in language processing that bridges linguistic and psycholinguistic theorizing.
  • Huettig, F., & Guerra, E. (2019). Effects of speech rate, preview time of visual context, and participant instructions reveal strong limits on prediction in language processing. Brain Research, 1706, 196-208. doi:10.1016/j.brainres.2018.11.013.

    Abstract

    There is a consensus among language researchers that people can predict upcoming language. But do people always predict when comprehending language? Notions that “brains … are essentially prediction machines” certainly suggest so. In three eye-tracking experiments we tested this view. Participants listened to simple Dutch sentences (‘Look at the displayed bicycle’) while viewing four objects (a target, e.g. a bicycle, and three unrelated distractors). We used the identical visual stimuli and the same spoken sentences but varied speech rates, preview time, and participant instructions. Target nouns were preceded by definite gender-marked determiners, which allowed participants to predict the target object because only the targets but not the distractors agreed in gender with the determiner. In Experiment 1, participants had four seconds preview and sentences were presented either in a slow or a normal speech rate. Participants predicted the targets as soon as they heard the determiner in both conditions. Experiment 2 was identical except that participants were given only a one second preview. Participants predicted the targets only in the slow speech condition. Experiment 3 was identical to Experiment 2 except that participants were explicitly told to predict. This led only to a small prediction effect in the normal speech condition. Thus, a normal speech rate only afforded prediction if participants had an extensive preview. Even the explicit instruction to predict the target resulted in only a small anticipation effect with a normal speech rate and a short preview. These findings are problematic for theoretical proposals that assume that prediction pervades cognition.
  • Huisman, J. L. A., Majid, A., & Van Hout, R. (2019). The geographical configuration of a language area influences linguistic diversity. PLoS One, 14(6): e0217363. doi:10.1371/journal.pone.0217363.

    Abstract

    Like the transfer of genetic variation through gene flow, language changes constantly as a result of its use in human interaction. Contact between speakers is most likely to happen when they are close in space, time, and social setting. Here, we investigated the role of geographical configuration in this process by studying linguistic diversity in Japan, which comprises a large connected mainland (less isolation, more potential contact) and smaller island clusters of the Ryukyuan archipelago (more isolation, less potential contact). We quantified linguistic diversity using dialectometric methods, and performed regression analyses to assess the extent to which distance in space and time predict contemporary linguistic diversity. We found that language diversity in general increases as geographic distance increases and as time passes—as with biodiversity. Moreover, we found that (I) for mainland languages, linguistic diversity is most strongly related to geographic distance—a so-called isolation-by-distance pattern, and that (II) for island languages, linguistic diversity reflects the time since varieties separated and diverged—an isolation-by-colonisation pattern. Together, these results confirm previous findings that (linguistic) diversity is shaped by distance, but also go beyond this by demonstrating the critical role of geographic configuration.
  • Huizeling, E., Arana, S., Hagoort, P., & Schoffelen, J.-M. (2022). Lexical frequency and sentence context influence the brain’s response to single words. Neurobiology of Language, 3(1), 149-179. doi:10.1162/nol_a_00054.

    Abstract

    Typical adults read remarkably quickly. Such fast reading is facilitated by brain processes that are sensitive to both word frequency and contextual constraints. It is debated as to whether these attributes have additive or interactive effects on language processing in the brain. We investigated this issue by analysing existing magnetoencephalography data from 99 participants reading intact and scrambled sentences. Using a cross-validated model comparison scheme, we found that lexical frequency predicted the word-by-word elicited MEG signal in a widespread cortical network, irrespective of sentential context. In contrast, index (ordinal word position) was more strongly encoded in sentence words, in left frontotemporal areas. This confirms that frequency influences word processing independently of predictability, and that contextual constraints affect word-by-word brain responses. With a conservative multiple comparisons correction, only the interaction between lexical frequency and surprisal survived, in anterior temporal and frontal cortex, and not between lexical frequency and entropy, nor between lexical frequency and index. However, interestingly, the uncorrected index*frequency interaction revealed an effect in left frontal and temporal cortex that reversed in time and space for intact compared to scrambled sentences. Finally, we provide evidence to suggest that, in sentences, lexical frequency and predictability may independently influence early (<150 ms) and late stages of word processing, but interact during later stages of word processing (150-250 ms), thus helping to reconcile previously contradictory eye-tracking and electrophysiological findings. Current neuro-cognitive models of reading would benefit from accounting for these differing effects of lexical frequency and predictability on different stages of word processing.
  • Huizeling, E., Peeters, D., & Hagoort, P. (2022). Prediction of upcoming speech under fluent and disfluent conditions: Eye tracking evidence from immersive virtual reality. Language, Cognition and Neuroscience, 37(4), 481-508. doi:10.1080/23273798.2021.1994621.

    Abstract

    Traditional experiments indicate that prediction is important for efficient speech processing. In three virtual reality visual world paradigm experiments, we tested whether such findings hold in naturalistic settings (Experiment 1) and provided novel insights into whether disfluencies in speech (repairs/hesitations) inform one’s predictions in rich environments (Experiments 2–3). Experiment 1 supports that listeners predict upcoming speech in naturalistic environments, with higher proportions of anticipatory target fixations in predictable compared to unpredictable trials. In Experiments 2–3, disfluencies reduced anticipatory fixations towards predicted referents, compared to conjunction (Experiment 2) and fluent (Experiment 3) sentences. Unexpectedly, Experiment 2 provided no evidence that participants made new predictions from a repaired verb. Experiment 3 provided novel findings that fixations towards the speaker increase upon hearing a hesitation, supporting current theories of how hesitations influence sentence processing. Together, these findings unpack listeners’ use of visual (objects/speaker) and auditory (speech/disfluencies) information when predicting upcoming words.
  • Hulten, A., Schoffelen, J.-M., Udden, J., Lam, N. H. L., & Hagoort, P. (2019). How the brain makes sense beyond the processing of single words – An MEG study. NeuroImage, 186, 586-594. doi:10.1016/j.neuroimage.2018.11.035.

    Abstract

    Human language processing involves combinatorial operations that make human communication stand out in the animal kingdom. These operations rely on a dynamic interplay between the inferior frontal and the posterior temporal cortices. Using source reconstructed magnetoencephalography, we tracked language processing in the brain, in order to investigate how individual words are interpreted when part of sentence context. The large sample size in this study (n = 68) allowed us to assess how event-related activity is associated across distinct cortical areas, by means of inter-areal co-modulation within an individual. We showed that, within 500 ms of seeing a word, the word's lexical information has been retrieved and unified with the sentence context. This does not happen in a strictly feed-forward manner, but by means of co-modulation between the left posterior temporal cortex (LPTC) and left inferior frontal cortex (LIFC), for each individual word. The co-modulation of LIFC and LPTC occurs around 400 ms after the onset of each word, across the progression of a sentence. Moreover, these core language areas are supported early on by the attentional network. The results provide a detailed description of the temporal orchestration related to single word processing in the context of ongoing language.

    Additional information

    1-s2.0-S1053811918321165-mmc1.pdf
  • Hustá, C., Dalmaijer, E., Belopolsky, A., & Mathôt, S. (2019). The pupillary light response reflects visual working memory content. Journal of Experimental Psychology: Human Perception and Performance, 45(11), 1522-1528. doi:10.1037/xhp0000689.

    Abstract

    Recent studies have shown that the pupillary light response (PLR) is modulated by higher cognitive functions, presumably through activity in visual sensory brain areas. Here we use the PLR to test the involvement of sensory areas in visual working memory (VWM). In two experiments, participants memorized either bright or dark stimuli. We found that pupils were smaller when a prestimulus cue indicated that a bright stimulus should be memorized; this reflects a covert shift of attention during encoding of items into VWM. Crucially, we obtained the same result with a poststimulus cue, which shows that internal shifts of attention within VWM affect pupil size as well. Strikingly, the effect of VWM content on pupil size was most pronounced immediately after the poststimulus cue, and then dissipated. This suggests that a shift of attention within VWM momentarily activates an "active" memory representation, but that this representation quickly transforms into a "hidden" state that does not rely on sensory areas.

    Additional information

    Supplementary_xhp0000689.docx
  • Iacozza, S., Meyer, A. S., & Lev-Ari, S. (2019). How in-group bias influences source memory for words learned from in-group and out-group speakers. Frontiers in Human Neuroscience, 13: 308. doi:10.3389/fnhum.2019.00308.

    Abstract

    Individuals rapidly extract information about others’ social identity, including whether or not they belong to their in-group. Group membership status has been shown to affect how attentively people encode information conveyed by those others. These findings are highly relevant for the field of psycholinguistics, where there exists an open debate on how words are represented in the mental lexicon and how abstract or context-specific these representations are. Here, we used a novel word learning paradigm to test our proposal that the group membership status of speakers also affects how speaker-specific the representations of novel words are. Participants learned new words from speakers who either attended their own university (in-group speakers) or did not (out-group speakers) and performed a task to measure their individual in-group bias. Then, their source memory of the new words was tested in a recognition test to probe the speaker-specific content of the novel lexical representations and assess how it related to individual in-group biases. We found that speaker group membership and participants’ in-group bias affected participants’ decision biases. The stronger the in-group bias, the more cautious participants were in their decisions. This particularly applied to in-group-related decisions. These findings indicate that social biases can influence recognition thresholds. Taking a broader scope, defining how information is represented is a topic of great overlap between the fields of memory and psycholinguistics. Nevertheless, researchers from these fields tend to stay within the theoretical and methodological borders of their own field, missing the chance to deepen their understanding of phenomena that are of common interest. Here we show how methodologies developed in the memory field can be implemented in language research to shed light on an important theoretical issue that relates to the composition of lexical representations.

    Additional information

    Supplementary material
  • Indefrey, P., & Levelt, W. J. M. (1999). A meta-analysis of neuroimaging experiments on word production. Neuroimage, 7, 1028.
  • Indefrey, P. (1999). Some problems with the lexical status of nondefault inflection. Behavioral and Brain Sciences, 22(6), 1025. doi:10.1017/S0140525X99342229.

    Abstract

    Clahsen's characterization of nondefault inflection as based exclusively on lexical entries does not capture the full range of empirical data on German inflection. In the verb system differential effects of lexical frequency seem to be input-related rather than affecting morphological production. In the noun system, the generalization properties of -n and -e plurals exceed mere analogy-based productivity.
  • Ioumpa, K., Graham, S. A., Clausner, T., Fisher, S. E., Van Lier, R., & Van Leeuwen, T. M. (2019). Enhanced self-reported affect and prosocial behaviour without differential physiological responses in mirror-sensory synaesthesia. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 374: 20190395. doi:10.1098/rstb.2019.0395.

    Abstract

    Mirror-sensory synaesthetes mirror the pain or touch that they observe in other people on their own bodies. This type of synaesthesia has been associated with enhanced empathy. We investigated whether the enhanced empathy of people with mirror-sensory synaesthesia influences the experience of situations involving touch or pain and whether it affects their prosocial decision making. Mirror-sensory synaesthetes (N = 18, all female), verified with a touch-interference paradigm, were compared with a similar number of age-matched control individuals (all female). Participants viewed arousing images depicting pain or touch; we recorded subjective valence and arousal ratings, and physiological responses, hypothesizing more extreme reactions in synaesthetes. The subjective impact of positive and negative images was stronger in synaesthetes than in control participants; the stronger the reported synaesthesia, the more extreme the picture ratings. However, there was no evidence for differential physiological or hormonal responses to arousing pictures. Prosocial decision making was assessed with an economic game assessing altruism, in which participants had to divide money between themselves and a second player. Mirror-sensory synaesthetes donated more money than non-synaesthetes, showing enhanced prosocial behaviour, and also scored higher on the Interpersonal Reactivity Index as a measure of empathy. Our study demonstrates the subjective impact of mirror-sensory synaesthesia and its stimulating influence on prosocial behaviour.

  • Isbilen, E. S., Frost, R. L. A., Monaghan, P., & Christiansen, M. H. (2022). Statistically based chunking of nonadjacent dependencies. Journal of Experimental Psychology: General, 151(11), 2623-2640. doi:10.1037/xge0001207.

    Abstract

    How individuals learn complex regularities in the environment and generalize them to new instances is a key question in cognitive science. Although previous investigations have advocated the idea that learning and generalizing depend upon separate processes, the same basic learning mechanisms may account for both. In language learning experiments, these mechanisms have typically been studied in isolation of broader cognitive phenomena such as memory, perception, and attention. Here, we show how learning and generalization in language is embedded in these broader theories by testing learners on their ability to chunk nonadjacent dependencies—a key structure in language but a challenge to theories that posit learning through the memorization of structure. In two studies, adult participants were trained and tested on an artificial language containing nonadjacent syllable dependencies, using a novel chunking-based serial recall task involving verbal repetition of target sequences (formed from learned strings) and scrambled foils. Participants recalled significantly more syllables, bigrams, trigrams, and nonadjacent dependencies from sequences conforming to the language’s statistics (both learned and generalized sequences). They also encoded and generalized specific nonadjacent chunk information. These results suggest that participants chunk remote dependencies and rapidly generalize this information to novel structures. The results thus provide further support for learning-based approaches to language acquisition, and link statistical learning to broader cognitive mechanisms of memory.
  • Iyer, S., Sam, F. S., DiPrimio, N., Preston, G., Verheijen, J., Murthy, K., Parton, Z., Tsang, H., Lao, J., Morava, E., & Perlstein, E. O. (2019). Repurposing the aldose reductase inhibitor and diabetic neuropathy drug epalrestat for the congenital disorder of glycosylation PMM2-CDG. Disease models & mechanisms, 12(11): UNSP dmm040584. doi:10.1242/dmm.040584.

    Abstract

    Phosphomannomutase 2 deficiency, or PMM2-CDG, is the most common congenital disorder of glycosylation and affects over 1000 patients globally. There are no approved drugs that treat the symptoms or root cause of PMM2-CDG. To identify clinically actionable compounds that boost human PMM2 enzyme function, we performed a multispecies drug repurposing screen using a novel worm model of PMM2-CDG, followed by PMM2 enzyme functional studies in PMM2-CDG patient fibroblasts. Drug repurposing candidates from this study, and drug repurposing candidates from a previously published study using yeast models of PMM2-CDG, were tested for their effect on human PMM2 enzyme activity in PMM2-CDG fibroblasts. Of the 20 repurposing candidates discovered in the worm-based phenotypic screen, 12 were plant-based polyphenols. Insights from structure-activity relationships revealed epalrestat, the only antidiabetic aldose reductase inhibitor approved for use in humans, as a first-in-class PMM2 enzyme activator. Epalrestat increased PMM2 enzymatic activity in four PMM2-CDG patient fibroblast lines with genotypes R141H/F119L, R141H/E139K, R141H/N216I and R141H/F183S. PMM2 enzyme activity gains ranged from 30% to 400% over baseline, depending on genotype. Pharmacological inhibition of aldose reductase by epalrestat may shunt glucose from the polyol pathway to glucose-1,6-bisphosphate, which is an endogenous stabilizer and coactivator of PMM2 homodimerization. Epalrestat is a safe, oral and brain-penetrant drug that was approved 27 years ago in Japan to treat diabetic neuropathy in geriatric populations. We demonstrate that epalrestat is the first small molecule activator of PMM2 enzyme activity with the potential to treat peripheral neuropathy and correct the underlying enzyme deficiency in a majority of pediatric and adult PMM2-CDG patients.

    Additional information

    DMM040584supp.pdf
  • Janse, E., & Quené, H. (1999). On the suitability of the cross-modal semantic priming task. In Proceedings of the XIVth International Congress of Phonetic Sciences (pp. 1937-1940).
  • Janse, E. (2003). Word perception in natural-fast and artificially time-compressed speech. In M. Solé, D. Recasens, & J. Romero (Eds.), Proceedings of the 15th International Congress of Phonetic Sciences (pp. 3001-3004).
  • Janse, E., Nooteboom, S. G., & Quené, H. (2003). Word-level intelligibility of time-compressed speech: Prosodic and segmental factors. Speech Communication, 41, 287-301. doi:10.1016/S0167-6393(02)00130-9.

    Abstract

    In this study we investigate whether speakers, in line with the predictions of the Hyper- and Hypospeech theory, speed up most during the least informative parts and less during the more informative parts, when they are asked to speak faster. We expected listeners to benefit from these changes in timing, and our main goal was to find out whether making the temporal organisation of artificially time-compressed speech more like that of natural fast speech would improve intelligibility over linear time compression. Our production study showed that speakers reduce unstressed syllables more than stressed syllables, thereby making the prosodic pattern more pronounced. We extrapolated fast speech timing to even faster rates because we expected that the more salient prosodic pattern could be exploited in difficult listening situations. However, at very fast speech rates, applying fast speech timing worsens intelligibility. We argue that the non-uniform way of speeding up may not be due to an underlying communicative principle, but may result from speakers’ inability to speed up otherwise. As both prosodic and segmental information contribute to word recognition, we conclude that extrapolating fast speech timing to extremely fast rates distorts this balance between prosodic and segmental information.
  • Janssen, D. (1999). Producing past and plural inflections. PhD Thesis, Radboud University Nijmegen, Nijmegen. doi:10.17617/2.2057667.
  • Janssen, C., Segers, E., McQueen, J. M., & Verhoeven, L. (2019). Comparing effects of instruction on word meaning and word form on early literacy abilities in kindergarten. Early Education and Development, 30(3), 375-399. doi:10.1080/10409289.2018.1547563.

    Abstract

    Research Findings: The present study compared effects of explicit instruction on and practice with the phonological form of words (form-focused instruction) versus explicit instruction on and practice with the meaning of words (meaning-focused instruction). Instruction was given via interactive storybook reading in the kindergarten classroom of children learning Dutch. We asked whether the 2 types of instruction had different effects on vocabulary development and 2 precursors of reading ability—phonological awareness and letter knowledge—and we examined effects on these measures of the ability to learn new words with minimal acoustic-phonetic differences. Learners showed similar receptive target-word vocabulary gain after both types of instruction, but learners who received form-focused vocabulary instruction showed more gain in semantic knowledge of target vocabulary, phonological awareness, and letter knowledge than learners who received meaning-focused vocabulary instruction. Level of ability to learn pairs of words with minimal acoustic-phonetic differences predicted gain in semantic knowledge of target vocabulary and in letter knowledge in the form-focused instruction group only. Practice or Policy: A focus on the form of words during instruction appears to have benefits for young children learning vocabulary.
  • Janssen, R., Moisik, S. R., & Dediu, D. (2019). The effects of larynx height on vowel production are mitigated by the active control of articulators. Journal of Phonetics, 74, 1-17. doi:10.1016/j.wocn.2019.02.002.

    Abstract

    The influence of larynx position on vowel articulation is an important topic in understanding speech production, the present-day distribution of linguistic diversity and the evolution of speech and language in our lineage. We introduce here a realistic computer model of the vocal tract, constructed from actual human MRI data, which can learn, using machine learning techniques, to control the articulators in such a way as to produce speech sounds matching as closely as possible to a given set of target vowels. We systematically control the vertical position of the larynx and we quantify the differences between the target and produced vowels for each such position across multiple replications. We report that, indeed, larynx height does affect the accuracy of reproducing the target vowels and the distinctness of the produced vowel system, that there is a “sweet spot” of larynx positions that are optimal for vowel production, but that nevertheless, even extreme larynx positions do not result in a collapsed or heavily distorted vowel space that would make speech unintelligible. Together with other lines of evidence, our results support the view that the vowel space of human languages is influenced by our larynx position, but that other positions of the larynx may also be fully compatible with speech.

    Additional information

    Research Data via Github
  • Janssens, S. E. W., Sack, A. T., Ten Oever, S., & Graaf, T. A. (2022). Calibrating rhythmic stimulation parameters to individual electroencephalography markers: The consistency of individual alpha frequency in practical lab settings. European Journal of Neuroscience, 55(11/12), 3418-3437. doi:10.1111/ejn.15418.

    Abstract

    Rhythmic stimulation can be applied to modulate neuronal oscillations. Such ‘entrainment’ is optimized when stimulation frequency is individually calibrated based on magneto-/electroencephalography (M/EEG) markers. It remains unknown how consistent such individual markers are across days/sessions, within a session, or across cognitive states, hemispheres and estimation methods, especially in a realistic, practical, lab setting. We here estimated individual alpha frequency (IAF) repeatedly from short electroencephalography (EEG) measurements at rest or during an attention task (cognitive state), using single parieto-occipital electrodes in 24 participants on 4 days (between-sessions), with multiple measurements over an hour on 1 day (within-session). First, we introduce an algorithm to automatically reject power spectra without a sufficiently clear peak to ensure unbiased IAF estimations. Then we estimated IAF via the traditional ‘maximum’ method and a ‘Gaussian fit’ method. IAF was reliable within- and between-sessions for both cognitive states and hemispheres, though task-IAF estimates tended to be more variable. Overall, the ‘Gaussian fit’ method was more reliable than the ‘maximum’ method. Furthermore, we evaluated how far from an approximated ‘true’ task-related IAF the selected ‘stimulation frequency’ was, when calibrating this frequency based on a short rest-EEG, a short task-EEG, or simply selecting 10 Hz for all participants. For the ‘maximum’ method, rest-EEG calibration was best, followed by task-EEG, and then 10 Hz. For the ‘Gaussian fit’ method, rest-EEG and task-EEG-based calibration were similarly accurate, and better than 10 Hz. These results lead to concrete recommendations about valid, and automated, estimation of individual oscillation markers in experimental and clinical settings.
  • Janssens, S. E., Ten Oever, S., Sack, A. T., & de Graaf, T. A. (2022). “Broadband Alpha Transcranial Alternating Current Stimulation”: Exploring a new biologically calibrated brain stimulation protocol. NeuroImage, 253: 119109. doi:10.1016/j.neuroimage.2022.119109.

    Abstract

    Transcranial alternating current stimulation (tACS) can be used to study causal contributions of oscillatory brain mechanisms to cognition and behavior. For instance, individual alpha frequency (IAF) tACS was reported to enhance alpha power and impact visuospatial attention performance. Unfortunately, such results have been inconsistent and difficult to replicate. In tACS, stimulation generally involves one frequency, sometimes individually calibrated to a peak value observed in an M/EEG power spectrum. Yet, the ‘peak’ actually observed in such power spectra often contains a broader range of frequencies, raising the question whether a biologically calibrated tACS protocol containing this fuller range of alpha-band frequencies might be more effective. Here, we introduce ‘Broadband-alpha-tACS’, a complex individually calibrated electrical stimulation protocol. We band-pass filtered left posterior resting-state EEG data around the IAF (+/- 2 Hz), and converted that time series into an electrical waveform for tACS stimulation of that same left posterior parietal cortex location. In other words, we stimulated a brain region with a ‘replay’ of its own alpha-band frequency content, based on spontaneous activity. Within-subjects (N=24), we compared to a sham tACS session the effects of broadband-alpha tACS, power-matched spectral inverse (‘alpha-removed’) control tACS, and individual alpha frequency tACS, on EEG alpha power and performance in an endogenous attention task previously reported to be affected by alpha tACS. Broadband-alpha-tACS significantly modulated attention task performance (i.e., reduced the rightward visuospatial attention bias in trials without distractors, and reduced attention benefits). Alpha-removed tACS also reduced the rightward visuospatial attention bias. IAF-tACS did not significantly modulate attention task performance compared to sham tACS, but also did not statistically significantly differ from broadband-alpha-tACS. This new broadband-alpha tACS approach seems promising, but should be further explored and validated in future studies.

    Additional information

    supplementary materials
