Publications

Displaying 301 - 400 of 583
  • Mai, A., Riès, A. M., Ben-Haim, S., Shih, J., & Gentner, T. (2022). Phonological Contrasts Are Maintained Despite Neutralization: an Intracranial EEG Study. Proceedings of the Linguistic Society of America: Proceedings of the 2021 Annual Meeting on Phonology, 9. doi:10.3765/amp.v9i0.5197.

    Abstract

    The existence of language-specific abstract sound-structure units (such as the phoneme) is largely uncontroversial in phonology. However, whether the brain performs abstractions comparable to those assumed in phonology has been difficult to ascertain. Using intracranial electroencephalography (EEG) recorded during a passive listening task, this study investigates the representation of phonological units in the brain and the relationship between those units, auditory sensory input, and higher levels of language organization, namely morphology. Leveraging the phonological neutralization of coronal stops to tap in English, this study provides evidence of a dissociation between acoustic similarity and phonemic identity in the neural response to speech. Moreover, leveraging morphophonological alternations of the regular plural and past tense, this study further demonstrates early (<500ms) evidence of dissociation between phonological form and morphological exponence. Together these results highlight the central nature of language-specific knowledge in sublexical language processing and improve our understanding of the ways language-specific knowledge structures and organizes speech perception in the brain.
  • Maihofer, A. X., Choi, K. W., Coleman, J. R., Daskalakis, N. P., Denckla, C. A., Ketema, E., Morey, R. A., Polimanti, R., Ratanatharathorn, A., Torres, K., Wingo, A. P., Zai, C. C., Aiello, A. E., Almli, L. M., Amstadter, A. B., Andersen, S. B., Andreassen, O. A., Arbisi, P. A., Ashley-Koch, A. E., Austin, S. B., Avdibegovic, E., Borglum, A. D., Babic, D., Bækvad-Hansen, M., Baker, D. G., Beckham, J. C., Bierut, L. J., Bisson, J. I., Boks, M. P., Bolger, E. A., Bradley, B., Brashear, M., Breen, G., Bryant, R. A., Bustamante, A. C., Bybjerg-Grauholm, J., Calabrese, J. R., Caldas-de-Almeida, J. M., Chen, C.-Y., Dale, A. M., Dalvie, S., Deckert, J., Delahanty, D. L., Dennis, M. F., Disner, S. G., Domschke, K., Duncan, L. E., Dzubur Kulenovic, A., Erbes, C. R., Evans, A., Farrer, L. A., Feeny, N. C., Flory, J. D., Forbes, D., Franz, C. E., Galea, S., Garrett, M. E., Gautam, A., Gelaye, B., Gelernter, J., Geuze, E., Gillespie, C. F., Goçi, A., Gordon, S. D., Guffanti, G., Hammamieh, R., Hauser, M. A., Heath, A. C., Hemmings, S. M., Hougaard, D. M., Jakovljevic, M., Jett, M., Johnson, E. O., Jones, I., Jovanovic, T., Qin, X.-J., Karstoft, K.-I., Kaufman, M. L., Kessler, R. C., Khan, A., Kimbrel, N. A., King, A. P., Koen, N., Kranzler, H. R., Kremen, W. S., Lawford, B. R., Lebois, L. A., Lewis, C., Liberzon, I., Linnstaedt, S. D., Logue, M. W., Lori, A., Lugonja, B., Luykx, J. J., Lyons, M. J., Maples-Keller, J. L., Marmar, C., Martin, N. G., Maurer, D., Mavissakalian, M. R., McFarlane, A., McGlinchey, R. E., McLaughlin, K. A., McLean, S. A., Mehta, D., Mellor, R., Michopoulos, V., Milberg, W., Miller, M. W., Morris, C. P., Mors, O., Mortensen, P. B., Nelson, E. C., Nordentoft, M., Norman, S. B., O’Donnell, M., Orcutt, H. K., Panizzon, M. S., Peters, E. S., Peterson, A. L., Peverill, M., Pietrzak, R. H., Polusny, M. A., Rice, J. P., Risbrough, V. B., Roberts, A. L., Rothbaum, A. O., Rothbaum, B. O., Roy-Byrne, P., Ruggiero, K. J., Rung, A., Rutten, B. P., Saccone, N. L., Sanchez, S. E., Schijven, D., Seedat, S., Seligowski, A. V., Seng, J. S., Sheerin, C. M., Silove, D., Smith, A. K., Smoller, J. W., Sponheim, S. R., Stein, D. J., Stevens, J. S., Teicher, M. H., Thompson, W. K., Trapido, E., Uddin, M., Ursano, R. J., van den Heuvel, L. L., Van Hooff, M., Vermetten, E., Vinkers, C., Voisey, J., Wang, Y., Wang, Z., Werge, T., Williams, M. A., Williamson, D. E., Winternitz, S., Wolf, C., Wolf, E. J., Yehuda, R., Young, K. A., Young, R. M., Zhao, H., Zoellner, L. A., Haas, M., Lasseter, H., Provost, A. C., Salem, R. M., Sebat, J., Shaffer, R. A., Wu, T., Ripke, S., Daly, M. J., Ressler, K. J., Koenen, K. C., Stein, M. B., & Nievergelt, C. M. (2022). Enhancing discovery of genetic variants for posttraumatic stress disorder through integration of quantitative phenotypes and trauma exposure information. Biological Psychiatry, 91(7), 626-636. doi:10.1016/j.biopsych.2021.09.020.

    Abstract

    Background

    Posttraumatic stress disorder (PTSD) is heritable and a potential consequence of exposure to traumatic stress. Evidence suggests that a quantitative approach to PTSD phenotype measurement and incorporation of lifetime trauma exposure (LTE) information could enhance the discovery power of PTSD genome-wide association studies (GWASs).
    Methods

    A GWAS on PTSD symptoms was performed in 51 cohorts followed by a fixed-effects meta-analysis (N = 182,199 European ancestry participants). A GWAS of LTE burden was performed in the UK Biobank cohort (N = 132,988). Genetic correlations were evaluated with linkage disequilibrium score regression. Multivariate analysis was performed using Multi-Trait Analysis of GWAS. Functional mapping and annotation of leading loci was performed with FUMA. Replication was evaluated using the Million Veteran Program GWAS of PTSD total symptoms.
    Results

    GWASs of PTSD symptoms and LTE burden identified 5 and 6 independent genome-wide significant loci, respectively. There was a 72% genetic correlation between PTSD and LTE. PTSD and LTE showed largely similar patterns of genetic correlation with other traits, albeit with some distinctions. Adjusting PTSD for LTE reduced PTSD heritability by 31%. Multivariate analysis of PTSD and LTE increased the effective sample size of the PTSD GWAS by 20% and identified 4 additional loci. Four of these 9 PTSD loci were independently replicated in the Million Veteran Program.
    Conclusions

    Through using a quantitative trait measure of PTSD, we identified novel risk loci not previously identified using prior case-control analyses. PTSD and LTE have a high genetic overlap that can be leveraged to increase discovery power through multivariate methods.
  • Mak, M., Faber, M., & Willems, R. M. (2022). Different routes to liking: How readers arrive at narrative evaluations. Cognitive Research: Principles and Implications, 7: 72. doi:10.1186/s41235-022-00419-0.

    Abstract

    When two people read the same story, they might both end up liking it very much. However, this does not necessarily mean that their reasons for liking it were identical. We therefore ask what factors contribute to “liking” a story, and—most importantly—how people vary in this respect. We found that readers like stories because they find them interesting, amusing, suspenseful and/or beautiful. However, the degree to which these components of appreciation were related to how much readers liked stories differed between individuals. Interestingly, the individual slopes of the relationships between many of the components and liking were (positively or negatively) correlated. This indicated, for instance, that individuals displaying a relatively strong relationship between interest and liking generally displayed a relatively weak relationship between sadness and liking. The individual differences in the strengths of the relationships between the components and liking were not related to individual differences in expertise, a characteristic strongly associated with aesthetic appreciation of visual art. Our work illustrates that it is important to take into consideration the fact that individuals differ in how they arrive at their evaluation of literary stories, and that it is possible to quantify these differences in empirical experiments. Our work suggests that future research should be careful about “overfitting” theories of aesthetic appreciation to an “idealized reader,” but rather take into consideration variations across individuals in the reasons for liking a particular story.
  • Mak, M., & Willems, R. M. (2019). Mental simulation during literary reading: Individual differences revealed with eye-tracking. Language, Cognition and Neuroscience, 34(4), 511-535. doi:10.1080/23273798.2018.1552007.

    Abstract

    People engage in simulation when reading literary narratives. In this study, we tried to pinpoint how different kinds of simulation (perceptual and motor simulation, mentalising) affect reading behaviour. Eye-tracking (gaze durations, regression probability) and questionnaire data were collected from 102 participants, who read three literary short stories. In a pre-test, 90 additional participants indicated which parts of the stories were high in one of the three kinds of simulation-eliciting content. The results show that motor simulation reduces gaze duration (faster reading), whereas perceptual simulation and mentalising increase gaze duration (slower reading). Individual differences in the effect of simulation on gaze duration were found, which were related to individual differences in aspects of story world absorption and story appreciation. These findings suggest fundamental differences between different kinds of simulation and confirm the role of simulation in absorption and appreciation.
  • Mantegna, F., Hintz, F., Ostarek, M., Alday, P. M., & Huettig, F. (2019). Distinguishing integration and prediction accounts of ERP N400 modulations in language processing through experimental design. Neuropsychologia, 134: 107199. doi:10.1016/j.neuropsychologia.2019.107199.

    Abstract

    Prediction of upcoming input is thought to be a main characteristic of language processing (e.g. Altmann & Mirkovic, 2009; Dell & Chang, 2014; Federmeier, 2007; Ferreira & Chantavarin, 2018; Pickering & Gambi, 2018; Hale, 2001; Hickok, 2012; Huettig 2015; Kuperberg & Jaeger, 2016; Levy, 2008; Norris, McQueen, & Cutler, 2016; Pickering & Garrod, 2013; Van Petten & Luka, 2012). One of the main pillars of experimental support for this notion comes from studies that have attempted to measure electrophysiological markers of prediction when participants read or listened to sentences ending in highly predictable words. The N400, a negative-going and centro-parietally distributed component of the ERP occurring approximately 400ms after (target) word onset, has been frequently interpreted as indexing prediction of the word (or the semantic representations and/or the phonological form of the predicted word, see Kutas & Federmeier, 2011; Nieuwland, 2019; Van Petten & Luka, 2012; for review). A major difficulty for interpreting N400 effects in language processing however is that it has been difficult to establish whether N400 target word modulations conclusively reflect prediction rather than (at least partly) ease of integration. In the present exploratory study, we attempted to distinguish lexical prediction (i.e. ‘top-down’ activation) from lexical integration (i.e. ‘bottom-up’ activation) accounts of ERP N400 modulations in language processing.
  • Marcoux, K., Cooke, M., Tucker, B. V., & Ernestus, M. (2022). The Lombard intelligibility benefit of native and non-native speech for native and non-native listeners. Speech Communication, 136, 53-62. doi:10.1016/j.specom.2021.11.007.

    Abstract

    Speech produced in noise (Lombard speech) is more intelligible than speech produced in quiet (plain speech). Previous research on the Lombard intelligibility benefit focused almost entirely on how native speakers produce and perceive Lombard speech. In this study, we investigate the size of the Lombard intelligibility benefit of both native (American-English) and non-native (native Dutch) English for native and non-native listeners (Dutch and Spanish). We used a glimpsing metric to measure the energetic masking potential of speech, which predicted that both native and non-native Lombard speech could withstand greater amounts of masking than plain speech, and to a similar extent. In an intelligibility experiment, native English, Spanish, and Dutch listeners listened to the same words, mixed with noise. While the non-native listeners appeared to benefit more from Lombard speech than the native listeners did, each listener group experienced a similar benefit for native and non-native Lombard speech. Energetic masking, as captured by the glimpsing metric, only accounted for part of the Lombard benefit, indicating that the Lombard intelligibility benefit does not only result from a shift in spectral distribution. Despite subtle native language influences on non-native Lombard speech, both native and non-native speech provide a Lombard benefit.
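    The glimpsing metric referenced in this abstract can be illustrated as a glimpse proportion: the fraction of spectro-temporal cells in which speech energy exceeds the noise energy by some local SNR threshold. The sketch below is a minimal illustration under assumed parameters (a simple frequency-by-time energy grid and a 3 dB threshold), not the authors' implementation.

    ```python
    import numpy as np

    def glimpse_proportion(speech_st, noise_st, threshold_db=3.0):
        """Fraction of spectro-temporal cells where speech exceeds noise
        by at least `threshold_db` (a 'glimpse').

        speech_st, noise_st: 2-D arrays (frequency bands x time frames)
        of band energies in linear power units.
        """
        local_snr_db = 10 * np.log10(speech_st / noise_st)
        glimpses = local_snr_db > threshold_db
        return float(glimpses.mean())

    # Toy example: random speech energies against steady-state noise.
    rng = np.random.default_rng(0)
    speech = rng.uniform(0.5, 10.0, size=(32, 100))
    noise = np.full((32, 100), 2.0)
    gp = glimpse_proportion(speech, noise)
    ```

    A higher glimpse proportion means more of the speech survives the masker, which is the sense in which Lombard speech is predicted to "withstand" more masking than plain speech.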
  • Martin, A. E., & Baggio, G. (2019). Modeling meaning composition from formalism to mechanism. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 375: 20190298. doi:10.1098/rstb.2019.0298.

    Abstract

    Human thought and language have extraordinary expressive power because meaningful parts can be assembled into more complex semantic structures. This partly underlies our ability to compose meanings into endlessly novel configurations, and sets us apart from other species and current computing devices. Crucially, human behaviour, including language use and linguistic data, indicates that composing parts into complex structures does not threaten the existence of constituent parts as independent units in the system: parts and wholes exist simultaneously yet independently from one another in the mind and brain. This independence is evident in human behaviour, but it seems at odds with what is known about the brain's exquisite sensitivity to statistical patterns: everyday language use is productive and expressive precisely because it can go beyond statistical regularities. Formal theories in philosophy and linguistics explain this fact by assuming that language and thought are compositional: systems of representations that separate a variable (or role) from its values (fillers), such that the meaning of a complex expression is a function of the values assigned to the variables. The debate on whether and how compositional systems could be implemented in minds, brains and machines remains vigorous. However, it has not yet resulted in mechanistic models of semantic composition: how, then, are the constituents of thoughts and sentences put and held together? We review and discuss current efforts at understanding this problem, and we chart possible routes for future research.
  • Martin, A. E., & Doumas, L. A. A. (2019). Tensors and compositionality in neural systems. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 375(1791): 20190306. doi:10.1098/rstb.2019.0306.

    Abstract

    Neither neurobiological nor process models of meaning composition specify the operator through which constituent parts are bound together into compositional structures. In this paper, we argue that a neurophysiological computation system cannot achieve the compositionality exhibited in human thought and language if it were to rely on a multiplicative operator to perform binding, as the tensor product (TP)-based systems that have been widely adopted in cognitive science, neuroscience and artificial intelligence do. We show via simulation and two behavioural experiments that TPs violate variable-value independence, but human behaviour does not. Specifically, TPs fail to capture that in the statements fuzzy cactus and fuzzy penguin, both cactus and penguin are predicated by fuzzy(x) and belong to the set of fuzzy things, rendering these arguments similar to each other. Consistent with that thesis, people judged arguments that shared the same role to be similar, even when those arguments themselves (e.g., cacti and penguins) were judged to be dissimilar when in isolation. By contrast, the similarity of the TPs representing fuzzy(cactus) and fuzzy(penguin) was determined by the similarity of the arguments, which in this case approaches zero. Based on these results, we argue that neural systems that use TPs for binding cannot approximate how the human mind and brain represent compositional information during processing. We describe a contrasting binding mechanism that any physiological or artificial neural system could use to maintain independence between a role and its argument, a prerequisite for compositionality and, thus, for instantiating the expressive power of human thought and language in a neural system.
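    The violation of variable-value independence attributed to tensor products can be shown in a few lines: when the same role vector is bound to two orthogonal fillers, the cosine similarity of the resulting TPs equals the similarity of the fillers alone (here zero), so the shared role contributes nothing. This toy sketch uses arbitrary vectors of my own choosing and is not the authors' simulation.

    ```python
    import numpy as np

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    rng = np.random.default_rng(1)
    role = rng.normal(size=8)            # shared role vector, fuzzy(x)
    cactus = np.array([1., 0., 0., 0.])  # orthogonal fillers:
    penguin = np.array([0., 1., 0., 0.]) # filler similarity is zero

    # Bind role to each filler with the tensor (outer) product.
    tp_cactus = np.outer(role, cactus).ravel()
    tp_penguin = np.outer(role, penguin).ravel()

    # Similarity of the bound structures collapses to filler similarity:
    sim = cosine(tp_cactus, tp_penguin)  # → 0, despite the shared role
    ```

    This is the pattern the paper describes: the similarity of the TPs for fuzzy(cactus) and fuzzy(penguin) is driven by the arguments and approaches zero, whereas people judge the two statements similar because both fillers play the same role.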

    Additional information

    Supplemental Material
  • Martin, A. E., & Doumas, L. A. A. (2019). Predicate learning in neural systems: Using oscillations to discover latent structure. Current Opinion in Behavioral Sciences, 29, 77-83. doi:10.1016/j.cobeha.2019.04.008.

    Abstract

    Humans learn to represent complex structures (e.g. natural language, music, mathematics) from experience with their environments. Often such structures are latent, hidden, or not encoded in statistics about sensory representations alone. Accounts of human cognition have long emphasized the importance of structured representations, yet the majority of contemporary neural networks do not learn structure from experience. Here, we describe one way that structured, functionally symbolic representations can be instantiated in an artificial neural network. Then, we describe how such latent structures (viz. predicates) can be learned from experience with unstructured data. Our approach exploits two principles from psychology and neuroscience: comparison of representations, and the naturally occurring dynamic properties of distributed computing across neuronal assemblies (viz. neural oscillations). We discuss how the ability to learn predicates from experience, to represent information compositionally, and to extrapolate knowledge to unseen data is core to understanding and modeling the most complex human behaviors (e.g. relational reasoning, analogy, language processing, game play).
  • Martinez-Conde, S., Alexander, R. G., Blum, D., Britton, N., Lipska, B. K., Quirk, G. J., Swiss, J. I., Willems, R. M., & Macknik, S. L. (2019). The storytelling brain: How neuroscience stories help bridge the gap between research and society. The Journal of Neuroscience, 39(42), 8285-8290. doi:10.1523/JNEUROSCI.1180-19.2019.

    Abstract

    Active communication between researchers and society is necessary for the scientific community’s involvement in developing science-based policies. This need is recognized by governmental and funding agencies that compel scientists to increase their public engagement and disseminate research findings in an accessible fashion. Storytelling techniques can help convey science by engaging people’s imagination and emotions. Yet, many researchers are uncertain about how to approach scientific storytelling, or feel they lack the tools to undertake it. Here we explore some of the techniques intrinsic to crafting scientific narratives, as well as the reasons why scientific storytelling may be an optimal way of communicating research to nonspecialists. We also point out current communication gaps between science and society, particularly in the context of neurodiverse audiences and those that include neurological and psychiatric patients. Present shortcomings may turn into areas of synergy with the potential to link neuroscience education, research, and advocacy.
  • Maslowski, M., Meyer, A. S., & Bosker, H. R. (2019). How the tracking of habitual rate influences speech perception. Journal of Experimental Psychology: Learning, Memory, and Cognition, 45(1), 128-138. doi:10.1037/xlm0000579.

    Abstract

    Listeners are known to track statistical regularities in speech. Yet, which temporal cues are encoded is unclear. This study tested effects of talker-specific habitual speech rate and talker-independent average speech rate (heard over a longer period of time) on the perception of the temporal Dutch vowel contrast /A/-/a:/. First, Experiment 1 replicated that slow local (surrounding) speech contexts induce fewer long /a:/ responses than faster contexts. Experiment 2 tested effects of long-term habitual speech rate. One high-rate group listened to ambiguous vowels embedded in 'neutral' speech from talker A, intermixed with speech from fast talker B. Another low-rate group listened to the same 'neutral' speech from talker A, but to talker B being slow. Between-group comparison of the 'neutral' trials showed that the high-rate group demonstrated a lower proportion of /a:/ responses, indicating that talker A's habitual speech rate sounded slower when B was faster. In Experiment 3, both talkers produced speech at both rates, removing the different habitual speech rates of talkers A and B, while maintaining the average rate difference between groups. This time no global rate effect was observed. Taken together, the present experiments show that a talker's habitual rate is encoded relative to the habitual rate of another talker, carrying implications for episodic and constraint-based models of speech perception.
  • Maslowski, M., Meyer, A. S., & Bosker, H. R. (2019). Listeners normalize speech for contextual speech rate even without an explicit recognition task. The Journal of the Acoustical Society of America, 146(1), 179-188. doi:10.1121/1.5116004.

    Abstract

    Speech can be produced at different rates. Listeners take this rate variation into account by normalizing vowel duration for contextual speech rate: An ambiguous Dutch word /m?t/ is perceived as short /mAt/ when embedded in a slow context, but long /ma:t/ in a fast context. Whilst some have argued that this rate normalization involves low-level automatic perceptual processing, there is also evidence that it arises at higher-level cognitive processing stages, such as decision making. Prior research on rate-dependent speech perception has only used explicit recognition tasks to investigate the phenomenon, involving both perceptual processing and decision making. This study tested whether speech rate normalization can be observed without explicit decision making, using a cross-modal repetition priming paradigm. Results show that a fast precursor sentence makes an embedded ambiguous prime (/m?t/) sound (implicitly) more /a:/-like, facilitating lexical access to the long target word "maat" in a (explicit) lexical decision task. This result suggests that rate normalization is automatic, taking place even in the absence of an explicit recognition task. Thus, rate normalization is placed within the realm of everyday spoken conversation, where explicit categorization of ambiguous sounds is rare.
  • McConnell, K., & Blumenthal-Dramé, A. (2022). Effects of task and corpus-derived association scores on the online processing of collocations. Corpus Linguistics and Linguistic Theory, 18, 33-76. doi:10.1515/cllt-2018-0030.

    Abstract

    In the following self-paced reading study, we assess the cognitive realism of six widely used corpus-derived measures of association strength between words (collocated modifier–noun combinations like vast majority): MI, MI3, Dice coefficient, T-score, Z-score, and log-likelihood. The ability of these collocation metrics to predict reading times is tested against predictors of lexical processing cost that are widely established in the psycholinguistic and usage-based literature, respectively: forward/backward transition probability and bigram frequency. In addition, the experiment includes the treatment variable of task: it is split into two blocks which only differ in the format of interleaved comprehension questions (multiple choice vs. typed free response). Results show that the traditional corpus-linguistic metrics are outperformed by both backward transition probability and bigram frequency. Moreover, the multiple-choice condition elicits faster overall reading times than the typed condition, and the two winning metrics show stronger facilitation on the critical word (i.e. the noun in the bigrams) in the multiple-choice condition. In the typed condition, we find an effect that is weaker and, in the case of bigram frequency, longer lasting, continuing into the first spillover word. We argue that insufficient attention to task effects might have obscured the cognitive correlates of association scores in earlier research.
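    For reference, the simplest of these association measures can be computed directly from raw corpus counts. The sketch below uses standard textbook formulas for MI and the t-score; the exact variants used in the study, and the counts in the example, are assumptions for illustration only.

    ```python
    import math

    def association_scores(f_bigram, f_w1, f_w2, corpus_size):
        """Pointwise MI and t-score for a two-word combination.

        f_bigram: observed frequency of the bigram (e.g. 'vast majority')
        f_w1, f_w2: frequencies of the individual words
        corpus_size: total number of words in the corpus
        """
        # Expected bigram frequency under independence of the two words.
        expected = f_w1 * f_w2 / corpus_size
        mi = math.log2(f_bigram / expected)
        t_score = (f_bigram - expected) / math.sqrt(f_bigram)
        return mi, t_score

    # Hypothetical counts in a 100-million-word corpus:
    mi, t = association_scores(f_bigram=2000, f_w1=8000, f_w2=40000,
                               corpus_size=100_000_000)
    ```

    MI rewards pairs that co-occur far more often than chance, while the t-score additionally rewards high absolute frequency, which is one reason the two metrics can rank the same collocations quite differently.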
  • McCurdy, R., Clough, S., Edwards, M., & Duff, M. (2022). The lesion method: What individual patients can teach us about the brain. Frontiers for Young Minds, 10: 869030. doi:10.3389/frym.2022.869030.

    Abstract

    Scientists who study the brain try to understand how it performs everyday behaviors like language, memory, and emotion. Scientists learn a lot by studying how these behaviors change when the brain is damaged. Over the past 200 years, they have made many discoveries by studying individuals with brain damage. For example, one patient could not form sentences after damaging a specific area of his brain. The scientist who studied him concluded that the damaged brain area was important for producing speech. This approach is called the lesion method, and it has taught us a lot about the brain. In this article, we introduce five patients throughout history who forever changed our understanding of the brain. We describe how researchers use these early discoveries to ask new questions about the brain, and we conclude by discussing how the lesion method is used today.
  • McKone, E., Wan, L., Pidcock, M., Crookes, K., Reynolds, K., Dawel, A., Kidd, E., & Fiorentini, C. (2019). A critical period for faces: Other-race face recognition is improved by childhood but not adult social contact. Scientific Reports, 9: 12820. doi:10.1038/s41598-019-49202-0.

    Abstract

    Poor recognition of other-race faces is ubiquitous around the world. We resolve a longstanding contradiction in the literature concerning whether interracial social contact improves the other-race effect. For the first time, we measure the age at which contact was experienced. Taking advantage of unusual demographics allowing dissociation of childhood from adult contact, results show that sufficient childhood contact eliminated poor other-race recognition altogether (confirming inter-country adoption studies). Critically, however, the developmental window for easy acquisition of other-race faces closed by approximately 12 years of age, and social contact as an adult — even over several years and involving many other-race friends — produced no improvement. Theoretically, this pattern of developmental change in plasticity mirrors that found in language, suggesting a shared origin grounded in the functional importance of both skills to social communication. Practically, results imply that, where parents wish to ensure their offspring develop the perceptual skills needed to recognise other-race people easily, childhood experience should be encouraged: just as an English-speaking person who moves to France as a child (but not an adult) can easily become a native speaker of French, we can easily become “native recognisers” of other-race faces via natural social exposure obtained in childhood, but not later.
  • Mehta, G., & Cutler, A. (1988). Detection of target phonemes in spontaneous and read speech. Language and Speech, 31, 135-156.

    Abstract

    Although spontaneous speech occurs more frequently in most listeners’ experience than read speech, laboratory studies of human speech recognition typically use carefully controlled materials read from a script. The phonological and prosodic characteristics of spontaneous and read speech differ considerably, however, which suggests that laboratory results may not generalize to the recognition of spontaneous speech. In this study, listeners heard both spontaneous and read speech materials, and their response time to detect word-initial target phonemes was measured. Responses were, overall, equally fast in each speech mode. However, analysis of effects previously reported in phoneme detection studies revealed significant differences between speech modes. In read speech but not in spontaneous speech, later targets were detected more rapidly than earlier targets, and targets preceded by long words were detected more rapidly than targets preceded by short words. In contrast, in spontaneous speech but not in read speech, targets were detected more rapidly in accented than in unaccented words and in strong than in weak syllables. An explanation for this pattern is offered in terms of characteristic prosodic differences between spontaneous and read speech. The results support claims from previous work that listeners pay great attention to prosodic information in the process of recognizing speech.
  • Mekki, Y., Guillemot, V., Lemaître, H., Carrión-Castillo, A., Forkel, S. J., Frouin, V., & Philippe, C. (2022). The genetic architecture of language functional connectivity. NeuroImage, 249: 118795. doi:10.1016/j.neuroimage.2021.118795.

    Abstract

    Language is a unique trait of the human species, of which the genetic architecture remains largely unknown. Studies of language disorders have identified many candidate genes. However, such a complex and multifactorial trait is unlikely to be driven by only a few genes, and case-control studies, which suffer from a lack of power, struggle to uncover significant variants. In parallel, neuroimaging has significantly contributed to the understanding of structural and functional aspects of language in the human brain, and the recent availability of large-scale cohorts like UK Biobank has made it possible to study language via image-derived endophenotypes in the general population. Because of its strong relationship with task-based fMRI (tbfMRI) activations and its ease of acquisition, resting-state functional MRI (rsfMRI) has become widely used, making it a good surrogate of functional neuronal processes. Taking advantage of such a synergistic system by aggregating effects across spatially distributed traits, we performed a multivariate genome-wide association study (mvGWAS) between genetic variations and resting-state functional connectivity (FC) of classical brain language areas in the inferior frontal (pars opercularis, triangularis and orbitalis), temporal and inferior parietal lobes (angular and supramarginal gyri), in 32,186 participants from UK Biobank. Twenty genomic loci were found associated with language FCs, of which three were replicated in an independent replication sample. A locus in 3p11.1, regulating EPHA3 gene expression, is associated with FCs of the semantic component of the language network, while a locus in 15q14, regulating THBS1 gene expression, is associated with FCs of perceptual-motor language processing, bringing novel insights into the neurobiology of language.
  • Menks, W. M., Ekerdt, C., Janzen, G., Kidd, E., Lemhöfer, K., Fernández, G., & McQueen, J. M. (2022). Study protocol: A comprehensive multi-method neuroimaging approach to disentangle developmental effects and individual differences in second language learning. BMC Psychology, 10: 169. doi:10.1186/s40359-022-00873-x.

    Abstract

    Background

    While it is well established that second language (L2) learning success changes with age and across individuals, the underlying neural mechanisms responsible for this developmental shift and these individual differences are largely unknown. We will study the behavioral and neural factors that subserve new grammar and word learning in a large cross-sectional developmental sample. This study falls under the NWO (Nederlandse Organisatie voor Wetenschappelijk Onderzoek [Dutch Research Council]) Language in Interaction consortium (website: https://www.languageininteraction.nl/).
    Methods

    We will sample 360 healthy individuals across a broad age range between 8 and 25 years. In this paper, we describe the study design and protocol, which involves multiple study visits covering a comprehensive behavioral battery and extensive magnetic resonance imaging (MRI) protocols. On the basis of these measures, we will create behavioral and neural fingerprints that capture age-based and individual variability in new language learning. The behavioral fingerprint will be based on first and second language proficiency, memory systems, and executive functioning. We will map the neural fingerprint for each participant using the following MRI modalities: T1‐weighted, diffusion-weighted, resting-state functional MRI, and multiple functional-MRI paradigms. With respect to the functional MRI measures, half of the sample will learn grammatical features and half will learn words of a new language. Combining all individual fingerprints allows us to explore the neural maturation effects on grammar and word learning.
    Discussion

    This will be one of the largest neuroimaging studies to date that investigates the developmental shift in L2 learning covering preadolescence to adulthood. Our comprehensive approach of combining behavioral and neuroimaging data will contribute to the understanding of the mechanisms influencing this developmental shift and individual differences in new language learning. We aim to answer: (I) Do these fingerprints differ according to age, and can they explain the age-related differences observed in new language learning? (II) Which aspects of the behavioral and neural fingerprints explain individual differences (across and within ages) in grammar and word learning? The results of this study provide a unique opportunity to understand how the development of brain structure and function influences new language learning success.
  • Menn, K. H., Ward, E., Braukmann, R., Van den Boomen, C., Buitelaar, J., Hunnius, S., & Snijders, T. M. (2022). Neural tracking in infancy predicts language development in children with and without family history of autism. Neurobiology of Language, 3(3), 495-514. doi:10.1162/nol_a_00074.

    Abstract

    During speech processing, neural activity in non-autistic adults and infants tracks the speech envelope. Recent research in adults indicates that this neural tracking relates to linguistic knowledge and may be reduced in autism. Such reduced tracking, if present already in infancy, could impede language development. In the current study, we focused on children with a family history of autism, who often show a delay in first language acquisition. We investigated whether differences in tracking of sung nursery rhymes during infancy relate to language development and autism symptoms in childhood. We assessed speech-brain coherence at either 10 or 14 months of age in a total of 22 infants with high likelihood of autism due to family history and 19 infants without family history of autism. We analyzed the relationship between speech-brain coherence in these infants and their vocabulary at 24 months as well as autism symptoms at 36 months. Our results showed significant speech-brain coherence in the 10- and 14-month-old infants. We found no evidence for a relationship between speech-brain coherence and later autism symptoms. Importantly, speech-brain coherence at the stressed syllable rate (1–3 Hz) predicted later vocabulary. Follow-up analyses showed evidence for a relationship between tracking and vocabulary only in 10-month-olds but not 14-month-olds and indicated possible differences between the likelihood groups. Thus, early tracking of sung nursery rhymes is related to language development in childhood.
  • Merkx, D., & Frank, S. L. (2019). Learning semantic sentence representations from visually grounded language without lexical knowledge. Natural Language Engineering, 25, 451-466. doi:10.1017/S1351324919000196.

    Abstract

    Current approaches to learning semantic representations of sentences often use prior word-level knowledge. The current study aims to leverage visual information in order to capture sentence level semantics without the need for word embeddings. We use a multimodal sentence encoder trained on a corpus of images with matching text captions to produce visually grounded sentence embeddings. Deep Neural Networks are trained to map the two modalities to a common embedding space such that for an image the corresponding caption can be retrieved and vice versa. We show that our model achieves results comparable to the current state of the art on two popular image-caption retrieval benchmark datasets: Microsoft Common Objects in Context (MSCOCO) and Flickr8k. We evaluate the semantic content of the resulting sentence embeddings using the data from the Semantic Textual Similarity (STS) benchmark task and show that the multimodal embeddings correlate well with human semantic similarity judgements. The system achieves state-of-the-art results on several of these benchmarks, which shows that a system trained solely on multimodal data, without assuming any word representations, is able to capture sentence level semantics. Importantly, this result shows that we do not need prior knowledge of lexical level semantics in order to model sentence level semantics. These findings demonstrate the importance of visual information in semantics.
  • Meyer, A. S., Roelofs, A., & Brehm, L. (2019). Thirty years of Speaking: An introduction to the special issue. Language, Cognition and Neuroscience, 34(9), 1073-1084. doi:10.1080/23273798.2019.1652763.

    Abstract

    Thirty years ago, Pim Levelt published Speaking. During the 10th International Workshop on Language Production held at the Max Planck Institute for Psycholinguistics in Nijmegen in July 2018, researchers reflected on the impact of the book in the field, developments since its publication, and current research trends. The contributions in this Special Issue are closely related to the presentations given at the workshop. In this editorial, we sketch the research agenda set by Speaking, review how different aspects of this agenda are taken up in the papers in this volume and outline directions for further research.
  • Mickan, A., McQueen, J. M., & Lemhöfer, K. (2019). Bridging the gap between second language acquisition research and memory science: The case of foreign language attrition. Frontiers in Human Neuroscience, 13: 397. doi:10.3389/fnhum.2019.00397.

    Abstract

    The field of second language acquisition (SLA) is by nature of its subject a highly interdisciplinary area of research. Learning a (foreign) language, for example, involves encoding new words, consolidating and committing them to long-term memory, and later retrieving them. All of these processes have direct parallels in the domain of human memory and have been thoroughly studied by researchers in that field. Yet, despite these clear links, the two fields have largely developed in parallel and in isolation from one another. The present paper aims to promote more cross-talk between SLA and memory science. We focus on foreign language (FL) attrition as an example of a research topic in SLA where the parallels with memory science are especially apparent. We discuss evidence that suggests that competition between languages is one of the mechanisms of FL attrition, paralleling the interference process thought to underlie forgetting in other domains of human memory. Backed up by concrete suggestions, we advocate the use of paradigms from the memory literature to study these interference effects in the language domain. In doing so, we hope to facilitate future cross-talk between the two fields, and to further our understanding of FL attrition as a memory phenomenon.
  • Middeldorp, C. M., Felix, J. F., Mahajan, A., EArly Genetics and Lifecourse Epidemiology (EAGLE) Consortium, Early Growth Genetics (EGG) consortium, & McCarthy, M. I. (2019). The Early Growth Genetics (EGG) and EArly Genetics and Lifecourse Epidemiology (EAGLE) consortia: Design, results and future prospects. European Journal of Epidemiology, 34(3), 279-300. doi:10.1007/s10654-019-00502-9.

    Abstract

    The impact of many unfavorable childhood traits or diseases, such as low birth weight and mental disorders, is not limited to childhood and adolescence, as they are also associated with poor outcomes in adulthood, such as cardiovascular disease. Insight into the genetic etiology of childhood and adolescent traits and disorders may therefore provide new perspectives, not only on how to improve wellbeing during childhood, but also how to prevent later adverse outcomes. To achieve the sample sizes required for genetic research, the Early Growth Genetics (EGG) and EArly Genetics and Lifecourse Epidemiology (EAGLE) consortia were established. The majority of the participating cohorts are longitudinal population-based samples, but other cohorts with data on early childhood phenotypes are also involved. Cohorts often have a broad focus and collect(ed) data on various somatic and psychiatric traits as well as environmental factors. Genetic variants have been successfully identified for multiple traits, for example, birth weight, atopic dermatitis, childhood BMI, allergic sensitization, and pubertal growth. Furthermore, the results have shown that genetic factors also partly underlie the association with adult traits. As sample sizes are still increasing, it is expected that future analyses will identify additional variants. This, in combination with the development of innovative statistical methods, will provide detailed insight on the mechanisms underlying the transition from childhood to adult disorders. Both consortia welcome new collaborations. Policies and contact details are available from the corresponding authors of this manuscript and/or the consortium websites.
  • Minutjukur, M., Tjitayi, K., Tjitayi, U., & Defina, R. (2019). Pitjantjatjara language change: Some observations and recommendations. Australian Aboriginal Studies, (1), 82-91.
  • Misersky, J., Peeters, D., & Flecken, M. (2022). The potential of immersive virtual reality for the study of event perception. Frontiers in Virtual Reality, 3: 697934. doi:10.3389/frvir.2022.697934.

    Abstract

    In everyday life, we actively engage in different activities from a first-person perspective. However, experimental psychological research in the field of event perception is often limited to relatively passive, third-person computer-based paradigms. In the present study, we tested the feasibility of using immersive virtual reality in combination with eye tracking with participants in active motion. Behavioral research has shown that speakers of aspectual and non-aspectual languages attend to goals (endpoints) in motion events differently, with speakers of non-aspectual languages showing relatively more attention to goals (endpoint bias). In the current study, native speakers of German (non-aspectual) and English (aspectual) walked on a treadmill across 3-D terrains in VR, while their eye gaze was continuously tracked. Participants encountered landmark objects on the side of the road, and potential endpoint objects at the end of it. Using growth curve analysis to analyze fixation patterns over time, we found no differences in eye gaze behavior between German and English speakers. This absence of cross-linguistic differences was also observed in behavioral tasks with the same participants. Methodologically, based on the quality of the data, we conclude that our dynamic eye-tracking setup can be reliably used to study what people look at while moving through rich and dynamic environments that resemble the real world.
  • Misersky, J., Majid, A., & Snijders, T. M. (2019). Grammatical gender in German influences how role-nouns are interpreted: Evidence from ERPs. Discourse Processes, 56(8), 643-654. doi:10.1080/0163853X.2018.1541382.

    Abstract

    Grammatically masculine role-nouns (e.g., Studenten-masc. 'students') can refer to men and women, but may favor an interpretation where only men are considered the referent. If true, this has implications for a society aiming to achieve equal representation in the workplace since, for example, job adverts use such role descriptions. To investigate the interpretation of role-nouns, the present ERP study assessed grammatical gender processing in German. Twenty participants read sentences where a role-noun (masculine or feminine) introduced a group of people, followed by a congruent (masculine–men, feminine–women) or incongruent (masculine–women, feminine–men) continuation. For both feminine–men and masculine–women continuations, a P600 (500 to 800 ms) was observed; an additional positivity was already present from 300 to 500 ms for feminine–men continuations, but critically not for masculine–women continuations. The results imply a male-biased rather than gender-neutral interpretation of the masculine, despite its widespread usage as a gender-neutral form, suggesting that masculine forms are inadequate for representing genders equally.
  • Molz, B., Herbik, A., Baseler, H. A., de Best, P. B., Vernon, R. W., Raz, N., Gouws, A. D., Ahmadi, K., Lowndes, R., McLean, R. J., Gottlob, I., Kohl, S., Choritz, L., Maguire, J., Kanowski, M., Käsmann-Kellner, B., Wieland, I., Banin, E., Levin, N., Hoffmann, M. B., & Morland, A. B. (2022). Structural changes to primary visual cortex in the congenital absence of cone input in achromatopsia. NeuroImage: Clinical, 33: 102925. doi:10.1016/j.nicl.2021.102925.

    Abstract

    Autosomal recessive Achromatopsia (ACHM) is a rare inherited disorder associated with dysfunctional cone photoreceptors resulting in a congenital absence of cone input to visual cortex. This might lead to distinct changes in cortical architecture with a negative impact on the success of gene augmentation therapies. To investigate the status of the visual cortex in these patients, we performed a multi-centre study focusing on the cortical structure of regions that normally receive predominantly cone input. Using high-resolution T1-weighted MRI scans and surface-based morphometry, we compared cortical thickness, surface area and grey matter volume in foveal, parafoveal and paracentral representations of primary visual cortex in 15 individuals with ACHM and 42 normally sighted, healthy controls (HC). In ACHM, surface area was reduced in all tested representations, while thickening of the cortex was found highly localized to the most central representation. These results were comparable to more widespread changes in brain structure reported in congenitally blind individuals, suggesting similar developmental processes, i.e., irrespective of the underlying cause and extent of vision loss. The cortical differences we report here could limit the success of treatment of ACHM in adulthood. Interventions earlier in life when cortical structure is not different from normal would likely offer better visual outcomes for those with ACHM.
  • Monaghan, P., & Fletcher, M. (2019). Do sound symbolism effects for written words relate to individual phonemes or to phoneme features? Language and Cognition, 11(2), 235-255. doi:10.1017/langcog.2019.20.

    Abstract

    The sound of words has been shown to relate to the meaning that the words denote, an effect that extends beyond morphological properties of the word. Studies of these sound-symbolic relations have described this iconicity in terms of individual phonemes, or alternatively in terms of acoustic properties (expressed in phonological features) relating to meaning. In this study, we investigated whether individual phonemes or phoneme features best accounted for iconicity effects. We tested 92 participants' judgements about the appropriateness of 320 nonwords presented in written form, relating to 8 different semantic attributes. For all 8 attributes, individual phonemes fitted participants' responses better than general phoneme features. These results challenge claims that sound-symbolic effects for visually presented words can access broad, cross-modal associations between sound and meaning; instead, the results indicate the operation of individual phoneme-to-meaning relations. Whether similar effects are found for nonwords presented auditorily remains an open question.
  • Monaghan, P., & Roberts, S. G. (2019). Cognitive influences in language evolution: Psycholinguistic predictors of loan word borrowing. Cognition, 186, 147-158. doi:10.1016/j.cognition.2019.02.007.

    Abstract

    Languages change due to social, cultural, and cognitive influences. In this paper, we provide an assessment of these cognitive influences on diachronic change in the vocabulary. Previously, tests of stability and change of vocabulary items have been conducted on small sets of words where diachronic change is imputed from cladistics studies. Here, we show for a substantially larger set of words that stability and change in terms of documented borrowings of words into English and into Dutch can be predicted by psycholinguistic properties of words that reflect their representational fidelity. We found that grammatical category, word length, age of acquisition, and frequency predict borrowing rates, but frequency has a non-linear relationship. Frequency correlates negatively with probability of borrowing for high-frequency words, but positively for low-frequency words. This borrowing evidence documents recent, observable diachronic change in the vocabulary enabling us to distinguish between change associated with transmission during language acquisition and change due to innovations by proficient speakers.
  • Mongelli, V., Meijs, E. L., Van Gaal, S., & Hagoort, P. (2019). No language unification without neural feedback: How awareness affects sentence processing. Neuroimage, 202: 116063. doi:10.1016/j.neuroimage.2019.116063.

    Abstract

    How does the human brain combine a finite number of words to form an infinite variety of sentences? According to the Memory, Unification and Control (MUC) model, sentence processing requires long-range feedback from the left inferior frontal cortex (LIFC) to left posterior temporal cortex (LPTC). Single word processing, however, may only require feedforward propagation of semantic information from sensory regions to LPTC. Here we tested the claim that long-range feedback is required for sentence processing by reducing visual awareness of words using a masking technique. Masking disrupts feedback processing while leaving feedforward processing relatively intact. Previous studies have shown that masked single words still elicit an N400 ERP effect, a neural signature of semantic incongruency. However, whether multiple words can be combined to form a sentence under reduced levels of awareness is controversial. To investigate this issue, we performed two experiments in which we measured electroencephalography (EEG) while 40 subjects performed a masked priming task. Words were presented either successively or simultaneously, thereby forming a short sentence that could be congruent or incongruent with a target picture. This sentence condition was compared with a typical single word condition. In the masked condition we only found an N400 effect for single words, whereas in the unmasked condition we observed an N400 effect for both unmasked sentences and single words. Our findings suggest that long-range feedback processing is required for sentence processing, but not for single word processing.
  • Montero-Melis, G., Van Paridon, J., Ostarek, M., & Bylund, E. (2022). No evidence for embodiment: The motor system is not needed to keep action words in working memory. Cortex, 150, 108-125. doi:10.1016/j.cortex.2022.02.006.

    Abstract

    Increasing evidence implicates the sensorimotor systems with high-level cognition, but the extent to which these systems play a functional role remains debated. Using an elegant design, Shebani and Pulvermüller (2013) reported that carrying out a demanding rhythmic task with the hands led to selective impairment of working memory for hand-related words (e.g., clap), while carrying out the same task with the feet led to selective memory impairment for foot-related words (e.g., kick). Such a striking double dissociation is acknowledged even by critics to constitute strong evidence for an embodied account of working memory. Here, we report on an attempt at a direct replication of this important finding. We followed a sequential sampling design and stopped data collection at N=77 (more than five times the original sample size), at which point the evidence for the lack of the critical selective interference effect was very strong (BF01 = 91). This finding constitutes strong evidence against a functional contribution of the motor system to keeping action words in working memory. Our finding fits into the larger emerging picture in the field of embodied cognition that sensorimotor simulations are neither required nor automatic in high-level cognitive processes, but that they may play a role depending on the task. Importantly, we urge researchers to engage in transparent, high-powered, and fully pre-registered experiments like the present one to ensure the field advances on a solid basis.
  • Morey, R. D., Kaschak, M. P., Díez-Álamo, A. M., Glenberg, A. M., Zwaan, R. A., Lakens, D., Ibáñez, A., García, A., Gianelli, C., Jones, J. L., Madden, J., Alifano, F., Bergen, B., Bloxsom, N. G., Bub, D. N., Cai, Z. G., Chartier, C. R., Chatterjee, A., Conwell, E., Cook, S. W., Davis, J. D., Evers, E., Girard, S., Harter, D., Hartung, F., Herrera, E., Huettig, F., Humphries, S., Juanchich, M., Kühne, K., Lu, S., Lynes, T., Masson, M. E. J., Ostarek, M., Pessers, S., Reglin, R., Steegen, S., Thiessen, E. D., Thomas, L. E., Trott, S., Vandekerckhove, J., Vanpaemel, W., Vlachou, M., Williams, K., & Ziv-Crispel, N. (2022). A pre-registered, multi-lab non-replication of the Action-sentence Compatibility Effect (ACE). Psychonomic Bulletin & Review, 29, 613-626. doi:10.3758/s13423-021-01927-8.

    Abstract

    The Action-sentence Compatibility Effect (ACE) is a well-known demonstration of the role of motor activity in the comprehension of language. Participants are asked to make sensibility judgments on sentences by producing movements toward the body or away from the body. The ACE is the finding that movements are faster when the direction of the movement (e.g., toward) matches the direction of the action in the to-be-judged sentence (e.g., Art gave you the pen describes action toward you). We report on a pre-registered, multi-lab replication of one version of the ACE. The results show that none of the 18 labs involved in the study observed a reliable ACE, and that the meta-analytic estimate of the size of the ACE was essentially zero.
  • Morgan, T. J. H., Acerbi, A., & Van Leeuwen, E. J. C. (2019). Copy-the-majority of instances or individuals? Two approaches to the majority and their consequences for conformist decision-making. PLoS One, 14(1): e0210748. doi:10.1371/journal.pone.0210748.

    Abstract

    Cultural evolution is the product of the psychological mechanisms that underlie individual decision making. One commonly studied learning mechanism is a disproportionate preference for majority opinions, known as conformist transmission. While most theoretical and experimental work approaches the majority in terms of the number of individuals that perform a behaviour or hold a belief, some recent experimental studies approach the majority in terms of the number of instances a behaviour is performed. Here, we use a mathematical model to show that disagreement between these two notions of the majority can arise when behavioural variants are performed at different rates, with different salience or in different contexts (variant overrepresentation) and when a subset of the population act as demonstrators to the whole population (model biases). We also show that because conformist transmission changes the distribution of behaviours in a population, how observers approach the majority can cause populations to diverge, and that this can happen even when the two approaches to the majority agree with regards to which behaviour is in the majority. We discuss these results in light of existing findings, ranging from political extremism on Twitter to studies of animal foraging behaviour. We conclude that the factors we considered (variant overrepresentation and model biases) are plausibly widespread. As such, it is important to understand how individuals approach the majority in order to understand the effects of majority influence in cultural evolution.
  • Murphy, E., Woolnough, O., Rollo, P. S., Roccaforte, Z., Segaert, K., Hagoort, P., & Tandon, N. (2022). Minimal phrase composition revealed by intracranial recordings. The Journal of Neuroscience, 42(15), 3216-3227. doi:10.1523/JNEUROSCI.1575-21.2022.

    Abstract

    The ability to comprehend phrases is an essential integrative property of the brain. Here we evaluate the neural processes that enable the transition from single word processing to a minimal compositional scheme. Previous research has reported conflicting timing effects of composition, and disagreement persists with respect to inferior frontal and posterior temporal contributions. To address these issues, 19 patients (10 male, 9 female) implanted with penetrating depth or surface subdural intracranial electrodes heard auditory recordings of adjective-noun, pseudoword-noun and adjective-pseudoword phrases and judged whether the phrase matched a picture. Stimulus-dependent alterations in broadband gamma activity, low frequency power and phase-locking values across the language-dominant left hemisphere were derived. This revealed a mosaic located on the lower bank of the posterior superior temporal sulcus (pSTS), in which closely neighboring cortical sites displayed exclusive sensitivity to either lexicality or phrase structure, but not both. Distinct timings were found for effects of phrase composition (210–300 ms) and pseudoword processing (approximately 300–700 ms), and these were localized to neighboring electrodes in pSTS. The pars triangularis and temporal pole encoded anticipation of composition in broadband low frequencies, and both regions exhibited greater functional connectivity with pSTS during phrase composition. Our results suggest that the pSTS is a highly specialized region composed of sparsely interwoven heterogeneous constituents that encodes both lower and higher level linguistic features. This hub in pSTS for minimal phrase processing may form the neural basis for the human-specific computational capacity for forming hierarchically organized linguistic structures.
  • Nakamoto, T., Suei, Y., Konishi, M., Kanda, T., Verdonschot, R. G., & Kakimoto, N. (2019). Abnormal positioning of the common carotid artery clinically diagnosed as a submandibular mass. Oral Radiology, 35(3), 331-334. doi:10.1007/s11282-018-0355-7.

    Abstract

    The common carotid artery (CCA) usually runs along the long axis of the neck, although it is occasionally found in an abnormal position or is displaced. We report a case of an 86-year-old woman in whom the CCA was identified in the submandibular area. The patient visited our clinic and reported soft tissue swelling in the right submandibular area. It resembled a tumor mass or a swollen lymph node. Computed tomography showed that it was the right CCA that had been bent forward and was running along the submandibular subcutaneous area. Ultrasonography verified the diagnosis. No other lesions were found on the diagnostic images. Consequently, the patient was diagnosed as having abnormal CCA positioning. Although this condition generally requires no treatment, it is important to follow up the abnormality with diagnostic imaging because of the risk of cerebrovascular disorders.
  • Nakamoto, T., Taguchi, A., Verdonschot, R. G., & Kakimoto, N. (2019). Improvement of region of interest extraction and scanning method of computer-aided diagnosis system for osteoporosis using panoramic radiographs. Oral Radiology, 35(2), 143-151. doi:10.1007/s11282-018-0330-3.

    Abstract

    Objectives: Patients undergoing osteoporosis treatment benefit greatly from early detection. We previously developed a computer-aided diagnosis (CAD) system to identify osteoporosis using panoramic radiographs. However, the region of interest (ROI) was relatively small, and the method to select suitable ROIs was labor-intensive. This study aimed to expand the ROI and perform semi-automated extraction of ROIs. The diagnostic performance and operating time were also assessed. Methods: We used panoramic radiographs and skeletal bone mineral density data of 200 postmenopausal women. Using the reference point that we defined by averaging 100 panoramic images as the lower mandibular border under the mental foramen, a 400×100-pixel ROI was automatically extracted and divided into four 100×100-pixel blocks. Valid blocks were analyzed using program 1, which examined each block separately, and program 2, which divided the blocks into smaller segments and performed scans/analyses across blocks. Diagnostic performance was evaluated using another set of 100 panoramic images. Results: Most ROIs (97.0%) were correctly extracted. The operating time decreased to 51.4% for program 1 and to 69.3% for program 2. The sensitivity, specificity, and accuracy for identifying osteoporosis were 84.0%, 68.0%, and 72.0% for program 1 and 92.0%, 62.7%, and 70.0% for program 2, respectively. Compared with the previous conventional system, program 2 recorded a slightly higher sensitivity, although it occasionally also elicited false positives. Conclusions: Patients at risk for osteoporosis can be identified more rapidly using this new CAD system, which may contribute to earlier detection and intervention and improved medical care.
  • Nayak, S., Coleman, P. L., Ladányi, E., Nitin, R., Gustavson, D. E., Fisher, S. E., Magne, C. L., & Gordon, R. L. (2022). The Musical Abilities, Pleiotropy, Language, and Environment (MAPLE) framework for understanding musicality-language links across the lifespan. Neurobiology of Language, 3(4), 615-664. doi:10.1162/nol_a_00079.

    Abstract

    Using individual differences approaches, a growing body of literature finds positive associations between musicality and language-related abilities, complementing prior findings of links between musical training and language skills. Despite these associations, musicality has been often overlooked in mainstream models of individual differences in language acquisition and development. To better understand the biological basis of these individual differences, we propose the Musical Abilities, Pleiotropy, Language, and Environment (MAPLE) framework. This novel integrative framework posits that musical and language-related abilities likely share some common genetic architecture (i.e., genetic pleiotropy) in addition to some degree of overlapping neural endophenotypes, and genetic influences on musically and linguistically enriched environments. Drawing upon recent advances in genomic methodologies for unraveling pleiotropy, we outline testable predictions for future research on language development and how its underlying neurobiological substrates may be supported by genetic pleiotropy with musicality. In support of the MAPLE framework, we review and discuss findings from over seventy behavioral and neural studies, highlighting that musicality is robustly associated with individual differences in a range of speech-language skills required for communication and development. These include speech perception-in-noise, prosodic perception, morphosyntactic skills, phonological skills, reading skills, and aspects of second/foreign language learning. Overall, the current work provides a clear agenda and framework for studying musicality-language links using individual differences approaches, with an emphasis on leveraging advances in the genomics of complex musicality and language traits.
  • Nayernia, L., Van den Vijver, R., & Indefrey, P. (2019). The influence of orthography on phonemic knowledge: An experimental investigation on German and Persian. Journal of Psycholinguistic Research, 48(6), 1391-1406. doi:10.1007/s10936-019-09664-9.

    Abstract

    This study investigated whether the phonological representation of a word is modulated by its orthographic representation in case of a mismatch between the two representations. Such a mismatch is found in Persian, where short vowels are represented phonemically but not orthographically. Persian adult literates, Persian adult illiterates, and German adult literates were presented with two auditory tasks, an AX-discrimination task and a reversal task. We assumed that if orthographic representations influence phonological representations, Persian literates should perform worse than Persian illiterates or German literates on items with short vowels in these tasks. The results of the discrimination tasks showed that Persian literates and illiterates as well as German literates were approximately equally competent in discriminating short vowels in Persian words and pseudowords. Persian literates did not discriminate well between German words containing phonemes that differed only in vowel length. German literates performed relatively poorly in discriminating German homographic words that differed only in vowel length. Persian illiterates were unable to perform the reversal task in Persian. The results of the other two participant groups in the reversal task showed the predicted poorer performance of Persian literates on Persian items containing short vowels compared to items containing long vowels only. German literates did not show this effect in German. Our results suggest two distinct effects of orthography on phonemic representations: whereas the lack of orthographic representations seems to affect phonemic awareness, homography seems to affect the discriminability of phonemic representations.
  • Nazzi, T., & Cutler, A. (2019). How consonants and vowels shape spoken-language recognition. Annual Review of Linguistics, 5, 25-47. doi:10.1146/annurev-linguistics-011718-011919.

    Abstract

    All languages instantiate a consonant/vowel contrast. This contrast has processing consequences at different levels of spoken-language recognition throughout the lifespan. In adulthood, lexical processing is more strongly associated with consonant than with vowel processing; this has been demonstrated across 13 languages from seven language families and in a variety of auditory lexical-level tasks (deciding whether a spoken input is a word, spotting a real word embedded in a minimal context, reconstructing a word minimally altered into a pseudoword, learning new words or the “words” of a made-up language), as well as in written-word tasks involving phonological processing. In infancy, a consonant advantage in word learning and recognition is found to emerge during development in some languages, though possibly not in others, revealing that the stronger lexicon–consonant association found in adulthood is learned. Current research is evaluating the relative contribution of the early acquisition of the acoustic/phonetic and lexical properties of the native language in the emergence of this association.
  • Neumann, A., Nolte, I. M., Pappa, I., Ahluwalia, T. S., Pettersson, E., Rodriguez, A., Whitehouse, A., Van Beijsterveldt, C. E. M., Benyamin, B., Hammerschlag, A. R., Helmer, Q., Karhunen, V., Krapohl, E., Lu, Y., Van der Most, P. J., Palviainen, T., St Pourcain, B., Seppälä, I., Suarez, A., Vilor-Tejedor, N., Tiesler, C. M. T., Wang, C., Wills, A., Zhou, A., Alemany, S., Bisgaard, H., Bønnelykke, K., Davies, G. E., Hakulinen, C., Henders, A. K., Hyppönen, E., Stokholm, J., Bartels, M., Hottenga, J.-J., Heinrich, J., Hewitt, J., Keltikangas-Järvinen, L., Korhonen, T., Kaprio, J., Lahti, J., Lahti-Pulkkinen, M., Lehtimäki, T., Middeldorp, C. M., Najman, J. M., Pennell, C., Power, C., Oldehinkel, A. J., Plomin, R., Räikkönen, K., Raitakari, O. T., Rimfeld, K., Sass, L., Snieder, H., Standl, M., Sunyer, J., Williams, G. M., Bakermans-Kranenburg, M. J., Boomsma, D. I., Van IJzendoorn, M. H., Hartman, C. A., & Tiemeier, H. (2022). A genome-wide association study of total child psychiatric problems scores. PLOS ONE, 17(8): e0273116. doi:10.1371/journal.pone.0273116.

    Abstract

    Substantial genetic correlations have been reported across psychiatric disorders and numerous cross-disorder genetic variants have been detected. To identify the genetic variants underlying general psychopathology in childhood, we performed a genome-wide association study using a total psychiatric problem score. We analyzed 6,844,199 common SNPs in 38,418 school-aged children from 20 population-based cohorts participating in the EAGLE consortium. The SNP heritability of total psychiatric problems was 5.4% (SE = 0.01) and two loci reached genome-wide significance: rs10767094 and rs202005905. We also observed an association of SBF2, a gene associated with neuroticism in previous GWAS, with total psychiatric problems. The genetic effects underlying the total score were shared with common psychiatric disorders only (attention-deficit/hyperactivity disorder, anxiety, depression, insomnia) (rG > 0.49), but not with autism or the less common adult disorders (schizophrenia, bipolar disorder, or eating disorders) (rG < 0.01). Importantly, the total psychiatric problem score also showed at least a moderate genetic correlation with intelligence, educational attainment, wellbeing, smoking, and body fat (rG > 0.29). The results suggest that many common genetic variants are associated with childhood psychiatric symptoms and related phenotypes in general instead of with specific symptoms. Further research is needed to establish causality and pleiotropic mechanisms between related traits.

    Additional information

    Full summary results
  • Niarchou, M., Gustavson, D. E., Sathirapongsasuti, J. F., Anglada-Tort, M., Eising, E., Bell, E., McArthur, E., Straub, P., The 23andMe Research Team, McAuley, J. D., Capra, J. A., Ullén, F., Creanza, N., Mosing, M. A., Hinds, D., Davis, L. K., Jacoby, N., & Gordon, R. L. (2022). Genome-wide association study of musical beat synchronization demonstrates high polygenicity. Nature Human Behaviour, 6(9), 1292-1309. doi:10.1038/s41562-022-01359-x.

    Abstract

    Moving in synchrony to the beat is a fundamental component of musicality. Here we conducted a genome-wide association study to identify common genetic variants associated with beat synchronization in 606,825 individuals. Beat synchronization exhibited a highly polygenic architecture, with 69 loci reaching genome-wide significance (P < 5 × 10⁻⁸) and single-nucleotide-polymorphism-based heritability (on the liability scale) of 13%–16%. Heritability was enriched for genes expressed in brain tissues and for fetal and adult brain-specific gene regulatory elements, underscoring the role of central-nervous-system-expressed genes linked to the genetic basis of the trait. We performed validations of the self-report phenotype (through separate experiments) and of the genome-wide association study (polygenic scores for beat synchronization were associated with patients algorithmically classified as musicians in medical records of a separate biobank). Genetic correlations with breathing function, motor function, processing speed and chronotype suggest shared genetic architecture with beat synchronization and provide avenues for new phenotypic and genetic explorations.

    Additional information

    supplementary information
  • Niermann, H. C. M., Tyborowska, A., Cillessen, A. H. N., Van Donkelaar, M. M. J., Lammertink, F., Gunnar, M. R., Franke, B., Figner, B., & Roelofs, K. (2019). The relation between infant freezing and the development of internalizing symptoms in adolescence: A prospective longitudinal study. Developmental Science, 22(3): e12763. doi:10.1111/desc.12763.

    Abstract

    Given the long-lasting detrimental effects of internalizing symptoms, there is great need for detecting early risk markers. One promising marker is freezing behavior. Whereas initial freezing reactions are essential for coping with threat, prolonged freezing has been associated with internalizing psychopathology. However, it remains unknown whether early life alterations in freezing reactions predict changes in internalizing symptoms during adolescent development. In a longitudinal study (N = 116), we tested prospectively whether observed freezing in infancy predicted the development of internalizing symptoms from childhood through late adolescence (until age 17). Both longer and absent infant freezing behavior during a standard challenge (robot-confrontation task) were associated with internalizing symptoms in adolescence. Specifically, absent infant freezing predicted a relative increase in internalizing symptoms consistently across development from relatively low symptom levels in childhood to relatively high levels in late adolescence. Longer infant freezing also predicted a relative increase in internalizing symptoms, but only up until early adolescence. This latter effect was moderated by peer stress and was followed by a later decrease in internalizing symptoms. The findings suggest that early deviations in defensive freezing responses signal risk for internalizing symptoms and may constitute important markers in future stress vulnerability and resilience studies.
  • Nieuwland, M. S., Coopmans, C. W., & Sommers, R. P. (2019). Distinguishing old from new referents during discourse comprehension: Evidence from ERPs and oscillations. Frontiers in Human Neuroscience, 13: 398. doi:10.3389/fnhum.2019.00398.

    Abstract

    In this EEG study, we used pre-registered and exploratory ERP and time-frequency analyses to investigate the resolution of anaphoric and non-anaphoric noun phrases during discourse comprehension. Participants listened to story contexts that described two antecedents, and subsequently read a target sentence with a critical noun phrase that lexically matched one antecedent (‘old’), matched two antecedents (‘ambiguous’), partially matched one antecedent in terms of semantic features (‘partial-match’), or introduced another referent (non-anaphoric, ‘new’). After each target sentence, participants judged whether the noun referred back to an antecedent (i.e., an ‘old/new’ judgment), which was easiest for ambiguous nouns and hardest for partially matching nouns. The noun-elicited N400 ERP component demonstrated initial sensitivity to repetition and semantic overlap, corresponding to repetition and semantic priming effects, respectively. New and partially matching nouns both elicited a subsequent frontal positivity, which suggested that partially matching anaphors may have been processed as new nouns temporarily. ERPs in an even later time window and ERPs time-locked to sentence-final words suggested that new and partially matching nouns had different effects on comprehension, with partially matching nouns incurring additional processing costs up to the end of the sentence. In contrast to the ERP results, the time-frequency results primarily demonstrated sensitivity to noun repetition, and did not differentiate partially matching anaphors from new nouns. In sum, our results show the ERP and time-frequency effects of referent repetition during discourse comprehension, and demonstrate the potentially demanding nature of establishing the anaphoric meaning of a novel noun.
  • Nieuwland, M. S. (2019). Do ‘early’ brain responses reveal word form prediction during language comprehension? A critical review. Neuroscience and Biobehavioral Reviews, 96, 367-400. doi:10.1016/j.neubiorev.2018.11.019.

    Abstract

    Current theories of language comprehension posit that readers and listeners routinely try to predict the meaning but also the visual or sound form of upcoming words. Whereas most neuroimaging studies on word prediction focus on the N400 ERP or its magnetic equivalent, various studies claim that word form prediction manifests itself in ‘early’, pre-N400 brain responses (e.g., ELAN, M100, P130, N1, P2, N200/PMN, N250). Modulations of these components are often taken as evidence that word form prediction impacts early sensory processes (the sensory hypothesis) or, alternatively, the initial stages of word recognition before word meaning is integrated with sentence context (the recognition hypothesis). Here, I comprehensively review studies on sentence- or discourse-level language comprehension that report such effects of prediction on early brain responses. I conclude that the reported evidence for the sensory hypothesis or word recognition hypothesis is weak and inconsistent, and highlight the urgent need for replication of previous findings. I discuss the implications and challenges to current theories of linguistic prediction and suggest avenues for future research.
  • Nievergelt, C. M., Maihofer, A. X., Klengel, T., Atkinson, E. G., Chen, C.-Y., Choi, K. W., Coleman, J. R. I., Dalvie, S., Duncan, L. E., Gelernter, J., Levey, D. F., Logue, M. W., Polimanti, R., Provost, A. C., Ratanatharathorn, A., Stein, M. B., Torres, K., Aiello, A. E., Almli, L. M., Amstadter, A. B., Andersen, S. B., Andreassen, O. A., Arbisi, P. A., Ashley-Koch, A. E., Austin, S. B., Avdibegovic, E., Babić, D., Bækvad-Hansen, M., Baker, D. G., Beckham, J. C., Bierut, L. J., Bisson, J. I., Boks, M. P., Bolger, E. A., Børglum, A. D., Bradley, B., Brashear, M., Breen, G., Bryant, R. A., Bustamante, A. C., Bybjerg-Grauholm, J., Calabrese, J. R., Caldas-de-Almeida, J. M., Dale, A. M., Daly, M. J., Daskalakis, N. P., Deckert, J., Delahanty, D. L., Dennis, M. F., Disner, S. G., Domschke, K., Dzubur-Kulenovic, A., Erbes, C. R., Evans, A., Farrer, L. A., Feeny, N. C., Flory, J. D., Forbes, D., Franz, C. E., Galea, S., Garrett, M. E., Gelaye, B., Geuze, E., Gillespie, C., Uka, A. G., Gordon, S. D., Guffanti, G., Hammamieh, R., Harnal, S., Hauser, M. A., Heath, A. C., Hemmings, S. M. J., Hougaard, D. M., Jakovljevic, M., Jett, M., Johnson, E. O., Jones, I., Jovanovic, T., Qin, X.-J., Junglen, A. G., Karstoft, K.-I., Kaufman, M. L., Kessler, R. C., Khan, A., Kimbrel, N. A., King, A. P., Koen, N., Kranzler, H. R., Kremen, W. S., Lawford, B. R., Lebois, L. A. M., Lewis, C. E., Linnstaedt, S. D., Lori, A., Lugonja, B., Luykx, J. J., Lyons, M. J., Maples-Keller, J., Marmar, C., Martin, A. R., Martin, N. G., Maurer, D., Mavissakalian, M. R., McFarlane, A., McGlinchey, R. E., McLaughlin, K. A., McLean, S. A., McLeay, S., Mehta, D., Milberg, W. P., Miller, M. W., Morey, R. A., Morris, C. P., Mors, O., Mortensen, P. B., Neale, B. M., Nelson, E. C., Nordentoft, M., Norman, S. B., O’Donnell, M., Orcutt, H. K., Panizzon, M. S., Peters, E. S., Peterson, A. L., Peverill, M., Pietrzak, R. H., Polusny, M. A., Rice, J. P., Ripke, S., Risbrough, V. B., Roberts, A. L., Rothbaum, A. O., Rothbaum, B. O., Roy-Byrne, P., Ruggiero, K., Rung, A., Rutten, B. P. F., Saccone, N. L., Sanchez, S. E., Schijven, D., Seedat, S., Seligowski, A. V., Seng, J. S., Sheerin, C. M., Silove, D., Smith, A. K., Smoller, J. W., Sponheim, S. R., Stein, D. J., Stevens, J. S., Sumner, J. A., Teicher, M. H., Thompson, W. K., Trapido, E., Uddin, M., Ursano, R. J., van den Heuvel, L. L., Van Hooff, M., Vermetten, E., Vinkers, C. H., Voisey, J., Wang, Y., Wang, Z., Werge, T., Williams, M. A., Williamson, D. E., Winternitz, S., Wolf, C., Wolf, E. J., Wolff, J. D., Yehuda, R., Young, R. M., Young, K. A., Zhao, H., Zoellner, L. A., Liberzon, I., Ressler, K. J., Haas, M., & Koenen, K. C. (2019). International meta-analysis of PTSD genome-wide association studies identifies sex- and ancestry-specific genetic risk loci. Nature Communications, 10(1): 4558. doi:10.1038/s41467-019-12576-w.

    Abstract

    The risk of posttraumatic stress disorder (PTSD) following trauma is heritable, but robust common variants have yet to be identified. In a multi-ethnic cohort including over 30,000 PTSD cases and 170,000 controls we conduct a genome-wide association study of PTSD. We demonstrate SNP-based heritability estimates of 5–20%, varying by sex. Three genome-wide significant loci are identified, 2 in European and 1 in African-ancestry analyses. Analyses stratified by sex implicate 3 additional loci in men. Along with other novel genes and non-coding RNAs, a Parkinson’s disease gene involved in dopamine regulation, PARK2, is associated with PTSD. Finally, we demonstrate that polygenic risk for PTSD is significantly predictive of re-experiencing symptoms in the Million Veteran Program dataset, although specific loci did not replicate. These results demonstrate the role of genetic variation in the biology of risk for PTSD and highlight the necessity of conducting sex-stratified analyses and expanding GWAS beyond European ancestry populations.

    Additional information

    Supplementary information
  • Nijveld, A., Ten Bosch, L., & Ernestus, M. (2022). The use of exemplars differs between native and non-native listening. Bilingualism: Language and Cognition, 25(5), 841-855. doi:10.1017/S1366728922000116.

    Abstract

    This study compares the role of exemplars in native and non-native listening. Two English identity priming experiments were conducted with native English, Dutch non-native, and Spanish non-native listeners. In Experiment 1, primes and targets were spoken in the same or a different voice. Only the native listeners showed exemplar effects. In Experiment 2, primes and targets had the same or a different degree of vowel reduction. The Dutch, but not the Spanish, listeners were familiar with this reduction pattern from their L1 phonology. In this experiment, exemplar effects only arose for the Spanish listeners. We propose that in these lexical decision experiments the use of exemplars is co-determined by listeners’ available processing resources, which are modulated by familiarity with the variation type from their L1 phonology. The use of exemplars differs between native and non-native listening, suggesting qualitative differences between native and non-native speech comprehension processes.
  • Noble, C., Sala, G., Peter, M., Lingwood, J., Rowland, C. F., Gobet, F., & Pine, J. (2019). The impact of shared book reading on children's language skills: A meta-analysis. Educational Research Review, 28: 100290. doi:10.1016/j.edurev.2019.100290.

    Abstract

    Shared book reading is thought to have a positive impact on young children's language development, with shared reading interventions often run in an attempt to boost children's language skills. However, despite the volume of research in this area, a number of issues remain outstanding. The current meta-analysis explored whether shared reading interventions are equally effective (a) across a range of study designs; (b) across a range of different outcome variables; and (c) for children from different SES groups. It also explored the potentially moderating effects of intervention duration, child age, use of dialogic reading techniques, person delivering the intervention and mode of intervention delivery.

    Our results show that, while there is an effect of shared reading on language development, this effect is smaller than reported in previous meta-analyses (effect size = 0.194, p = .002). They also show that this effect is moderated by the type of control group used and is negligible in studies with active control groups (effect size = 0.028, p = .703). Finally, they show no significant effects of differences in outcome variable (ps ≥ .286), socio-economic status (p = .658), or any of our other potential moderators (ps ≥ .077), and non-significant effects for studies with follow-ups (effect size = 0.139, p = .200). On the basis of these results, we make a number of recommendations for researchers and educators about the design and implementation of future shared reading interventions.

    Additional information

    Supplementary data
  • Nordlinger, R., Garrido Rodriguez, G., & Kidd, E. (2022). Sentence planning and production in Murrinhpatha, an Australian 'free word order' language. Language, 98(2), 187-220. Retrieved from https://muse.jhu.edu/article/857152.

    Abstract

    Psycholinguistic theories are based on a very small set of unrepresentative languages, so it is as yet unclear how typological variation shapes mechanisms supporting language use. In this article we report the first on-line experimental study of sentence production in an Australian free word order language: Murrinhpatha. Forty-six adult native speakers of Murrinhpatha described a series of unrelated transitive scenes that were manipulated for humanness (±human) in the agent and patient roles while their eye movements were recorded. Speakers produced a large range of word orders, consistent with the language having flexible word order, with variation significantly influenced by agent and patient humanness. An analysis of eye movements showed that Murrinhpatha speakers' first fixation on an event character did not alone determine word order; rather, early in speech planning participants rapidly encoded both event characters and their relationship to each other. That is, they engaged in relational encoding, laying down a very early conceptual foundation for the word order they eventually produced. These results support a weakly hierarchical account of sentence production and show that speakers of a free word order language encode the relationships between event participants during earlier stages of sentence planning than is typically observed for languages with fixed word orders.
  • Norris, D., & Cutler, A. (1988). Speech recognition in French and English. MRC News, 39, 30-31.
  • Norris, D., & Cutler, A. (1988). The relative accessibility of phonemes and syllables. Perception and Psychophysics, 43, 541-550. Retrieved from http://www.psychonomic.org/search/view.cgi?id=8530.

    Abstract

    Previous research comparing detection times for syllables and for phonemes has consistently found that syllables are responded to faster than phonemes. This finding poses theoretical problems for strictly hierarchical models of speech recognition, in which smaller units should be able to be identified faster than larger units. However, inspection of the characteristics of previous experiments’ stimuli reveals that subjects have been able to respond to syllables on the basis of only a partial analysis of the stimulus. In the present experiment, five groups of subjects listened to identical stimulus material. Phoneme and syllable monitoring under standard conditions was compared with monitoring under conditions in which near matches of target and stimulus occurred on no-response trials. In the latter case, when subjects were forced to analyze each stimulus fully, phonemes were detected faster than syllables.
  • Nuthmann, A., De Groot, F., Huettig, F., & Olivers, C. L. N. (2019). Extrafoveal attentional capture by object semantics. PLoS One, 14(5): e0217051. doi:10.1371/journal.pone.0217051.

    Abstract

    There is ongoing debate on whether object meaning can be processed outside foveal vision, making semantics available for attentional guidance. Much of the debate has centred on whether objects that do not fit within an overall scene draw attention, in complex displays that are often difficult to control. Here, we revisited the question by reanalysing data from three experiments that used displays consisting of standalone objects from a carefully controlled stimulus set. Observers searched for a target object, as per auditory instruction. On the critical trials, the displays contained no target but objects that were semantically related to the target, visually related, or unrelated. Analyses using (generalized) linear mixed-effects models showed that, although visually related objects attracted most attention, semantically related objects were also fixated earlier in time than unrelated objects. Moreover, semantic matches affected the very first saccade in the display. The amplitudes of saccades that first entered semantically related objects were larger than 5° on average, confirming that object semantics is available outside foveal vision. Finally, there was no semantic capture of attention for the same objects when observers did not actively look for the target, confirming that it was not stimulus-driven. We discuss the implications for existing models of visual cognition.
  • Ohlerth, A.-K., Bastiaanse, R., Nickels, L., Neu, B., Zhang, W., Ille, S., Sollmann, N., & Krieg, S. M. (2022). Dual-task nTMS mapping to visualize the cortico-subcortical language network and capture postoperative outcome—A patient series in neurosurgery. Frontiers in Oncology, 11: 788122. doi:10.3389/fonc.2021.788122.

    Abstract

    Background: Perioperative assessment of language function in brain tumor patients commonly relies on administration of object naming during stimulation mapping. Ample research, however, points to the benefit of adding verb tasks to the testing paradigm in order to delineate and preserve postoperative language function more comprehensively. This research uses a case series approach to explore the feasibility and added value of a dual-task protocol that includes both a noun task (object naming) and a verb task (action naming) in perioperative delineation of language functions.

    Materials and Methods: Seven neurosurgical cases underwent perioperative language assessment with both object and action naming. This entailed preoperative baseline testing, preoperative stimulation mapping with navigated Transcranial Magnetic Stimulation (nTMS) with subsequent white matter visualization, intraoperative mapping with Direct Electrical Stimulation (DES) in 4 cases, and postoperative imaging and examination of language change.

    Results: We observed a divergent pattern of language organization and decline between cases who showed lesions close to the delineated language network and hence underwent DES mapping, and those that did not. The latter displayed no new impairment postoperatively consistent with an unharmed network for the neural circuits of both object and action naming. For the cases who underwent DES, on the other hand, a higher sensitivity was found for action naming over object naming. Firstly, action naming preferentially predicted the overall language state compared to aphasia batteries. Secondly, it more accurately predicted intraoperative positive language areas as revealed by DES. Thirdly, double dissociations between postoperatively unimpaired object naming and impaired action naming and vice versa indicate segregated skills and neural representation for noun versus verb processing, especially in the ventral stream. Overlaying postoperative imaging with object and action naming networks revealed that dual-task nTMS mapping can explain the drop in performance in those cases where the network appeared in proximity to the resection cavity.

    Conclusion: Using a dual-task protocol for visualization of cortical and subcortical language areas through nTMS mapping proved to be able to capture network-to-deficit relations in our case series. Ultimately, adding action naming to clinical nTMS and DES mapping may help prevent postoperative deficits of this seemingly segregated skill.

    Additional information

    table 1 and table 2
  • Okbay, A., Wu, Y., Wang, N., Jayashankar, H., Bennett, M., Nehzati, S. M., Sidorenko, J., Kweon, H., Goldman, G., Gjorgjieva, T., Jiang, Y., Hicks, B., Tian, C., Hinds, D. A., Ahlskog, R., Magnusson, P. K. E., Oskarsson, S., Hayward, C., Campbell, A., Porteous, D. J., Freese, J., Herd, P., 23andMe Research Team, Social Science Genetic Association Consortium, Watson, C., Jala, J., Conley, D., Koellinger, P. D., Johannesson, M., Laibson, D., Meyer, M. N., Lee, J. J., Kong, A., Yengo, L., Cesarini, D., Turley, P., Visscher, P. M., Beauchamp, J. P., Benjamin, D. J., & Young, A. I. (2022). Polygenic prediction of educational attainment within and between families from genome-wide association analyses in 3 million individuals. Nature Genetics, 54, 437-449. doi:10.1038/s41588-022-01016-z.

    Abstract

    We conduct a genome-wide association study (GWAS) of educational attainment (EA) in a sample of ~3 million individuals and identify 3,952 approximately uncorrelated genome-wide-significant single-nucleotide polymorphisms (SNPs). A genome-wide polygenic predictor, or polygenic index (PGI), explains 12–16% of EA variance and contributes to risk prediction for ten diseases. Direct effects (i.e., controlling for parental PGIs) explain roughly half the PGI’s magnitude of association with EA and other phenotypes. The correlation between mate-pair PGIs is far too large to be consistent with phenotypic assortment alone, implying additional assortment on PGI-associated factors. In an additional GWAS of dominance deviations from the additive model, we identify no genome-wide-significant SNPs, and a separate X-chromosome additive GWAS identifies 57.

    Additional information

    supplementary information
  • O’Meara, C., Kung, S. S., & Majid, A. (2019). The challenge of olfactory ideophones: Reconsidering ineffability from the Totonac-Tepehua perspective. International Journal of American Linguistics, 85(2), 173-212. doi:10.1086/701801.

    Abstract

    Olfactory impressions are said to be ineffable, but little systematic exploration has been done to substantiate this. We explored olfactory language in Huehuetla Tepehua—a Totonac-Tepehua language spoken in Hidalgo, Mexico—which has a large inventory of ideophones, words with sound-symbolic properties used to describe perceptuomotor experiences. A multi-method study found Huehuetla Tepehua has 45 olfactory ideophones, illustrating intriguing sound-symbolic alternation patterns. Elaboration in the olfactory domain is not unique to this language; related Totonac-Tepehua languages also have impressive smell lexicons. Comparison across these languages shows olfactory and gustatory terms overlap in interesting ways, mirroring the physiology of smelling and tasting. However, although cognate taste terms are formally similar, olfactory terms are less so. We suggest the relative instability of smell vocabulary in comparison with those of taste likely results from the more varied olfactory experiences caused by the mutability of smells in different environments.
  • O’Neill, A. C., Uzbas, F., Antognolli, G., Merino, F., Draganova, K., Jäck, A., Zhang, S., Pedini, G., Schessner, J. P., Cramer, K., Schepers, A., Metzger, F., Esgleas, M., Smialowski, P., Guerrini, R., Falk, S., Feederle, R., Freytag, S., Wang, Z., Bahlo, M., Jungmann, R., Bagni, C., Borner, G. H. H., Robertson, S. P., Hauck, S. M., & Götz, M. (2022). Spatial centrosome proteome of human neural cells uncovers disease-relevant heterogeneity. Science, 376(6599): eabf9088. doi:10.1126/science.abf9088.

    Abstract

    The centrosome provides an intracellular anchor for the cytoskeleton, regulating cell division, cell migration, and cilia formation. We used spatial proteomics to elucidate protein interaction networks at the centrosome of human induced pluripotent stem cell–derived neural stem cells (NSCs) and neurons. Centrosome-associated proteins were largely cell type–specific, with protein hubs involved in RNA dynamics. Analysis of neurodevelopmental disease cohorts identified a significant overrepresentation of NSC centrosome proteins with variants in patients with periventricular heterotopia (PH). Expressing the PH-associated mutant pre-mRNA-processing factor 6 (PRPF6) reproduced the periventricular misplacement in the developing mouse brain, highlighting missplicing of transcripts of a microtubule-associated kinase with centrosomal location as essential for the phenotype. Collectively, cell type–specific centrosome interactomes explain how genetic variants in ubiquitous proteins may convey brain-specific phenotypes.
  • Onnis, L., Lim, A., Cheung, S., & Huettig, F. (2022). Is the mind inherently predicting? Exploring forward and backward looking in language processing. Cognitive Science, 46(10): e13201. doi:10.1111/cogs.13201.

    Abstract

    Prediction is one characteristic of the human mind. But what does it mean to say the mind is a ‘prediction machine’ and inherently forward looking, as is frequently claimed? In natural languages, many contexts are not easily predictable in a forward fashion. In English, for example, many frequent verbs do not carry unique meaning on their own, but instead rely on another word or words that follow them to become meaningful. Upon reading ‘take a’ the processor often cannot easily predict ‘walk’ as the next word. But the system can ‘look back’ and integrate ‘walk’ more easily when it follows ‘take a’ (e.g., as opposed to ‘make|get|have a walk’). In the present paper we provide further evidence for the importance of both forward and backward looking in language processing. In two self-paced reading tasks and an eye-tracking reading task, we found evidence that adult English native speakers’ sensitivity to word forward and backward conditional probability significantly explained variance in reading times over and above psycholinguistic predictors of reading latencies. We conclude that both forward and backward looking (prediction and integration) appear to be important characteristics of language processing. Our results thus suggest that it makes just as much sense to call the mind an ‘integration machine’ that is inherently backward looking.

    Additional information

    Open Data and Open Materials
  • Ortega, G., Schiefner, A., & Ozyurek, A. (2019). Hearing non-signers use their gestures to predict iconic form-meaning mappings at first exposure to sign. Cognition, 191: 103996. doi:10.1016/j.cognition.2019.06.008.

    Abstract

    The sign languages of deaf communities and the gestures produced by hearing people are communicative systems that exploit the manual-visual modality as means of expression. Despite their striking differences, they share the property of iconicity, understood as the direct relationship between a symbol and its referent. Here we investigate whether non-signing hearing adults exploit their implicit knowledge of gestures to bootstrap accurate understanding of the meaning of iconic signs they have never seen before. In Study 1 we show that for some concepts gestures exhibit systematic forms across participants, and share different degrees of form overlap with the signs for the same concepts (full, partial, and no overlap). In Study 2 we found that signs with stronger resemblance to gestures are more accurately guessed and are assigned higher iconicity ratings by non-signers than signs with low overlap. In addition, when more people produced a systematic gesture resembling a sign, they assigned higher iconicity ratings to that sign. Furthermore, participants had a bias to assume that signs represent actions and not objects. The similarities between some signs and gestures could be explained by deaf signers and hearing gesturers sharing a conceptual substrate that is rooted in our embodied experiences with the world. The finding that gestural knowledge can ease the interpretation of the meaning of novel signs and predicts iconicity ratings is in line with embodied accounts of cognition and the influence of prior knowledge to acquire new schemas. Through these mechanisms we propose that iconic gestures that overlap in form with signs may serve as some type of ‘manual cognates’ that help non-signing adults to break into a new language at first exposure.

    Additional information

    Supplementary Materials
  • Ostarek, M., Joosen, D., Ishag, A., De Nijs, M., & Huettig, F. (2019). Are visual processes causally involved in “perceptual simulation” effects in the sentence-picture verification task? Cognition, 182, 84-94. doi:10.1016/j.cognition.2018.08.017.

    Abstract

    Many studies have shown that sentences implying an object to have a certain shape produce a robust reaction time advantage for shape-matching pictures in the sentence-picture verification task. Typically, this finding has been interpreted as evidence for perceptual simulation, i.e., that access to implicit shape information involves the activation of modality-specific visual processes. It follows from this proposal that disrupting visual processing during sentence comprehension should interfere with perceptual simulation and obliterate the match effect. Here we directly test this hypothesis. Participants listened to sentences while seeing either visual noise that was previously shown to strongly interfere with basic visual processing or a blank screen. Experiments 1 and 2 replicated the match effect but crucially visual noise did not modulate it. When an interference technique was used that targeted high-level semantic processing (Experiment 3) however the match effect vanished. Visual noise specifically targeting high-level visual processes (Experiment 4) only had a minimal effect on the match effect. We conclude that the shape match effect in the sentence-picture verification paradigm is unlikely to rely on perceptual simulation.
  • Ostarek, M., Van Paridon, J., & Montero-Melis, G. (2019). Sighted people’s language is not helpful for blind individuals’ acquisition of typical animal colors. Proceedings of the National Academy of Sciences of the United States of America, 116(44), 21972-21973. doi:10.1073/pnas.1912302116.
  • Ostarek, M., & Huettig, F. (2019). Six challenges for embodiment research. Current Directions in Psychological Science, 28(6), 593-599. doi:10.1177/0963721419866441.

    Abstract

    20 years after Barsalou's seminal perceptual symbols paper (Barsalou, 1999), embodied cognition, the notion that cognition involves simulations of sensory, motor, or affective states, has moved in status from an outlandish proposal advanced by a fringe movement in psychology to a mainstream position adopted by large numbers of researchers in the psychological and cognitive (neuro)sciences. While it has generated highly productive work in the cognitive sciences as a whole, it has had a particularly strong impact on research into language comprehension. The view of a mental lexicon based on symbolic word representations, which are arbitrarily linked to sensory aspects of their referents, for example, had been generally accepted since the cognitive revolution in the 1950s. This has radically changed. Given the current status of embodiment as a main theory of cognition, it is somewhat surprising that a close look at the state of affairs in the literature reveals that the debate about the nature of the processes involved in language comprehension is far from settled and key questions remain unanswered. We present several suggestions for a productive way forward.
  • Oswald, J. N., Van Cise, A. M., Dassow, A., Elliott, T., Johnson, M. T., Ravignani, A., & Podos, J. (2022). A collection of best practices for the collection and analysis of bioacoustic data. Applied Sciences, 12(23): 12046. doi:10.3390/app122312046.

    Abstract

    The field of bioacoustics is rapidly developing and characterized by diverse methodologies, approaches and aims. For instance, bioacoustics encompasses studies on the perception of pure tones in meticulously controlled laboratory settings, documentation of species’ presence and activities using recordings from the field, and analyses of circadian calling patterns in animal choruses. Newcomers to the field are confronted with a vast and fragmented literature, and a lack of accessible reference papers or textbooks. In this paper we contribute towards filling this gap. Instead of a classical list of “dos” and “don’ts”, we review some key papers which, we believe, embody best practices in several bioacoustic subfields. In the first three case studies, we discuss how bioacoustics can help identify the ‘who’, ‘where’ and ‘how many’ of animals within a given ecosystem. Specifically, we review cases in which bioacoustic methods have been applied with success to draw inferences regarding species identification, population structure, and biodiversity. In the fourth and fifth case studies, we highlight how structural properties in signal evolution can emerge via ecological constraints or cultural transmission. Finally, in a sixth example, we discuss acoustic methods that have been used to infer predator–prey dynamics in cases where direct observation was not feasible. Across all these examples, we emphasize the importance of appropriate recording parameters and experimental design. We conclude by highlighting common best practices across studies as well as caveats about our own overview. We hope our efforts spur a more general effort in standardizing best practices across the subareas we’ve highlighted in order to increase compatibility among bioacoustic studies and inspire cross-pollination across the discipline.
  • Owoyele, B., Trujillo, J. P., De Melo, G., & Pouw, W. (2022). Masked-Piper: Masking personal identities in visual recordings while preserving multimodal information. SoftwareX, 20: 101236. doi:10.1016/j.softx.2022.101236.

    Abstract

    In this increasingly data-rich world, visual recordings of human behavior are often unable to be shared due to concerns about privacy. Consequently, data sharing in fields such as behavioral science, multimodal communication, and human movement research is often limited. In addition, in legal and other non-scientific contexts, privacy-related concerns may preclude the sharing of video recordings and thus remove the rich multimodal context that humans recruit to communicate. Minimizing the risk of identity exposure while preserving critical behavioral information would maximize utility of public resources (e.g., research grants) and time invested in audio–visual research. Here we present an open-source computer vision tool that masks the identities of humans while maintaining rich information about communicative body movements. Furthermore, this masking tool can be easily applied to many videos, leveraging computational tools to augment the reproducibility and accessibility of behavioral research. The tool is designed for researchers and practitioners engaged in kinematic and affective research. Application areas include teaching/education, communication and human movement research, CCTV, and legal contexts.

    Additional information

    setup and usage
  • Ozker, M., Doyle, W., Devinsky, O., & Flinker, A. (2022). A cortical network processes auditory error signals during human speech production to maintain fluency. PLoS Biology, 20: e3001493. doi:10.1371/journal.pbio.3001493.

    Abstract

    Hearing one’s own voice is critical for fluent speech production, as it allows for the detection and correction of vocalization errors in real time. This behavior, known as the auditory feedback control of speech, is impaired in various neurological disorders ranging from stuttering to aphasia; however, the underlying neural mechanisms are still poorly understood. Computational models of speech motor control suggest that, during speech production, the brain uses an efference copy of the motor command to generate an internal estimate of the speech output. When actual feedback differs from this internal estimate, an error signal is generated to correct the internal estimate and update the necessary motor commands to produce intended speech. We were able to localize the auditory error signal using electrocorticographic recordings from neurosurgical participants during a delayed auditory feedback (DAF) paradigm. In this task, participants heard their voice with a time delay as they produced words and sentences (similar to an echo on a conference call), which is well known to disrupt fluency by causing slow and stutter-like speech in humans. We observed a significant response enhancement in auditory cortex that scaled with the duration of feedback delay, indicating an auditory speech error signal. Immediately following auditory cortex, dorsal precentral gyrus (dPreCG), a region that has not been implicated in auditory feedback processing before, exhibited a markedly similar response enhancement, suggesting a tight coupling between the two regions. Critically, response enhancement in dPreCG occurred only during articulation of long utterances due to a continuous mismatch between produced speech and reafferent feedback. These results suggest that dPreCG plays an essential role in processing auditory error signals during speech production to maintain fluency.

    Additional information

    data and code
  • Park, B.-y., Larivière, S., Rodríguez-Cruces, R., Royer, J., Tavakol, S., Wang, Y., Caciagli, L., Caligiuri, M. E., Gambardella, A., Concha, L., Keller, S. S., Cendes, F., Alvim, M. K. M., Yasuda, C., Bonilha, L., Gleichgerrcht, E., Focke, N. K., Kreilkamp, B. A. K., Domin, M., Von Podewils, F., Langner, S., Rummel, C., Rebsamen, M., Wiest, R., Martin, P., Kotikalapudi, R., Bender, B., O’Brien, T. J., Law, M., Sinclair, B., Vivash, L., Desmond, P. M., Malpas, C. B., Lui, E., Alhusaini, S., Doherty, C. P., Cavalleri, G. L., Delanty, N., Kälviäinen, R., Jackson, G. D., Kowalczyk, M., Mascalchi, M., Semmelroch, M., Thomas, R. H., Soltanian-Zadeh, H., Davoodi-Bojd, E., Zhang, J., Lenge, M., Guerrini, R., Bartolini, E., Hamandi, K., Foley, S., Weber, B., Depondt, C., Absil, J., Carr, S. J. A., Abela, E., Richardson, M. P., Devinsky, O., Severino, M., Striano, P., Parodi, C., Tortora, D., Hatton, S. N., Vos, S. B., Duncan, J. S., Galovic, M., Whelan, C. D., Bargalló, N., Pariente, J., Conde, E., Vaudano, A. E., Tondelli, M., Meletti, S., Kong, X., Francks, C., Fisher, S. E., Caldairou, B., Ryten, M., Labate, A., Sisodiya, S. M., Thompson, P. M., McDonald, C. R., Bernasconi, A., Bernasconi, N., & Bernhardt, B. C. (2022). Topographic divergence of atypical cortical asymmetry and atrophy patterns in temporal lobe epilepsy. Brain, 145(4), 1285-1298. doi:10.1093/brain/awab417.

    Abstract

    Temporal lobe epilepsy (TLE), a common drug-resistant epilepsy in adults, is primarily a limbic network disorder associated with predominant unilateral hippocampal pathology. Structural MRI has provided an in vivo window into whole-brain grey matter structural alterations in TLE relative to controls, by either mapping (i) atypical inter-hemispheric asymmetry or (ii) regional atrophy. However, similarities and differences of both atypical asymmetry and regional atrophy measures have not been systematically investigated.

    Here, we addressed this gap using the multi-site ENIGMA-Epilepsy dataset comprising MRI brain morphological measures in 732 TLE patients and 1,418 healthy controls. We compared spatial distributions of grey matter asymmetry and atrophy in TLE, contextualized their topographies relative to spatial gradients in cortical microstructure and functional connectivity calculated using 207 healthy controls obtained from the Human Connectome Project and an independent dataset containing 23 TLE patients and 53 healthy controls, and examined clinical associations using machine learning.

    We identified a marked divergence in the spatial distribution of atypical inter-hemispheric asymmetry and regional atrophy mapping. The former revealed a temporo-limbic disease signature while the latter showed diffuse and bilateral patterns. Our findings were robust across individual sites and patients. Cortical atrophy was significantly correlated with disease duration and age at seizure onset, while degrees of asymmetry did not show a significant relationship to these clinical variables.

    Our findings highlight that the mapping of atypical inter-hemispheric asymmetry and regional atrophy tap into two complementary aspects of TLE-related pathology, with the former revealing primary substrates in ipsilateral limbic circuits and the latter capturing bilateral disease effects. These findings refine our notion of the neuropathology of TLE and may inform future discovery and validation of complementary MRI biomarkers in TLE.

    Additional information

    awab417_supplementary_data.pdf
  • Pearson, L., & Pouw, W. (2022). Gesture–vocal coupling in Karnatak music performance: A neuro–bodily distributed aesthetic entanglement. Annals of the New York Academy of Sciences, 1515(1), 219-236. doi:10.1111/nyas.14806.

    Abstract

    In many musical styles, vocalists manually gesture while they sing. Coupling between gesture kinematics and vocalization has been examined in speech contexts, but it is an open question how these couple in music making. We examine this in a corpus of South Indian, Karnatak vocal music that includes motion-capture data. Through peak magnitude analysis (linear mixed regression) and continuous time-series analyses (generalized additive modeling), we assessed whether vocal trajectories around peaks in vertical velocity, speed, or acceleration were coupling with changes in vocal acoustics (namely, F0 and amplitude). Kinematic coupling was stronger for F0 change versus amplitude, pointing to F0's musical significance. Acceleration was the most predictive for F0 change and had the most reliable magnitude coupling, showing a one-third power relation. That acceleration, rather than other kinematics, is maximally predictive for vocalization is interesting because acceleration entails force transfers onto the body. As a theoretical contribution, we argue that gesturing in musical contexts should be understood in relation to the physical connections between gesturing and vocal production that are brought into harmony with the vocalists’ (enculturated) performance goals. Gesture–vocal coupling should, therefore, be viewed as a neuro–bodily distributed aesthetic entanglement.

    Additional information

    tables
  • Peeters, D., Vanlangendonck, F., Rüschemeyer, S.-A., & Dijkstra, T. (2019). Activation of the language control network in bilingual visual word recognition. Cortex, 111, 63-73. doi:10.1016/j.cortex.2018.10.012.

    Abstract

    Research into bilingual language production has identified a language control network that subserves control operations when bilinguals produce speech. Here we explore which brain areas are recruited for control purposes in bilingual language comprehension. In two experimental fMRI sessions, Dutch-English unbalanced bilinguals read words that differed in cross-linguistic form and meaning overlap across their two languages. The need for control operations was further manipulated by varying stimulus list composition across the two experimental sessions. We observed activation of the language control network in bilingual language comprehension as a function of both cross-linguistic form and meaning overlap and stimulus list composition. These findings suggest that the language control network is shared across bilingual language production and comprehension. We argue that activation of the language control network in language comprehension allows bilinguals to quickly and efficiently grasp the context-relevant meaning of words.

    Additional information

    1-s2.0-S0010945218303459-mmc1.docx
  • Peeters, D. (2019). Virtual reality: A game-changing method for the language sciences. Psychonomic Bulletin & Review, 26(3), 894-900. doi:10.3758/s13423-019-01571-3.

    Abstract

    This paper introduces virtual reality as an experimental method for the language sciences and provides a review of recent studies using the method to answer fundamental, psycholinguistic research questions. It is argued that virtual reality demonstrates that ecological validity and experimental control should not be conceived of as two extremes on a continuum, but rather as two orthogonal factors. Benefits of using virtual reality as an experimental method include that in a virtual environment, as in the real world, there is no artificial spatial divide between participant and stimulus. Moreover, virtual reality experiments do not necessarily have to include a repetitive trial structure or an unnatural experimental task. Virtual agents outperform experimental confederates in terms of the consistency and replicability of their behaviour, allowing for reproducible science across participants and research labs. The main promise of virtual reality as a tool for the experimental language sciences, however, is that it shifts theoretical focus towards the interplay between different modalities (e.g., speech, gesture, eye gaze, facial expressions) in dynamic and communicative real-world environments, complementing studies that focus on one modality (e.g. speech) in isolation.
  • Pereira Soares, S. M., Kupisch, T., & Rothman, J. (2022). Testing potential transfer effects in heritage and adult L2 bilinguals acquiring a mini grammar as an additional language: An ERP approach. Brain Sciences, 12: 669. doi:10.3390/brainsci12050669.

    Abstract

    Models on L3/Ln acquisition differ with respect to how they envisage degree (holistic vs. selective transfer of the L1, L2 or both) and/or timing (initial stages vs. development) of how the influence of source languages unfolds. This study uses EEG/ERPs to examine these models, bringing together two types of bilinguals: heritage speakers (HSs) (Italian-German, n = 15) compared to adult L2 learners (L1 German, L2 English, n = 28) learning L3/Ln Latin. Participants were trained on a selected Latin lexicon over two sessions and, afterward, on two grammatical properties: case (similar between German and Latin) and adjective–noun order (similar between Italian and Latin). Neurophysiological findings show an N200/N400 deflection for the HSs in case morphology and a P600 effect for the German L2 group in adjectival position. None of the current L3/Ln models predict the observed results, which questions the appropriateness of this methodology. Nevertheless, the results are illustrative of differences in how HSs and L2 learners approach the very initial stages of additional language learning, the implications of which are discussed.
  • Pereira Soares, S. M., Prystauka, Y., DeLuca, V., & Rothman, J. (2022). Type of bilingualism conditions individual differences in the oscillatory dynamics of inhibitory control. Frontiers in Human Neuroscience, 16: 910910. doi:10.3389/fnhum.2022.910910.

    Abstract

    The present study uses EEG time-frequency representations (TFRs) with a Flanker task to investigate if and how individual differences in bilingual language experience modulate neurocognitive outcomes (oscillatory dynamics) in two bilingual group types: late bilinguals (L2 learners) and early bilinguals (heritage speakers—HSs). TFRs were computed for both incongruent and congruent trials. The difference between the two (Flanker effect vis-à-vis cognitive interference) was then (1) compared between the HSs and the L2 learners, (2) modeled as a function of individual differences with bilingual experience within each group separately and (3) probed for its potential (a)symmetry between brain and behavioral data. We found no differences at the behavioral and neural levels for the between-groups comparisons. However, oscillatory dynamics (mainly theta increase and alpha suppression) of inhibition and cognitive control were found to be modulated by individual differences in bilingual language experience, albeit distinctly within each bilingual group. While the results indicate adaptations toward differential brain recruitment in line with bilingual language experience variation overall, this does not manifest uniformly. Rather, earlier versus later onset to bilingualism—the bilingual type—seems to constitute an independent qualifier to how individual differences play out.

    Additional information

    supplementary material
  • Perfors, A., & Kidd, E. (2022). The role of stimulus‐specific perceptual fluency in statistical learning. Cognitive Science, 46(2): e13100. doi:10.1111/cogs.13100.

    Abstract

    Humans have the ability to learn surprisingly complicated statistical information in a variety of modalities and situations, often based on relatively little input. These statistical learning (SL) skills appear to underlie many kinds of learning, but despite their ubiquity, we still do not fully understand precisely what SL is and what individual differences on SL tasks reflect. Here, we present experimental work suggesting that at least some individual differences arise from stimulus-specific variation in perceptual fluency: the ability to rapidly or efficiently code and remember the stimuli that SL occurs over. Experiment 1 demonstrates that participants show improved SL when the stimuli are simple and familiar; Experiment 2 shows that this improvement is not evident for simple but unfamiliar stimuli; and Experiment 3 shows that for the same stimuli (Chinese characters), SL is higher for people who are familiar with them (Chinese speakers) than those who are not (English speakers matched on age and education level). Overall, our findings indicate that performance on a standard SL task varies substantially within the same (visual) modality as a function of whether the stimuli involved are familiar or not, independent of stimulus complexity. Moreover, test–retest correlations of performance in an SL task using stimuli of the same level of familiarity (but distinct items) are stronger than correlations across the same task with stimuli of different levels of familiarity. Finally, we demonstrate that SL performance is predicted by an independent measure of stimulus-specific perceptual fluency that contains no SL component at all. Our results suggest that a key component of SL performance may be related to stimulus-specific processing and familiarity.
  • Peter, M. S., & Rowland, C. F. (2019). Aligning developmental and processing accounts of implicit and statistical learning. Topics in Cognitive Science, 11, 555-572. doi:10.1111/tops.12396.

    Abstract

    A long‐standing question in child language research concerns how children achieve mature syntactic knowledge in the face of a complex linguistic environment. A widely accepted view is that this process involves extracting distributional regularities from the environment in a manner that is incidental and happens, for the most part, without the learner's awareness. In this way, the debate speaks to two associated but separate literatures in language acquisition: statistical learning and implicit learning. Both fields have explored this issue in some depth but, at present, neither the results from the infant studies used by the statistical learning literature nor the artificial grammar learning tasks studies from the implicit learning literature can be used to fully explain how children's syntax becomes adult‐like. In this work, we consider an alternative explanation—that children use error‐based learning to become mature syntax users. We discuss this proposal in the light of the behavioral findings from structural priming studies and the computational findings from Chang, Dell, and Bock's (2006) dual‐path model, which incorporates properties from both statistical and implicit learning, and offers an explanation for syntax learning and structural priming using a common error‐based learning mechanism. We then turn our attention to future directions for the field, here suggesting how structural priming might inform the statistical learning and implicit learning literature on the nature of the learning mechanism.
  • Peter, M. S., Durrant, S., Jessop, A., Bidgood, A., Pine, J. M., & Rowland, C. F. (2019). Does speed of processing or vocabulary size predict later language growth in toddlers? Cognitive Psychology, 115: 101238. doi:10.1016/j.cogpsych.2019.101238.

    Abstract

    It is becoming increasingly clear that the way that children acquire cognitive representations depends critically on how their processing system is developing. In particular, recent studies suggest that individual differences in language processing speed play an important role in explaining the speed with which children acquire language. Inconsistencies across studies, however, mean that it is not clear whether this relationship is causal or correlational, whether it is present right across development, or whether it extends beyond word learning to affect other aspects of language learning, like syntax acquisition. To address these issues, the current study used the looking-while-listening paradigm devised by Fernald, Swingley, and Pinto (2001) to test the speed with which a large longitudinal cohort of children (the Language 0–5 Project) processed language at 19, 25, and 31 months of age, and took multiple measures of vocabulary (UKCDI, Lincoln CDI, CDI-III) and syntax (Lincoln CDI) between 8 and 37 months of age. Processing speed correlated with vocabulary size - though this relationship changed over time, and was observed only when there was variation in how well the items used in the looking-while-listening task were known. Fast processing speed was a positive predictor of subsequent vocabulary growth, but only for children with smaller vocabularies. Faster processing speed did, however, predict faster syntactic growth across the whole sample, even when controlling for concurrent vocabulary. The results indicate a relatively direct relationship between processing speed and syntactic development, but point to a more complex interaction between processing speed, vocabulary size and subsequent vocabulary growth.
  • Petras, K., Ten Oever, S., Jacobs, C., & Goffaux, V. (2019). Coarse-to-fine information integration in human vision. NeuroImage, 186, 103-112. doi:10.1016/j.neuroimage.2018.10.086.

    Abstract

    Coarse-to-fine theories of vision propose that the coarse information carried by the low spatial frequencies (LSF) of visual input guides the integration of finer, high spatial frequency (HSF) detail. Whether and how LSF modulates HSF processing in naturalistic broad-band stimuli is still unclear. Here we used multivariate decoding of EEG signals to separate the respective contribution of LSF and HSF to the neural response evoked by broad-band images. Participants viewed images of human faces, monkey faces and phase-scrambled versions that were either broad-band or filtered to contain LSF or HSF. We trained classifiers on EEG scalp-patterns evoked by filtered scrambled stimuli and evaluated the derived models on broad-band scrambled and intact trials. We found reduced HSF contribution when LSF was informative towards image content, indicating that coarse information does guide the processing of fine detail, in line with coarse-to-fine theories. We discuss the potential cortical mechanisms underlying such coarse-to-fine feedback.

    Additional information

    Supplementary figures
  • Pijls, F., Daelemans, W., & Kempen, G. (1987). Artificial intelligence tools for grammar and spelling instruction. Instructional Science, 16(4), 319-336. doi:10.1007/BF00117750.

    Abstract

    In The Netherlands, grammar teaching is an especially important subject in the curriculum of children aged 10-15 for several reasons. However, in spite of all attention and time invested, the results are poor. This article describes the problems and our attempt to overcome them by developing an intelligent computational instructional environment consisting of: a linguistic expert system, containing a module representing grammar and spelling rules and a number of modules to manipulate these rules; a didactic module; and a student interface with special facilities for grammar and spelling. Three prototypes of the functionality are discussed: BOUWSTEEN and COGO, which are programs for constructing and analyzing Dutch sentences; and TDTDT, a program for the conjugation of Dutch verbs.
  • Pijls, F., & Kempen, G. (1987). Kennistechnologische leermiddelen in het grammatica- en spellingonderwijs. Nederlands Tijdschrift voor de Psychologie, 42, 354-363.
  • Poort, E. D., & Rodd, J. M. (2022). Cross-lingual priming of cognates and interlingual homographs from L2 to L1. Glossa Psycholinguistics, 1(1): 11. doi:10.5070/G601147.

    Abstract

    Many word forms exist in multiple languages, and can have either the same meaning (cognates) or a different meaning (interlingual homographs). Previous experiments have shown that processing of interlingual homographs in a bilingual’s second language is slowed down by recent experience with these words in the bilingual’s native language, while processing of cognates can be speeded up (Poort et al., 2016; Poort & Rodd, 2019a). The current experiment replicated Poort and Rodd’s (2019a) Experiment 2 but switched the direction of priming: Dutch–English bilinguals (n = 106) made Dutch semantic relatedness judgements to probes related to cognates (n = 50), interlingual homographs (n = 50) and translation equivalents (n = 50) they had seen 15 minutes previously embedded in English sentences. The current experiment is the first to show that a single encounter with an interlingual homograph in one’s second language can also affect subsequent processing in one’s native language. Cross-lingual priming did not affect the cognates. The experiment also extended Poort and Rodd’s (2019a) finding of a large interlingual homograph inhibition effect in a semantic relatedness task in the participants’ L2 to their L1, but again found no evidence for a cognate facilitation effect in a semantic relatedness task. These findings extend the growing literature that emphasises the high level of interaction in a bilingual’s mental lexicon, by demonstrating the influence of L2 experience on the processing of L1 words. Data, scripts, materials and pre-registration available via https://osf.io/2swyg/?view_only=b2ba2e627f6f4eaeac87edab2b59b236.
  • Poort, E. D., & Rodd, J. M. (2019). A database of Dutch–English cognates, interlingual homographs and translation equivalents. Journal of Cognition, 2(1): 15. doi:10.5334/joc.67.

    Abstract

    To investigate the structure of the bilingual mental lexicon, researchers in the field of bilingualism often use words that exist in multiple languages: cognates (which have the same meaning) and interlingual homographs (which have a different meaning). A high proportion of these studies have investigated language processing in Dutch–English bilinguals. Despite the abundance of research using such materials, few studies exist that have validated such materials. We conducted two rating experiments in which Dutch–English bilinguals rated the meaning, spelling and pronunciation similarity of pairs of Dutch and English words. On the basis of these results, we present a new database of Dutch–English identical cognates (e.g. “wolf”–“wolf”; n = 58), non-identical cognates (e.g. “kat”–“cat”; n = 74), interlingual homographs (e.g. “angel”–“angel”; n = 72) and translation equivalents (e.g. “wortel”–“carrot”; n = 78). The database can be accessed at http://osf.io/tcdxb/.

    Additional information

    database
  • Poort, E. D., & Rodd, J. M. (2019). Towards a distributed connectionist account of cognates and interlingual homographs: Evidence from semantic relatedness tasks. PeerJ, 7: e6725. doi:10.7717/peerj.6725.

    Abstract

    Background

    Current models of how bilinguals process cognates (e.g., “wolf”, which has the same meaning in Dutch and English) and interlingual homographs (e.g., “angel”, meaning “insect’s sting” in Dutch) are based primarily on data from lexical decision tasks. A major drawback of such tasks is that it is difficult—if not impossible—to separate processes that occur during decision making (e.g., response competition) from processes that take place in the lexicon (e.g., lateral inhibition). Instead, we conducted two English semantic relatedness judgement experiments.
    Methods

    In Experiment 1, highly proficient Dutch–English bilinguals (N = 29) and English monolinguals (N = 30) judged the semantic relatedness of word pairs that included a cognate (e.g., “wolf”–“howl”; n = 50), an interlingual homograph (e.g., “angel”–“heaven”; n = 50) or an English control word (e.g., “carrot”–“vegetable”; n = 50). In Experiment 2, another group of highly proficient Dutch–English bilinguals (N = 101) read sentences in Dutch that contained one of those cognates, interlingual homographs or the Dutch translation of one of the English control words (e.g., “wortel” for “carrot”) approximately 15 minutes prior to completing the English semantic relatedness task.
    Results

    In Experiment 1, there was an interlingual homograph inhibition effect of 39 ms only for the bilinguals, but no evidence for a cognate facilitation effect. Experiment 2 replicated these findings and also revealed that cross-lingual long-term priming had an opposite effect on the cognates and interlingual homographs: recent experience with a cognate in Dutch speeded processing of those items 15 minutes later in English but slowed processing of interlingual homographs. However, these priming effects were smaller than previously observed using a lexical decision task.
    Conclusion

    After comparing our results to studies in both the bilingual and monolingual domain, we argue that bilinguals appear to process cognates and interlingual homographs as monolinguals process polysemes and homonyms, respectively. In the monolingual domain, processing of such words is best modelled using distributed connectionist frameworks. We conclude that it is necessary to explore the viability of such a model for the bilingual case.
  • Postema, A., Van Mierlo, H., Bakker, A. B., & Barendse, M. T. (2022). Study-to-sports spillover among competitive athletes: A field study. International Journal of Sport and Exercise Psychology. Advance online publication. doi:10.1080/1612197X.2022.2058054.

    Abstract

    Combining academics and athletics is challenging but important for the psychological and psychosocial development of those involved. However, little is known about how experiences in academics spill over and relate to athletics. Drawing on the enrichment mechanisms proposed by the Work-Home Resources model, we posit that study crafting behaviours are positively related to volatile personal resources, which, in turn, are related to higher athletic achievement. Via structural equation modelling, we examine a path model among 243 student-athletes, incorporating study crafting behaviours and personal resources (i.e., positive affect and study engagement), and self- and coach-rated athletic achievement measured two weeks later. Results show that optimising the academic environment by crafting challenging study demands relates positively to positive affect and study engagement. In turn, positive affect related positively to self-rated athletic achievement, whereas – unexpectedly – study engagement related negatively to coach-rated athletic achievement. Optimising the academic environment through cognitive crafting and crafting social study resources did not relate to athletic outcomes. We discuss how these findings offer new insights into the interplay between academics and athletics.
  • Postema, M., De Marco, M., Colato, E., & Venneri, A. (2019). A study of within-subject reliability of the brain’s default-mode network. Magnetic Resonance Materials in Physics, Biology and Medicine, 32(3), 391-405. doi:10.1007/s10334-018-00732-0.

    Abstract

    Objective

    Resting-state functional magnetic resonance imaging (fMRI) is a promising technique for the study of Alzheimer’s disease (AD). This study aimed to examine the short-term reliability of the default-mode network (DMN), one of the main haemodynamic patterns of the brain.
    Materials and methods

    Using a 1.5 T Philips Achieva scanner, two consecutive resting-state fMRI runs were acquired on 69 healthy adults, 62 patients with mild cognitive impairment (MCI) due to AD, and 28 patients with AD dementia. The anterior and posterior DMN and, as control, the visual-processing network (VPN) were computed using two different methodologies: connectivity of predetermined seeds (theory-driven) and dual regression (data-driven). Divergence and convergence in network strength and topography were calculated with paired t tests, global correlation coefficients, voxel-based correlation maps, and indices of reliability.
    Results

    No topographical differences were found in any of the networks. High correlations and reliability were found in the posterior DMN of healthy adults and MCI patients. Lower reliability was found in the anterior DMN and in the VPN, and in the posterior DMN of dementia patients.
    Discussion

    Strength and topography of the posterior DMN appear relatively stable and reliable over a short-term period of acquisition but with some degree of variability across clinical samples.
  • Postema, M., Van Rooij, D., Anagnostou, E., Arango, C., Auzias, G., Behrmann, M., Busatto Filho, G., Calderoni, S., Calvo, R., Daly, E., Deruelle, C., Di Martino, A., Dinstein, I., Duran, F. L. S., Durston, S., Ecker, C., Ehrlich, S., Fair, D., Fedor, J., Feng, X., Fitzgerald, J., Floris, D. L., Freitag, C. M., Gallagher, L., Glahn, D. C., Gori, I., Haar, S., Hoekstra, L., Jahanshad, N., Jalbrzikowski, M., Janssen, J., King, J. A., Kong, X., Lazaro, L., Lerch, J. P., Luna, B., Martinho, M. M., McGrath, J., Medland, S. E., Muratori, F., Murphy, C. M., Murphy, D. G. M., O'Hearn, K., Oranje, B., Parellada, M., Puig, O., Retico, A., Rosa, P., Rubia, K., Shook, D., Taylor, M., Tosetti, M., Wallace, G. L., Zhou, F., Thompson, P., Fisher, S. E., Buitelaar, J. K., & Francks, C. (2019). Altered structural brain asymmetry in autism spectrum disorder in a study of 54 datasets. Nature Communications, 10: 4958. doi:10.1038/s41467-019-13005-8.
  • Poulton, V. R., & Nieuwland, M. S. (2022). Can you hear what’s coming? Failure to replicate ERP evidence for phonological prediction. Neurobiology of Language, 3(4), 556-574. doi:10.1162/nol_a_00078.

    Abstract

    Prediction-based theories of language comprehension assume that listeners predict both the meaning and phonological form of likely upcoming words. In alleged event-related potential (ERP) demonstrations of phonological prediction, prediction-mismatching words elicit a phonological mismatch negativity (PMN), a frontocentral negativity that precedes the centroparietal N400 component. However, the classification and replicability of the PMN have proven controversial, with ongoing debate on whether the PMN is a distinct component or merely an early part of the N400. In this electroencephalography (EEG) study, we therefore attempted to replicate the PMN effect and its separability from the N400, using a participant sample size (N = 48) that was more than double that of previous studies. Participants listened to sentences containing either a predictable word or an unpredictable word with/without phonological overlap with the predictable word. Preregistered analyses revealed a widely distributed negative-going ERP in response to unpredictable words in both the early (150–250 ms) and the N400 (300–500 ms) time windows. Bayes factor analysis yielded moderate evidence against a different scalp distribution of the effects in the two time windows. Although our findings do not speak against phonological prediction during sentence comprehension, they do speak against the PMN effect specifically as a marker of phonological prediction mismatch. Instead of a PMN effect, our results demonstrate the early onset of the auditory N400 effect associated with unpredictable words. Our failure to replicate further highlights the risk associated with commonly employed data-contingent analyses (e.g., analyses involving time windows or electrodes that were selected based on visual inspection) and small sample sizes in the cognitive neuroscience of language.
  • Pouw, W., & Holler, J. (2022). Timing in conversation is dynamically adjusted turn by turn in dyadic telephone conversations. Cognition, 222: 105015. doi:10.1016/j.cognition.2022.105015.

    Abstract

    Conversational turn taking in humans involves incredibly rapid responding. The timing mechanisms underpinning such responses have been heavily debated, including questions such as who is doing the timing. Similar to findings on rhythmic tapping to a metronome, we show that floor transfer offsets (FTOs) in telephone conversations are serially dependent, such that FTOs are lag-1 negatively autocorrelated. Finding this serial dependence on a turn-by-turn basis (lag-1), rather than on the basis of two or more turns, suggests a counter-adjustment mechanism operating at the level of the dyad in FTOs during telephone conversations, rather than a more individualistic self-adjustment within speakers. This finding, if replicated, has major implications for models describing turn taking, and confirms the joint, dyadic nature of human conversational dynamics. Future research is needed to see how pervasive serial dependencies in FTOs are, for example in richer communicative face-to-face contexts where visual signals affect conversational timing.
  • Pouw, W., & Dixon, J. A. (2022). What you hear and see specifies the perception of a limb-respiratory-vocal act. Proceedings of the Royal Society B: Biological Sciences, 289(1979): 20221026. doi:10.1098/rspb.2022.1026.
  • Pouw, W., Harrison, S. J., & Dixon, J. A. (2022). The importance of visual control and biomechanics in the regulation of gesture-speech synchrony for an individual deprived of proprioceptive feedback of body position. Scientific Reports, 12: 14775. doi:10.1038/s41598-022-18300-x.

    Abstract

    Do communicative actions such as gestures fundamentally differ in their control mechanisms from other actions? Evidence for such fundamental differences comes from a classic gesture-speech coordination experiment performed with a person (IW) with deafferentation (McNeill, 2005). Although IW has lost both his primary source of information about body position (i.e., proprioception) and discriminative touch from the neck down, his gesture-speech coordination has been reported to be largely unaffected, even if his vision is blocked. This is surprising because, without vision, his object-directed actions almost completely break down. We examine the hypothesis that IW’s gesture-speech coordination is supported by the biomechanical effects of gesturing on head posture and speech. We find that when vision is blocked, there are micro-scale increases in gesture-speech timing variability, consistent with IW’s reported experience that gesturing is difficult without vision. Supporting the hypothesis that IW exploits biomechanical consequences of the act of gesturing, we find that: (1) gestures with larger physical impulses co-occur with greater head movement, (2) gesture-speech synchrony relates to larger gesture-concurrent head movements (i.e. for bimanual gestures), (3) when vision is blocked, gestures generate more physical impulse, and (4) moments of acoustic prominence couple more with peaks of physical impulse when vision is blocked. It can be concluded that IW’s gesturing ability is not based on a specialized language-based feedforward control as originally concluded from previous research, but is still dependent on a varied means of recurrent feedback from the body.

    Additional information

    supplementary tables
  • Pouw, W., & Dixon, J. A. (2019). Entrainment and modulation of gesture-speech synchrony under delayed auditory feedback. Cognitive Science, 43(3): e12721. doi:10.1111/cogs.12721.

    Abstract

    Gesture–speech synchrony re-stabilizes when hand movement or speech is disrupted by a delayed feedback manipulation, suggesting strong bidirectional coupling between gesture and speech. Yet it has also been argued from case studies in perceptual–motor pathology that hand gestures are a special kind of action that does not require closed-loop re-afferent feedback to maintain synchrony with speech. In the current pre-registered within-subject study, we used motion tracking to conceptually replicate McNeill’s (1992) classic study on gesture–speech synchrony under normal and 150 ms delayed auditory feedback of speech conditions (NO DAF vs. DAF). Consistent with, and extending McNeill’s original results, we obtain evidence that (a) gesture-speech synchrony is more stable under DAF versus NO DAF (i.e., increased coupling effect), (b) that gesture and speech variably entrain to the external auditory delay as indicated by a consistent shift in gesture-speech synchrony offsets (i.e., entrainment effect), and (c) that the coupling effect and the entrainment effect are codependent. We suggest, therefore, that gesture–speech synchrony provides a way for the cognitive system to stabilize rhythmic activity under interfering conditions.

    Additional information

    https://osf.io/pcde3/
  • Pouw, W., Rop, G., De Koning, B., & Paas, F. (2019). The cognitive basis for the split-attention effect. Journal of Experimental Psychology: General, 148(11), 2058-2075. doi:10.1037/xge0000578.

    Abstract

    The split-attention effect entails that learning from spatially separated, but mutually referring information sources (e.g., text and picture), is less effective than learning from the equivalent spatially integrated sources. According to cognitive load theory, impaired learning is caused by the working memory load imposed by the need to distribute attention between the information sources and mentally integrate them. In this study, we directly tested whether the split-attention effect is caused by spatial separation per se. Spatial distance was varied in basic cognitive tasks involving pictures (Experiment 1) and text–picture combinations (Experiment 2; preregistered study), and in more ecologically valid learning materials (Experiment 3). Experiment 1 showed that having to integrate two pictorial stimuli at greater distances diminished performance on a secondary visual working memory task, but did not lead to slower integration. When participants had to integrate a picture and written text in Experiment 2, a greater distance led to slower integration of the stimuli, but not to diminished performance on the secondary task. Experiment 3 showed that presenting spatially separated (compared with integrated) textual and pictorial information yielded fewer integrative eye movements, but this was not further exacerbated when spatial distance was increased even further. This effect on learning processes did not lead to differences in learning outcomes between conditions. In conclusion, we provide evidence that larger distances between spatially separated information sources influence learning processes, but that spatial separation on its own is not likely to be the only, nor a sufficient, condition for impacting learning outcomes.

  • Pouw, W., & Fuchs, S. (2022). Origins of vocal-entangled gesture. Neuroscience and Biobehavioral Reviews, 141: 104836. doi:10.1016/j.neubiorev.2022.104836.

    Abstract

    Gestures during speaking are typically understood in a representational framework: they represent absent or distal states of affairs by means of pointing, resemblance, or symbolic replacement. However, humans also gesture along with the rhythm of speaking, which is amenable to a non-representational perspective. Such a perspective centers on the phenomenon of vocal-entangled gestures and builds on evidence showing that when an upper limb with a certain mass decelerates/accelerates sufficiently, it yields impulses on the body that cascade in various ways into the respiratory–vocal system. It entails a physical entanglement between body motions, respiration, and vocal activities. It is shown that vocal-entangled gestures are realized in infant vocal–motor babbling before any representational use of gesture develops. Similarly, an overview is given of vocal-entangled processes in non-human animals. They can frequently be found in rats, bats, birds, and a range of other species that developed even earlier in the phylogenetic tree. Thus, the origins of human gesture lie in biomechanics, emerging early in ontogeny and running deep in phylogeny.
  • Preisig, B., & Hervais-Adelman, A. (2022). The predictive value of individual electric field modeling for transcranial alternating current stimulation induced brain modulation. Frontiers in Cellular Neuroscience, 16: 818703. doi:10.3389/fncel.2022.818703.

    Abstract

    There is considerable individual variability in the reported effectiveness of non-invasive brain stimulation. This variability has often been ascribed to differences in neuroanatomy and the resulting differences in the induced electric field inside the brain. In this study, we addressed the question of whether individual differences in the induced electric field can predict the neurophysiological and behavioral consequences of gamma band tACS. In a within-subject experiment, bi-hemispheric gamma band tACS and sham stimulation were applied in alternating blocks to the participants’ superior temporal lobe, while task-evoked auditory brain activity was measured with concurrent functional magnetic resonance imaging (fMRI) and a dichotic listening task. Gamma tACS was applied with different interhemispheric phase lags. In a recent study, we showed that anti-phase tACS (180° interhemispheric phase lag), but not in-phase tACS (0° interhemispheric phase lag), selectively modulates interhemispheric brain connectivity. Using a T1 structural image of each participant’s brain, an individual simulation of the induced electric field was computed. From these simulations, we derived two predictor variables: maximal strength (average of the 10,000 voxels with largest electric field values) and precision of the electric field (spatial correlation between the electric field and the task-evoked brain activity during sham stimulation). We found considerable variability in the individual strength and precision of the electric fields. Importantly, the strength of the electric field over the right hemisphere predicted individual differences in tACS-induced brain connectivity changes. Moreover, we found in both hemispheres a statistical trend for the effect of electric field strength on tACS-induced BOLD signal changes. In contrast, the precision of the electric field did not predict any neurophysiological measure. Further, neither strength nor precision predicted interhemispheric integration. In conclusion, we found evidence for a dose-response relationship between individual differences in electric fields and tACS-induced activity and connectivity changes in concurrent fMRI. However, the fact that this relationship was stronger in the right hemisphere suggests that the relationship between electric field parameters, neurophysiology, and behavior may be more complex for bi-hemispheric tACS.
  • Preisig, B., Riecke, L., & Hervais-Adelman, A. (2022). Speech sound categorization: The contribution of non-auditory and auditory cortical regions. NeuroImage, 258: 119375. doi:10.1016/j.neuroimage.2022.119375.

    Abstract

    Which processes in the human brain lead to the categorical perception of speech sounds? Investigation of this question is hampered by the fact that categorical speech perception is normally confounded by acoustic differences in the stimulus. By using ambiguous sounds, however, it is possible to dissociate acoustic from perceptual stimulus representations. Twenty-seven normally hearing individuals took part in an fMRI study in which they were presented with an ambiguous syllable (intermediate between /da/ and /ga/) in one ear and with a disambiguating acoustic feature (third formant, F3) in the other ear. Multi-voxel pattern searchlight analysis was used to identify brain areas that consistently differentiated between response patterns associated with different syllable reports. By comparing responses to different stimuli with identical syllable reports and identical stimuli with different syllable reports, we disambiguated whether these regions primarily differentiated the acoustics of the stimuli or the syllable report. We found that BOLD activity patterns in left perisylvian regions (STG, SMG), left inferior frontal regions (vMC, IFG, AI), left supplementary motor cortex (SMA/pre-SMA), and right motor and somatosensory regions (M1/S1) represent listeners’ syllable report irrespective of stimulus acoustics. Most of these regions are outside of what is traditionally regarded as auditory or phonological processing areas. Our results indicate that the process of speech sound categorization implicates decision-making mechanisms and auditory-motor transformations.

    Additional information

    figures and table
  • Preisig, B., Sjerps, M. J., Kösem, A., & Riecke, L. (2019). Dual-site high-density 4Hz transcranial alternating current stimulation applied over auditory and motor cortical speech areas does not influence auditory-motor mapping. Brain Stimulation, 12(3), 775-777. doi:10.1016/j.brs.2019.01.007.
  • Preisig, B., & Sjerps, M. J. (2019). Hemispheric specializations affect interhemispheric speech sound integration during duplex perception. The Journal of the Acoustical Society of America, 145, EL190-EL196. doi:10.1121/1.5092829.

    Abstract

    The present study investigated whether speech-related spectral information benefits from initially predominant right or left hemisphere processing. Normal hearing individuals categorized speech sounds composed of an ambiguous base (perceptually intermediate between /ga/ and /da/), presented to one ear, and a disambiguating low or high F3 chirp presented to the other ear. Shorter response times were found when the chirp was presented to the left ear than to the right ear (inducing initially right-hemisphere chirp processing), but no between-ear differences in strength of overall integration. The results are in line with the assumptions of a right hemispheric dominance for spectral processing.

    Additional information

    Supplementary material
  • Price, K. M., Wigg, K. G., Eising, E., Feng, Y., Blokland, K., Wilkinson, M., Kerr, E. N., Guger, S. L., Quantitative Trait Working Group of the GenLang Consortium, Fisher, S. E., Lovett, M. W., Strug, L. J., & Barr, C. L. (2022). Hypothesis-driven genome-wide association studies provide novel insights into genetics of reading disabilities. Translational Psychiatry, 12: 495. doi:10.1038/s41398-022-02250-z.

    Abstract

    Reading Disability (RD) is often characterized by difficulties in the phonology of the language. While the molecular mechanisms underlying it are largely undetermined, loci are being revealed by genome-wide association studies (GWAS). In a previous GWAS for word reading (Price, 2020), we observed that top single-nucleotide polymorphisms (SNPs) were located near to or in genes involved in neuronal migration/axon guidance (NM/AG) or loci implicated in autism spectrum disorder (ASD). A prominent theory of RD etiology posits that it involves disturbed neuronal migration, while potential links between RD and ASD have not been extensively investigated. To improve power to identify associated loci, we up-weighted variants involved in NM/AG or ASD, separately, and performed a new Hypothesis-Driven (HD) GWAS. The approach was applied to a Toronto RD sample and a meta-analysis of the GenLang Consortium. For the Toronto sample (n = 624), no SNPs reached significance; however, by gene-set analysis, the joint contribution of ASD-related genes passed the threshold (p ~ 1.45 × 10^-2, threshold = 2.5 × 10^-2). For the GenLang Cohort (n = 26,558), SNPs in DOCK7 and CDH4 showed significant association for the NM/AG hypothesis (sFDR q = 1.02 × 10^-2). To make the GenLang dataset more similar to Toronto, we repeated the analysis restricting to samples selected for reading/language deficits (n = 4152). In this GenLang selected subset, we found significant association for a locus intergenic between BTG3 and C21orf91 for both hypotheses (sFDR q < 9.00 × 10^-4). This study contributes candidate loci to the genetics of word reading. Data also suggest that, although different variants may be involved, alleles implicated in ASD risk may be found in the same genes as those implicated in word reading. This finding is limited to the Toronto sample, suggesting that ascertainment influences genetic associations.
  • Prystauka, Y., & Lewis, A. G. (2019). The power of neural oscillations to inform sentence comprehension: A linguistic perspective. Language and Linguistics Compass, 13 (9): e12347. doi:10.1111/lnc3.12347.

    Abstract

    The field of psycholinguistics is currently experiencing an explosion of interest in the analysis of neural oscillations—rhythmic brain activity synchronized at different temporal and spatial levels. Given that language comprehension relies on a myriad of processes, which are carried out in parallel in distributed brain networks, there is hope that this methodology might bring the field closer to understanding some of the more basic (spatially and temporally distributed, yet at the same time often overlapping) neural computations that support language function. In this review, we discuss existing proposals linking oscillatory dynamics in different frequency bands to basic neural computations and review relevant theories suggesting associations between band-specific oscillations and higher-level cognitive processes. More or less consistent patterns of oscillatory activity related to certain types of linguistic processing can already be derived from the evidence that has accumulated over the past few decades. The centerpiece of the current review is a synthesis of such patterns grouped by linguistic phenomenon. We restrict our review to evidence linking measures of oscillatory power to the comprehension of sentences, as well as linguistically (and/or pragmatically) more complex structures. For each grouping, we provide a brief summary and a table of associated oscillatory signatures that a psycholinguist might expect to find when employing a particular linguistic task. Summarizing across different paradigms, we conclude that a handful of basic neural oscillatory mechanisms are likely recruited in different ways and at different times for carrying out a variety of linguistic computations.
  • Quinn, S., & Kidd, E. (2019). Symbolic play promotes non‐verbal communicative exchange in infant–caregiver dyads. British Journal of Developmental Psychology, 37(1), 33-50. doi:10.1111/bjdp.12251.

    Abstract

    Symbolic play has long been considered a fertile context for communicative development (Bruner, 1983, Child's talk: Learning to use language, Oxford University Press, Oxford; Vygotsky, 1962, Thought and language, MIT Press, Cambridge, MA; Vygotsky, 1978, Mind in society: The development of higher psychological processes. Harvard University Press, Cambridge, MA). In the current study, we examined caregiver–infant interaction during symbolic play and compared it to interaction in a comparable but non‐symbolic context (i.e., ‘functional’ play). Fifty‐four (N = 54) caregivers and their 18‐month‐old infants were observed engaging in 20 min of play (symbolic, functional). Play interactions were coded and compared across play conditions for joint attention (JA) and gesture use. Compared with functional play, symbolic play was characterized by greater frequency and duration of JA and greater gesture use, particularly the use of iconic gestures with an object in hand. The results suggest that symbolic play provides a rich context for the exchange and negotiation of meaning, and thus may contribute to the development of important skills underlying communicative development.
  • Radenkovic, S., Bird, M. J., Emmerzaal, T. L., Wong, S. Y., Felgueira, C., Stiers, K. M., Sabbagh, L., Himmelreich, N., Poschet, G., Windmolders, P., Verheijen, J., Witters, P., Altassan, R., Honzik, T., Eminoglu, T. F., James, P. M., Edmondson, A. C., Hertecant, J., Kozicz, T., Thiel, C., Vermeersch, P., Cassiman, D., Beamer, L., Morava, E., & Ghesquiere, B. (2019). The metabolic map into the pathomechanism and treatment of PGM1-CDG. American Journal of Human Genetics, 104(5), 835-846. doi:10.1016/j.ajhg.2019.03.003.

    Abstract

    Phosphoglucomutase 1 (PGM1) encodes the metabolic enzyme that interconverts glucose-6-P and glucose-1-P. Mutations in PGM1 cause impairment in glycogen metabolism and glycosylation, the latter manifesting as a congenital disorder of glycosylation (CDG). This unique metabolic defect leads to abnormal N-glycan synthesis in the endoplasmic reticulum (ER) and the Golgi apparatus (GA). On the basis of the decreased galactosylation in glycan chains, galactose was administered to individuals with PGM1-CDG and was shown to markedly reverse most disease-related laboratory abnormalities. The disease and treatment mechanisms, however, have remained largely elusive. Here, we confirm the clinical benefit of galactose supplementation in PGM1-CDG-affected individuals and obtain significant insights into the functional and biochemical regulation of glycosylation. Using tracer-based metabolomics, we found that galactose treatment of PGM1-CDG fibroblasts metabolically rewires their sugar metabolism, replenishing the depleted levels of galactose-1-P as well as the levels of UDP-glucose and UDP-galactose, the nucleotide sugars that are required for ER- and GA-linked glycosylation, respectively. We further show that the galactose in UDP-galactose is incorporated into mature, de novo glycans. Our results also point to the potential of monosaccharide therapy for several other CDGs.
  • Räsänen, O., Seshadri, S., Karadayi, J., Riebling, E., Bunce, J., Cristia, A., Metze, F., Casillas, M., Rosemberg, C., Bergelson, E., & Soderstrom, M. (2019). Automatic word count estimation from daylong child-centered recordings in various language environments using language-independent syllabification of speech. Speech Communication, 113, 63-80. doi:10.1016/j.specom.2019.08.005.

    Abstract

    Automatic word count estimation (WCE) from audio recordings can be used to quantify the amount of verbal communication in a recording environment. One key application of WCE is to measure language input heard by infants and toddlers in their natural environments, as captured by daylong recordings from microphones worn by the infants. Although WCE is nearly trivial for high-quality signals in high-resource languages, daylong recordings are substantially more challenging due to the unconstrained acoustic environments and the presence of near- and far-field speech. Moreover, many use cases of interest involve languages for which reliable ASR systems or even well-defined lexicons are not available. A good WCE system should also perform similarly for low- and high-resource languages in order to enable unbiased comparisons across different cultures and environments. Unfortunately, the current state-of-the-art solution, the LENA system, is based on proprietary software and has only been optimized for American English, limiting its applicability. In this paper, we build on existing work on WCE and present the steps we have taken towards a freely available system for WCE that can be adapted to different languages or dialects with a limited amount of orthographically transcribed speech data. Our system is based on language-independent syllabification of speech, followed by a language-dependent mapping from syllable counts (and a number of other acoustic features) to the corresponding word count estimates. We evaluate our system on samples from daylong infant recordings from six different corpora consisting of several languages and socioeconomic environments, all manually annotated with the same protocol to allow direct comparison. We compare a number of alternative techniques for the two key components in our system: speech activity detection and automatic syllabification of speech. As a result, we show that our system can reach relatively consistent WCE accuracy across multiple corpora and languages (with some limitations). In addition, the system outperforms LENA on three of the four corpora consisting of different varieties of English. We also demonstrate how an automatic neural network-based syllabifier, when trained on multiple languages, generalizes well to novel languages beyond the training data, outperforming two previously proposed unsupervised syllabifiers as a feature extractor for WCE.
  • Rasenberg, M., Pouw, W., Özyürek, A., & Dingemanse, M. (2022). The multimodal nature of communicative efficiency in social interaction. Scientific Reports, 12: 19111. doi:10.1038/s41598-022-22883-w.

    Abstract

    How does communicative efficiency shape language use? We approach this question by studying it at the level of the dyad, and in terms of multimodal utterances. We investigate whether and how people minimize their joint speech and gesture efforts in face-to-face interactions, using linguistic and kinematic analyses. We zoom in on other-initiated repair—a conversational microcosm where people coordinate their utterances to solve problems with perceiving or understanding. We find that efforts in the spoken and gestural modalities are wielded in parallel across repair turns of different types, and that people repair conversational problems in the most cost-efficient way possible, minimizing the joint multimodal effort for the dyad as a whole. These results are in line with the principle of least collaborative effort in speech and with the reduction of joint costs in non-linguistic joint actions. The results extend our understanding of those coefficiency principles by revealing that they pertain to multimodal utterance design.

    Additional information

    Data and analysis scripts
  • Rasenberg, M., Özyürek, A., Bögels, S., & Dingemanse, M. (2022). The primacy of multimodal alignment in converging on shared symbols for novel referents. Discourse Processes, 59(3), 209-236. doi:10.1080/0163853X.2021.1992235.

    Abstract

    When people establish shared symbols for novel objects or concepts, they have been shown to rely on the use of multiple communicative modalities as well as on alignment (i.e., cross-participant repetition of communicative behavior). Yet these interactional resources have rarely been studied together, so little is known about whether and how people combine multiple modalities in alignment to achieve joint reference. To investigate this, we systematically track the emergence of lexical and gestural alignment in a referential communication task with novel objects. Quantitative analyses reveal that people frequently use a combination of lexical and gestural alignment, and that such multimodal alignment tends to emerge earlier compared to unimodal alignment. Qualitative analyses of the interactional contexts in which alignment emerges reveal how people flexibly deploy lexical and gestural alignment (independently, simultaneously, or successively) to adjust to communicative pressures.
  • Ravignani, A., & Garcia, M. (2022). A cross-species framework to identify vocal learning abilities in mammals. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 377: 20200394. doi:10.1098/rstb.2020.0394.

    Abstract

    Vocal production learning (VPL) is the experience-driven ability to produce novel vocal signals through imitation or modification of existing vocalizations. A parallel strand of research investigates acoustic allometry, namely how information about body size is conveyed by acoustic signals. Recently, we proposed that deviation from acoustic allometry principles as a result of sexual selection may have been an intermediate step towards the evolution of vocal learning abilities in mammals. Adopting a more hypothesis-neutral stance, here we perform phylogenetic regressions and other analyses further testing a potential link between VPL and being an allometric outlier. We find that multiple species belonging to VPL clades deviate from allometric scaling but in the opposite direction to that expected from size exaggeration mechanisms. In other words, our correlational approach finds an association between VPL and being an allometric outlier. However, the direction of this association, contra our original hypothesis, may indicate that VPL did not necessarily emerge via sexual selection for size exaggeration: VPL clades show higher vocalization frequencies than expected. In addition, our approach allows us to identify species with potential for VPL abilities: we hypothesize that those outliers from acoustic allometry lying above the regression line may be VPL species. Our results may help better understand the cross-species diversity, variability and aetiology of VPL, which among other things is a key underpinning of speech in our species.

    This article is part of the theme issue ‘Voice modulation: from origin and mechanism to social impact (Part II)’.

    Additional information

    Raw data
    Supplementary material
