Publications

  • Verhoef, E., Demontis, D., Burgess, S., Shapland, C. Y., Dale, P. S., Okbay, A., Neale, B. M., Faraone, S. V., iPSYCH-Broad-PGC ADHD Consortium, Stergiakouli, E., Davey Smith, G., Fisher, S. E., Børglum, A., & St Pourcain, B. (2019). Disentangling polygenic associations between Attention-Deficit/Hyperactivity Disorder, educational attainment, literacy and language. Translational Psychiatry, 9: 35. doi:10.1038/s41398-018-0324-2.

    Abstract

    Interpreting polygenic overlap between ADHD and both literacy-related and language-related impairments is challenging as genetic associations might be influenced by indirectly shared genetic factors. Here, we investigate genetic overlap between polygenic ADHD risk and multiple literacy-related and/or language-related abilities (LRAs), as assessed in UK children (N ≤ 5919), accounting for genetically predictable educational attainment (EA). Genome-wide summary statistics on clinical ADHD and years of schooling were obtained from large consortia (N ≤ 326,041). Our findings show that ADHD-polygenic scores (ADHD-PGS) were inversely associated with LRAs in ALSPAC, most consistently with reading-related abilities, and explained ≤1.6% phenotypic variation. These polygenic links were then dissected into both ADHD effects shared with and independent of EA, using multivariable regressions (MVR). Conditional on EA, polygenic ADHD risk remained associated with multiple reading and/or spelling abilities, phonemic awareness and verbal intelligence, but not listening comprehension and non-word repetition. Using conservative ADHD-instruments (P-threshold < 5 × 10^−8), this corresponded, for example, to a 0.35 SD decrease in pooled reading performance per log-odds in ADHD-liability (P = 9.2 × 10^−5). Using subthreshold ADHD-instruments (P-threshold < 0.0015), these effects became smaller, with a 0.03 SD decrease per log-odds in ADHD risk (P = 1.4 × 10^−6), although the predictive accuracy increased. However, polygenic ADHD-effects shared with EA were of equal strength and at least equal magnitude compared to those independent of EA, for all LRAs studied, and detectable using subthreshold instruments. Thus, ADHD-related polygenic links with LRAs are to a large extent due to shared genetic effects with EA, although there is evidence for an ADHD-specific association profile, independent of EA, that primarily involves literacy-related impairments.

    Additional information

    41398_2018_324_MOESM1_ESM.docx
  • Verhoef, E., Shapland, C. Y., Fisher, S. E., Dale, P. S., & St Pourcain, B. (2021). The developmental genetic architecture of vocabulary skills during the first three years of life: Capturing emerging associations with later-life reading and cognition. PLoS Genetics, 17(2): e1009144. doi:10.1371/journal.pgen.1009144.

    Abstract

    Individual differences in early-life vocabulary measures are heritable and associated with subsequent reading and cognitive abilities, although the underlying mechanisms are little understood. Here, we (i) investigate the developmental genetic architecture of expressive and receptive vocabulary in early-life and (ii) assess timing of emerging genetic associations with mid-childhood verbal and non-verbal skills. We studied longitudinally assessed early-life vocabulary measures (15–38 months) and later-life verbal and non-verbal skills (7–8 years) in up to 6,524 unrelated children from the population-based Avon Longitudinal Study of Parents and Children (ALSPAC) cohort. We dissected the phenotypic variance of rank-transformed scores into genetic and residual components by fitting multivariate structural equation models to genome-wide genetic-relationship matrices. Our findings show that the genetic architecture of early-life vocabulary involves multiple distinct genetic factors. Two of these genetic factors are developmentally stable and also contribute to genetic variation in mid-childhood skills: One genetic factor emerging with expressive vocabulary at 24 months (path coefficient: 0.32(SE = 0.06)) was also related to later-life reading (path coefficient: 0.25(SE = 0.12)) and verbal intelligence (path coefficient: 0.42(SE = 0.13)), explaining up to 17.9% of the phenotypic variation. A second, independent genetic factor emerging with receptive vocabulary at 38 months (path coefficient: 0.15(SE = 0.07)), was more generally linked to verbal and non-verbal cognitive abilities in mid-childhood (reading path coefficient: 0.57(SE = 0.07); verbal intelligence path coefficient: 0.60(SE = 0.10); performance intelligence path coefficient: 0.50(SE = 0.08)), accounting for up to 36.1% of the phenotypic variation and the majority of genetic variance in these later-life traits (≥66.4%). Thus, the genetic foundations of mid-childhood reading and cognitive abilities are diverse. They involve at least two independent genetic factors that emerge at different developmental stages during early language development and may implicate differences in cognitive processes that are already detectable during toddlerhood.

    Additional information

    supporting information
  • Verkerk, A. (2009). A semantic map of secondary predication. In B. Botma, & J. Van Kampen (Eds.), Linguistics in the Netherlands 2009 (pp. 115-126).
  • Vernes, S. C., Kriengwatana, B. P., Beeck, V. C., Fischer, J., Tyack, P. L., Ten Cate, C., & Janik, V. M. (2021). The multi-dimensional nature of vocal learning. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 376: 20200236. doi:10.1098/rstb.2020.0236.

    Abstract

    How learning affects vocalizations is a key question in the study of animal communication and human language. Parallel efforts in birds and humans have taught us much about how vocal learning works on a behavioural and neurobiological level. Subsequent efforts have revealed a variety of cases among mammals in which experience also has a major influence on vocal repertoires. Janik and Slater (Anim. Behav. 60, 1–11. (doi:10.1006/anbe.2000.1410)) introduced the distinction between vocal usage and production learning, providing a general framework to categorize how different types of learning influence vocalizations. This idea was built on by Petkov and Jarvis (Front. Evol. Neurosci. 4, 12. (doi:10.3389/fnevo.2012.00012)) to emphasize a more continuous distribution between limited and more complex vocal production learners. Yet, with more studies providing empirical data, the limits of the initial frameworks become apparent. We build on these frameworks to refine the categorization of vocal learning in light of advances made since their publication and widespread agreement that vocal learning is not a binary trait. We propose a novel classification system, based on the definitions by Janik and Slater, that deconstructs vocal learning into key dimensions to aid in understanding the mechanisms involved in this complex behaviour. We consider how vocalizations can change without learning, and a usage learning framework that considers context specificity and timing. We identify dimensions of vocal production learning, including the copying of auditory models (convergence/divergence on model sounds, accuracy of copying), the degree of change (type and breadth of learning) and timing (when learning takes place, the length of time it takes and how long it is retained). We consider grey areas of classification and current mechanistic understanding of these behaviours. Our framework identifies research needs and will help to inform neurobiological and evolutionary studies endeavouring to uncover the multi-dimensional nature of vocal learning. This article is part of the theme issue ‘Vocal learning in animals and humans’.
  • Vernes, S. C., Janik, V. M., Fitch, W. T., & Slater, P. J. B. (2021). Vocal learning in animals and humans. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 376: 20200234. doi:10.1098/rstb.2020.0234.
  • Vernes, S. C., MacDermot, K. D., Monaco, A. P., & Fisher, S. E. (2009). Assessing the impact of FOXP1 mutations on developmental verbal dyspraxia. European Journal of Human Genetics, 17(10), 1354-1358. doi:10.1038/ejhg.2009.43.

    Abstract

    Neurodevelopmental disorders that disturb speech and language are highly heritable. Isolation of the underlying genetic risk factors has been hampered by complexity of the phenotype and potentially large number of contributing genes. One exception is the identification of rare heterozygous mutations of the FOXP2 gene in a monogenic syndrome characterised by impaired sequencing of articulatory gestures, disrupting speech (developmental verbal dyspraxia, DVD), as well as multiple deficits in expressive and receptive language. The protein encoded by FOXP2 belongs to a divergent subgroup of forkhead-box transcription factors, with a distinctive DNA-binding domain and motifs that mediate hetero- and homodimerisation. FOXP1, the most closely related member of this subgroup, can directly interact with FOXP2 and is co-expressed in neural structures relevant to speech and language disorders. Moreover, investigations of songbird orthologues indicate that combinatorial actions of the two proteins may play important roles in vocal learning, leading to the suggestion that human FOXP1 should be considered a strong candidate for involvement in DVD. Thus, in this study, we screened the entire coding region of FOXP1 (exons and flanking intronic sequence) for nucleotide changes in a panel of probands used earlier to detect novel mutations in FOXP2. A non-synonymous coding change was identified in a single proband, yielding a proline-to-alanine change (P215A). However, this was also found in a random control sample. Analyses of non-coding SNP changes did not find any correlation with affection status. We conclude that FOXP1 mutations are unlikely to represent a major cause of DVD.

    Additional information

    ejhg200943x1.pdf
  • Vernes, S. C., & Wilkinson, G. S. (2020). Behaviour, biology, and evolution of vocal learning in bats. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 375(1789): 20190061. doi:10.1098/rstb.2019.0061.

    Abstract

    The comparative approach can provide insight into the evolution of human speech, language and social communication by studying relevant traits in animal systems. Bats are emerging as a model system with great potential to shed light on these processes given their learned vocalizations, close social interactions, and mammalian brains and physiology. A recent framework outlined the multiple levels of investigation needed to understand vocal learning across a broad range of non-human species, including cetaceans, pinnipeds, elephants, birds and bats. Here, we apply this framework to the current state-of-the-art in bat research. This encompasses our understanding of the abilities bats have displayed for vocal learning, what is known about the timing and social structure needed for such learning, and current knowledge about the prevalence of the trait across the order. It also addresses the biology (vocal tract morphology, neurobiology and genetics) and evolution of this trait. We conclude by highlighting some key questions that should be answered to advance our understanding of the biological encoding and evolution of speech and spoken communication. This article is part of the theme issue 'What can animal communication teach us about human language?'

    Additional information

    earlier version of article on BioRxiv
  • Vernes, S. C. (2019). Neuromolecular approaches to the study of language. In P. Hagoort (Ed.), Human language: From genes and brain to behavior (pp. 577-593). Cambridge, MA: MIT Press.
  • Vernes, S. C. (2017). What bats have to say about speech and language. Psychonomic Bulletin & Review, 24(1), 111-117. doi:10.3758/s13423-016-1060-3.

    Abstract

    Understanding the biological foundations of language is vital to gaining insight into how the capacity for language may have evolved in humans. Animal models can be exploited to learn about the biological underpinnings of shared human traits, and although no other animals display speech or language, a range of behaviors found throughout the animal kingdom are relevant to speech and spoken language. To date, such investigations have been dominated by studies of our closest primate relatives searching for shared traits, or more distantly related species that are sophisticated vocal communicators, like songbirds. Herein I make the case for turning our attention to the Chiropterans, to shed new light on the biological encoding and evolution of human language-relevant traits. Bats employ complex vocalizations to facilitate navigation as well as social interactions, and are exquisitely tuned to acoustic information. Furthermore, bats display behaviors such as vocal learning and vocal turn-taking that are directly pertinent for human spoken language. Emerging technologies are now allowing the study of bat vocal communication, from the behavioral to the neurobiological and molecular level. Although it is clear that no single animal model can reflect the complexity of human language, by comparing such findings across diverse species we can identify the shared biological mechanisms likely to have influenced the evolution of human language.
  • Vernes, S. C., & Fisher, S. E. (2009). Unravelling neurogenetic networks implicated in developmental language disorders. Biochemical Society Transactions (London), 37, 1263-1269. doi:10.1042/BST0371263.

    Abstract

    Childhood syndromes disturbing language development are common and display high degrees of heritability. In most cases, the underlying genetic architecture is likely to be complex, involving multiple chromosomal loci and substantial heterogeneity, which makes it difficult to track down the crucial genomic risk factors. Investigation of rare Mendelian phenotypes offers a complementary route for unravelling key neurogenetic pathways. The value of this approach is illustrated by the discovery that heterozygous FOXP2 (where FOX is forkhead box) mutations cause an unusual monogenic disorder, characterized by problems with articulating speech along with deficits in expressive and receptive language. FOXP2 encodes a regulatory protein, belonging to the forkhead box family of transcription factors, known to play important roles in modulating gene expression in development and disease. Functional genetics using human neuronal models suggest that the different FOXP2 isoforms generated by alternative splicing have distinct properties and may act to regulate each other's activity. Such investigations have also analysed the missense and nonsense mutations found in cases of speech and language disorder, showing that they alter intracellular localization, DNA binding and transactivation capacity of the mutated proteins. Moreover, in the brains of mutant mice, aetiological mutations have been found to disrupt the synaptic plasticity of Foxp2-expressing circuitry. Finally, although mutations of FOXP2 itself are rare, the downstream networks which it regulates in the brain appear to be broadly implicated in typical forms of language impairment. Thus, through ongoing identification of regulated targets and interacting co-factors, this gene is providing the first molecular entry points into neural mechanisms that go awry in language-related disorders.
  • Versace, E., Rogge, J. R., Shelton-May, N., & Ravignani, A. (2019). Positional encoding in cotton-top tamarins (Saguinus oedipus). Animal Cognition, 22, 825-838. doi:10.1007/s10071-019-01277-y.

    Abstract

    Strategies used in artificial grammar learning can shed light into the abilities of different species to extract regularities from the environment. In the A(X)nB rule, A and B items are linked, but assigned to different positional categories and separated by distractor items. Open questions are how widespread is the ability to extract positional regularities from A(X)nB patterns, which strategies are used to encode positional regularities and whether individuals exhibit preferences for absolute or relative position encoding. We used visual arrays to investigate whether cotton-top tamarins (Saguinus oedipus) can learn this rule and which strategies they use. After training on a subset of exemplars, two of the tested monkeys successfully generalized to novel combinations. These tamarins discriminated between categories of tokens with different properties (A, B, X) and detected a positional relationship between non-adjacent items even in the presence of novel distractors. The pattern of errors revealed that successful subjects used visual similarity with training stimuli to solve the task and that successful tamarins extracted the relative position of As and Bs rather than their absolute position, similarly to what has been observed in other species. Relative position encoding appears to be favoured in different tasks and taxa. Generalization, though, was incomplete, since we observed a failure with items that during training had always been presented in reinforced arrays, showing the limitations in grasping the underlying positional rule. These results suggest the use of local strategies in the extraction of positional rules in cotton-top tamarins.

    Additional information

    Supplementary file
  • Verspeek, J., Staes, N., Van Leeuwen, E. J. C., Eens, M., & Stevens, J. M. G. (2019). Bonobo personality predicts friendship. Scientific Reports, 9: 19245. doi:10.1038/s41598-019-55884-3.

    Abstract

    In bonobos, strong bonds have been documented between unrelated females and between mothers and their adult sons, which can have important fitness benefits. Often age, sex or kinship similarity have been used to explain social bond strength variation. Recent studies in other species also stress the importance of personality, but this relationship remains to be investigated in bonobos. We used behavioral observations on 39 adult and adolescent bonobos housed in 5 European zoos to study the role of personality similarity in dyadic relationship quality. Dimension reduction analyses on individual and dyadic behavioral scores revealed multidimensional personality (Sociability, Openness, Boldness, Activity) and relationship quality components (value, compatibility). We show that, aside from relatedness and sex combination of the dyad, relationship quality is also associated with personality similarity of both partners. While similarity in Sociability resulted in higher relationship values, lower relationship compatibility was found between bonobos with similar Activity scores. The results of this study expand our understanding of the mechanisms underlying social bond formation in anthropoid apes. In addition, we suggest that future studies in closely related species like chimpanzees should implement identical methods for assessing bond strength to shed further light on the evolution of this phenomenon.

    Additional information

    Supplementary material
  • Viebahn, M., Ernestus, M., & McQueen, J. M. (2017). Speaking style influences the brain’s electrophysiological response to grammatical errors in speech comprehension. Journal of Cognitive Neuroscience, 29(7), 1132-1146. doi:10.1162/jocn_a_01095.

    Abstract

    This electrophysiological study asked whether the brain processes grammatical gender violations in casual speech differently than in careful speech. Native speakers of Dutch were presented with utterances that contained adjective-noun pairs in which the adjective was either correctly inflected with a word-final schwa (e.g. een spannende roman “a suspenseful novel”) or incorrectly uninflected without that schwa (een spannend roman). Consistent with previous findings, the uninflected adjectives elicited an electrical brain response sensitive to syntactic violations when the talker was speaking in a careful manner. When the talker was speaking in a casual manner, this response was absent. A control condition showed electrophysiological responses for carefully as well as casually produced utterances with semantic anomalies, showing that listeners were able to understand the content of both types of utterance. The results suggest that listeners take information about the speaking style of a talker into account when processing the acoustic-phonetic information provided by the speech signal. Absent schwas in casual speech are effectively not grammatical gender violations. These changes in syntactic processing are evidence of contextually-driven neural flexibility.

  • De Vignemont, F., Majid, A., Jola, C., & Haggard, P. (2009). Segmenting the body into parts: Evidence from biases in tactile perception. Quarterly Journal of Experimental Psychology, 62, 500-512. doi:10.1080/17470210802000802.

    Abstract

    How do we individuate body parts? Here, we investigated the effect of body segmentation between hand and arm in tactile and visual perception. In a first experiment, we showed that two tactile stimuli felt farther away when they were applied across the wrist than when they were applied within a single body part (palm or forearm), indicating a “category boundary effect”. In the following experiments, we excluded two hypotheses, which attributed tactile segmentation to other, nontactile factors. In Experiment 2, we showed that the boundary effect does not arise from motor cues. The effect was reduced during a motor task involving flexion and extension movements of the wrist joint. Action brings body parts together into functional units, instead of pulling them apart. In Experiments 3 and 4, we showed that the effect does not arise from perceptual cues of visual discontinuities. We did not find any segmentation effect for the visual percept of the body in Experiment 3, nor for a neutral shape in Experiment 4. We suggest that the mental representation of the body is structured in categorical body parts delineated by joints, and that this categorical representation modulates tactile spatial perception.
  • Vogels, J., Howcroft, D. M., Tourtouri, E. N., & Demberg, V. (2020). How speakers adapt object descriptions to listeners under load. Language, Cognition and Neuroscience, 35(1), 78-92. doi:10.1080/23273798.2019.1648839.

    Abstract

    A controversial issue in psycholinguistics is the degree to which speakers employ audience design during language production. Hypothesising that a consideration of the listener’s needs is particularly relevant when the listener is under cognitive load, we had speakers describe objects for a listener performing an easy or a difficult simulated driving task. We predicted that speakers would introduce more redundancy in their descriptions in the difficult driving task, thereby accommodating the listener’s reduced cognitive capacity. The results showed that speakers did not adapt their descriptions to a change in the listener’s cognitive load. However, speakers who had experienced the driving task themselves before and who were presented with the difficult driving task first were more redundant than other speakers. These findings may suggest that speakers only consider the listener’s needs in the presence of strong enough cues, and do not update their beliefs about these needs during the task.
  • Vogels, J., & Van Bergen, G. (2017). Where to place inaccessible subjects in Dutch: The role of definiteness and animacy. Corpus linguistics and linguistic theory, 13(2), 369-398. doi:10.1515/cllt-2013-0021.

    Abstract

    Cross-linguistically, both subjects and topical information tend to be placed at the beginning of a sentence. Subjects are generally highly topical, causing both tendencies to converge on the same word order. However, subjects that lack prototypical topic properties may give rise to an incongruence between the preference to start a sentence with the subject and the preference to start a sentence with the most accessible information. We present a corpus study in which we investigate in what syntactic position (preverbal or postverbal) such low-accessible subjects are typically found in Dutch natural language. We examine the effects of both discourse accessibility (definiteness) and inherent accessibility (animacy). Our results show that definiteness and animacy interact in determining subject position in Dutch. Non-referential (bare) subjects are less likely to occur in preverbal position than definite subjects, and this tendency is reinforced when the subject is inanimate. This suggests that these two properties that make the subject less accessible together can ‘gang up’ against the subject first preference. The results support a probabilistic multifactorial account of syntactic variation.
  • Volker-Touw, C. M., de Koning, H. D., Giltay, J., De Kovel, C. G. F., van Kempen, T. S., Oberndorff, K., Boes, M., van Steensel, M. A., van Well, G. T., Blokx, W. A., Schalkwijk, J., Simon, A., Frenkel, J., & van Gijn, M. E. (2017). Erythematous nodes, urticarial rash and arthralgias in a large pedigree with NLRC4-related autoinflammatory disease, expansion of the phenotype. British Journal of Dermatology, 176(1), 244-248. doi:10.1111/bjd.14757.

    Abstract

    Autoinflammatory disorders (AID) are a heterogeneous group of diseases, characterized by an unprovoked innate immune response, resulting in recurrent or ongoing systemic inflammation and fever [1-3]. Inflammasomes are protein complexes with an essential role in pyroptosis and the caspase-1-mediated activation of the proinflammatory cytokines IL-1β, IL-17 and IL-18.
  • Von Stutterheim, C., Carroll, M., & Klein, W. (2009). New perspectives in analyzing aspectual distinctions across languages. In W. Klein, & P. Li (Eds.), The expression of time (pp. 195-216). Berlin: Mouton de Gruyter.
  • Von Holzen, K., & Bergmann, C. (2021). The development of infants’ responses to mispronunciations: A meta-analysis. Developmental Psychology, 57(1), 1-18. doi:10.1037/dev0001141.

    Abstract

    As they develop into mature speakers of their native language, infants must not only learn words but also the sounds that make up those words. To do so, they must strike a balance between accepting speaker dependent variation (e.g. mood, voice, accent), but appropriately rejecting variation when it (potentially) changes a word's meaning (e.g. cat vs. hat). This meta-analysis focuses on studies investigating infants' ability to detect mispronunciations in familiar words, or mispronunciation sensitivity. Our goal was to evaluate the development of infants' phonological representations for familiar words as well as explore the role of experimental manipulations related to theoretical questions and analysis choices. The results show that although infants are sensitive to mispronunciations, they still accept these altered forms as labels for target objects. Interestingly, this ability is not modulated by age or vocabulary size, suggesting that a mature understanding of native language phonology may be present in infants from an early age, possibly before the vocabulary explosion. These results also support several theoretical assumptions made in the literature, such as sensitivity to mispronunciation size and position of the mispronunciation. We also shed light on the impact of data analysis choices that may lead to different conclusions regarding the development of infants' mispronunciation sensitivity. Our paper concludes with recommendations for improved practice in testing infants' word and sentence processing on-line.
  • De Vos, J., Schriefers, H., & Lemhöfer, K. (2020). Does study language (Dutch versus English) influence study success of Dutch and German students in the Netherlands? Dutch Journal of Applied Linguistics, 9, 60-78. doi:10.1075/dujal.19008.dev.

    Abstract

    We investigated whether the language of instruction (Dutch or English) influenced the study success of 614 Dutch and German first-year psychology students in the Netherlands. The Dutch students who were instructed in Dutch studied in their native language (L1), the other students in a second language (L2). In addition, only the Dutch students studied in their home country. Both these variables could potentially influence study success, operationalised as the number of European Credits (ECs) the students obtained, their grades, and drop-out rates. The L1 group outperformed the three L2 groups with respect to grades, but there were no significant differences in ECs and drop-out rates (although descriptively, the L1 group still performed best). In conclusion, this study shows an advantage of studying in the L1 when it comes to grades, and thereby contributes to the current debate in the Dutch media regarding the desirability of offering degrees taught in English.
  • De Vos, C. (2009). [Review of the book Language complexity as an evolving variable ed. by Geoffrey Sampson, David Gil and Peter Trudgill]. LINGUIST List, 20.4275. Retrieved from http://linguistlist.org/issues/20/20-4275.html.
  • De Vos, C., Van der Kooij, E., & Crasborn, O. (2009). Mixed signals: Combining linguistic and affective functions of eyebrows in questions in Sign Language of the Netherlands. Language and Speech, 52(2/3), 315-339. doi:10.1177/0023830909103177.

    Abstract

    The eyebrows are used as conversational signals in face-to-face spoken interaction (Ekman, 1979). In Sign Language of the Netherlands (NGT), the eyebrows are typically furrowed in content questions, and raised in polar questions (Coerts, 1992). On the other hand, these eyebrow positions are also associated with anger and surprise, respectively, in general human communication (Ekman, 1993). This overlap in the functional load of the eyebrow positions results in a potential conflict for NGT signers when combining these functions simultaneously. In order to investigate the effect of the simultaneous realization of both functions on the eyebrow position we elicited instances of both question types with neutral affect and with various affective states. The data were coded using the Facial Action Coding System (FACS: Ekman, Friesen, & Hager, 2002) for type of brow movement as well as for intensity. FACS allows for the coding of muscle groups, which are termed Action Units (AUs) and which produce facial appearance changes. The results show that linguistic and affective functions of eyebrows may influence each other in NGT. That is, in surprised polar questions and angry content questions a phonetic enhancement takes place of raising and furrowing, respectively. In the items with contrasting eyebrow movements, the grammatical and affective AUs are either blended (occur simultaneously) or they are realized sequentially. Interestingly, the absence of eyebrow raising (marked by AU 1+2) in angry polar questions, and the presence of eyebrow furrowing (realized by AU 4) in surprised content questions suggests that in general AU 4 may be phonetically stronger than AU 1 and AU 2, independent of its linguistic or affective function.
  • De Vos, J., Schriefers, H., Bosch, L. t., & Lemhöfer, K. (2019). Interactive L2 vocabulary acquisition in a lab-based immersion setting. Language, Cognition and Neuroscience, 34(7), 916-935. doi:10.1080/23273798.2019.1599127.

    Abstract

    We investigated to what extent L2 word learning in spoken interaction takes place when learners are unaware of taking part in a language learning study. Using a novel paradigm for approximating naturalistic (but not necessarily non-intentional) L2 learning in the lab, German learners of Dutch were led to believe that the study concerned judging the price of objects. Dutch target words (object names) were selected individually such that these words were unknown to the respective participant. Then, in a dialogue-like task with the experimenter, the participants were first exposed to and then tested on the target words. In comparison to a no-input control group, we observed a clear learning effect especially from the first two exposures, and better learning for cognates than for non-cognates, but no modulating effect of the exposure-production lag. Moreover, some of the acquired knowledge persisted over a six-month period.
  • Vosse, T., & Kempen, G. (2009). In defense of competition during syntactic ambiguity resolution. Journal of Psycholinguistic Research, 38(1), 1-9. doi:10.1007/s10936-008-9075-1.

    Abstract

    In a recent series of publications (Traxler et al., J Mem Lang 39:558–592, 1998; Van Gompel et al., J Mem Lang 52:284–307, 2005; see also Van Gompel et al., in: Kennedy et al. (Eds.), Reading as a perceptual process, Oxford: Elsevier, pp. 621–648, 2000; Van Gompel et al., J Mem Lang 45:225–258, 2001), eye tracking data are reported showing that globally ambiguous (GA) sentences are read faster than locally ambiguous (LA) counterparts. They argue that these data rule out “constraint-based” models where syntactic and conceptual processors operate concurrently and syntactic ambiguity resolution is accomplished by competition. Such models predict the opposite pattern of reading times. However, this argument against competition is valid only in conjunction with two standard assumptions in current constraint-based models of sentence comprehension: (1) that syntactic competitions (e.g., Which is the best attachment site of the incoming constituent?) are pooled together with conceptual competitions (e.g., Which attachment site entails the most plausible meaning?), and (2) that the duration of a competition is a function of the overall (pooled) quality score obtained by each competitor. We argue that it is not necessary to abandon competition as a successful basis for explaining parsing phenomena and that the above-mentioned reading time data can be accounted for by a parallel-interactive model with conceptual and syntactic processors that do not pool their quality scores together. Within the individual linguistic modules, decision-making can very well be competition-based.
  • Vosse, T., & Kempen, G. (2009). The Unification Space implemented as a localist neural net: Predictions and error-tolerance in a constraint-based parser. Cognitive Neurodynamics, 3, 331-346. doi:10.1007/s11571-009-9094-0.

    Abstract

    We introduce a novel computer implementation of the Unification-Space parser (Vosse & Kempen 2000) in the form of a localist neural network whose dynamics is based on interactive activation and inhibition. The wiring of the network is determined by Performance Grammar (Kempen & Harbusch 2003), a lexicalist formalism with feature unification as binding operation. While the network is processing input word strings incrementally, the evolving shape of parse trees is represented in the form of changing patterns of activation in nodes that code for syntactic properties of words and phrases, and for the grammatical functions they fulfill. The system is capable, at least in a qualitative and rudimentary sense, of simulating several important dynamic aspects of human syntactic parsing, including garden-path phenomena and reanalysis, effects of complexity (various types of clause embeddings), fault-tolerance in case of unification failures and unknown words, and predictive parsing (expectation-based analysis, surprisal effects). English is the target language of the parser described.
  • Wagner, M. A., Broersma, M., McQueen, J. M., Dhaene, S., & Lemhöfer, K. (2021). Phonetic convergence to non-native speech: Acoustic and perceptual evidence. Journal of Phonetics, 88: 101076. doi:10.1016/j.wocn.2021.101076.

    Abstract

    While the tendency of speakers to align their speech to that of others acoustic-phonetically has been widely studied among native speakers, very few studies have examined whether natives phonetically converge to non-native speakers. Here we measured native Dutch speakers’ convergence to a non-native speaker with an unfamiliar accent in a novel non-interactive task. Furthermore, we assessed the role of participants’ perceptions of the non-native accent in their tendency to converge. In addition to a perceptual measure (AXB ratings), we examined convergence on different acoustic dimensions (e.g., vowel spectra, fricative CoG, speech rate, overall f0) to determine what dimensions, if any, speakers converge to. We further combined these two types of measures to discover what dimensions weighed in raters’ judgments of convergence. The results reveal overall convergence to our non-native speaker, as indexed by both perceptual and acoustic measures. However, the ratings suggest the stronger participants rated the non-native accent to be, the less likely they were to converge. Our findings add to the growing body of evidence that natives can phonetically converge to non-native speech, even without any apparent socio-communicative motivation to do so. We argue that our results are hard to integrate with a purely social view of convergence.
  • Wang, L., Hagoort, P., & Yang, Y. (2009). Semantic illusion depends on information structure: ERP evidence. Brain Research, 1282, 50-56. doi:10.1016/j.brainres.2009.05.069.

    Abstract

    Next to propositional content, speakers distribute information in their utterances in such a way that listeners can make a distinction between new (focused) and given (non-focused) information. This is referred to as information structure. We measured event-related potentials (ERPs) to explore the role of information structure in semantic processing. Following different questions in wh-question-answer pairs (e.g. What kind of vegetable did Ming buy for cooking today? /Who bought the vegetables for cooking today?), the answer sentences (e.g., Ming bought eggplant/beef to cook today.) contained a critical word, which was either semantically appropriate (eggplant) or inappropriate (beef), and either focus or non-focus. The results showed a full N400 effect only when the critical words were in focus position. In non-focus position a strongly reduced N400 effect was observed, in line with the well-known semantic illusion effect. The results suggest that information structure facilitates semantic processing by devoting more resources to focused information.
  • Warner, N., Fountain, A., & Tucker, B. V. (2009). Cues to perception of reduced flaps. Journal of the Acoustical Society of America, 125(5), 3317-3327. doi:10.1121/1.3097773.

    Abstract

    Natural, spontaneous speech (and even quite careful speech) often shows extreme reduction in many speech segments, even resulting in apparent deletion of consonants. Where the flap ([ɾ]) allophone of /t/ and /d/ is expected in American English, one frequently sees an approximant-like or even vocalic pattern, rather than a clear flap. Still, the /t/ or /d/ is usually perceived, suggesting the acoustic characteristics of a reduced flap are sufficient for perception of a consonant. This paper identifies several acoustic characteristics of reduced flaps based on previous acoustic research (size of intensity dip, consonant duration, and F4 valley) and presents phonetic identification data for continua that manipulate these acoustic characteristics of reduction. The results indicate that the most obvious types of acoustic variability seen in natural flaps do affect listeners' percept of a consonant, but not sufficiently to completely account for the percept. Listeners are affected by the acoustic characteristics of consonant reduction, but they are also very skilled at evaluating variability along the acoustic dimensions that realize reduction.

  • Warner, N., & Cutler, A. (2017). Stress effects in vowel perception as a function of language-specific vocabulary patterns. Phonetica, 74, 81-106. doi:10.1159/000447428.

    Abstract

    Background/Aims: Evidence from spoken word recognition suggests that for English listeners, distinguishing full versus reduced vowels is important, but discerning stress differences involving the same full vowel (as in mu- from music or museum) is not. In Dutch, in contrast, the latter distinction is important. This difference arises from the relative frequency of unstressed full vowels in the two vocabularies. The goal of this paper is to determine how this difference in the lexicon influences the perception of stressed versus unstressed vowels. Methods: All possible sequences of two segments (diphones) in Dutch and in English were presented to native listeners in gated fragments. We recorded identification performance over time throughout the speech signal. The data were analysed here specifically for patterns in perception of stressed versus unstressed vowels. Results: The data reveal significantly larger stress effects (whereby unstressed vowels are harder to identify than stressed vowels) in English than in Dutch. Both language-specific and shared patterns appear regarding which vowels show stress effects. Conclusion: We explain the larger stress effect in English as reflecting the processing demands caused by the difference in use of unstressed vowels in the lexicon. The larger stress effect in English is due to relative inexperience with processing unstressed full vowels.
  • Warner, N., Luna, Q., Butler, L., & Van Volkinburg, H. (2009). Revitalization in a scattered language community: Problems and methods from the perspective of Mutsun language revitalization. International Journal of the Sociology of Language, 198, 135-148. doi:10.1515/IJSL.2009.031.

    Abstract

    This article addresses revitalization of a dormant language whose prospective speakers live in scattered geographical areas. In comparison to increasing the usage of an endangered language, revitalizing a dormant language (one with no living speakers) requires different methods to gain knowledge of the language. Language teaching for a dormant language with a scattered community presents different problems from other teaching situations. In this article, we discuss the types of tasks that must be accomplished for dormant-language revitalization, with particular focus on development of teaching materials. We also address the role of computer technologies, arguing that each use of technology should be evaluated for how effectively it increases fluency. We discuss methods for achieving semi-fluency for the first new speakers of a dormant language, and for spreading the language through the community.
  • Warren, C. M., Tona, K. D., Ouwekerk, L., Van Paridon, J., Poletiek, F. H., Bosch, J. A., & Nieuwenhuis, S. (2019). The neuromodulatory and hormonal effects of transcutaneous vagus nerve stimulation as evidenced by salivary alpha amylase, salivary cortisol, pupil diameter, and the P3 event-related potential. Brain Stimulation, 12(3), 635-642. doi:10.1016/j.brs.2018.12.224.

    Abstract

    Background

    Transcutaneous vagus nerve stimulation (tVNS) is a new, non-invasive technique being investigated as an intervention for a variety of clinical disorders, including epilepsy and depression. It is thought to exert its therapeutic effect by increasing central norepinephrine (NE) activity, but the evidence supporting this notion is limited.
    Objective

    In order to test for an impact of tVNS on psychophysiological and hormonal indices of noradrenergic function, we applied tVNS in concert with assessment of salivary alpha amylase (SAA) and cortisol, pupil size, and electroencephalograph (EEG) recordings.
    Methods

    Across three experiments, we applied real and sham tVNS to 61 healthy participants while they performed a set of simple stimulus-discrimination tasks. Before and after the task, as well as during one break, participants provided saliva samples and had their pupil size recorded. EEG was recorded throughout the task. The target for tVNS was the cymba conchae, which is heavily innervated by the auricular branch of the vagus nerve. Sham stimulation was applied to the ear lobe.
    Results

    P3 amplitude was not affected by tVNS (Experiment 1A: N=24; Experiment 1B: N=20; Bayes factor supporting null model=4.53), nor was pupil size (Experiment 2: N=16; interaction of treatment and time: p=0.79). However, tVNS increased SAA (Experiments 1A and 2: N=25) and attenuated the decline of salivary cortisol compared to sham (Experiment 2: N=17), as indicated by significant interactions involving treatment and time (p=.023 and p=.040, respectively).
    Conclusion

    These findings suggest that tVNS modulates hormonal indices but not psychophysiological indices of noradrenergic function.
  • Waymel, A., Friedrich, P., Bastian, P.-A., Forkel, S. J., & Thiebaut de Schotten, M. (2020). Anchoring the human olfactory system within a functional gradient. NeuroImage, 216: 116863. doi:10.1016/j.neuroimage.2020.116863.

    Abstract

    Margulies et al. (2016) demonstrated the existence of at least five independent functional connectivity gradients in the human brain. However, it is unclear how these functional gradients might link to anatomy. The dual origin theory proposes that differences in cortical cytoarchitecture originate from two trends of progressive differentiation between the different layers of the cortex, referred to as the hippocampocentric and olfactocentric systems. When conceptualising the functional connectivity gradients within the evolutionary framework of the Dual Origin theory, the first gradient likely represents the hippocampocentric system anatomically. Here we expand on this concept and demonstrate that the fifth gradient likely links to the olfactocentric system. We describe the anatomy of the latter as well as the evidence to support this hypothesis. Together, the first and fifth gradients might help to model the Dual Origin theory of the human brain and inform brain models and pathologies.
  • Weber, K., Christiansen, M., Indefrey, P., & Hagoort, P. (2019). Primed from the start: Syntactic priming during the first days of language learning. Language Learning, 69(1), 198-221. doi:10.1111/lang.12327.

    Abstract

    New linguistic information must be integrated into our existing language system. Using a novel experimental task that incorporates a syntactic priming paradigm into artificial language learning, we investigated how new grammatical regularities and words are learned. This innovation allowed us to control the language input the learner received, while the syntactic priming paradigm provided insight into the nature of the underlying syntactic processing machinery. The results of the present study pointed to facilitatory syntactic processing effects within the first days of learning: Syntactic and lexical priming effects revealed participants’ sensitivity to both novel words and word orders. This suggested that novel syntactic structures and their meaning (form–function mapping) can be acquired rapidly through incidental learning. More generally, our study indicated similar mechanisms for learning and processing in both artificial and natural languages, with implications for the relationship between first and second language learning.
  • Weber, K., Micheli, C., Ruigendijk, E., & Rieger, J. (2019). Sentence processing is modulated by the current linguistic environment and a priori information: An fMRI study. Brain and Behavior, 9(7): e01308. doi:10.1002/brb3.1308.

    Abstract

    Introduction
    Words are not processed in isolation but in rich contexts that are used to modulate and facilitate language comprehension. Here, we investigate distinct neural networks underlying two types of contexts, the current linguistic environment and verb‐based syntactic preferences.

    Methods
    We had two main manipulations. The first was the current linguistic environment, where the relative frequencies of two syntactic structures (prepositional object [PO] and double‐object [DO]) would either follow everyday linguistic experience or not. The second concerned the preference toward one or the other structure depending on the verb; learned in everyday language use and stored in memory. German participants were reading PO and DO sentences in German while brain activity was measured with functional magnetic resonance imaging.

    Results
    First, the anterior cingulate cortex (ACC) showed a pattern of activation that integrated the current linguistic environment with everyday linguistic experience. When the input did not match everyday experience, the unexpected frequent structure showed higher activation in the ACC than the other conditions and more connectivity from the ACC to posterior parts of the language network. Second, verb‐based surprisal of seeing a structure given a verb (PO verb preference but DO structure presentation) resulted, within the language network (left inferior frontal and left middle/superior temporal gyrus) and the precuneus, in increased activation compared to a predictable verb‐structure pairing.

    Conclusion
    In conclusion, (1) beyond the canonical language network, brain areas engaged in prediction and error signaling, such as the ACC, might use the statistics of syntactic structures to modulate language processing, (2) the language network is directly engaged in processing verb preferences. These two networks show distinct influences on sentence processing.

    Additional information

    Supporting information
  • Weber, K., & Indefrey, P. (2009). Syntactic priming in German–English bilinguals during sentence comprehension. Neuroimage, 46, 1164-1172. doi:10.1016/j.neuroimage.2009.03.040.

    Abstract

    A longstanding question in bilingualism is whether syntactic information is shared between the two language processing systems. We used an fMRI repetition suppression paradigm to investigate syntactic priming in reading comprehension in German–English late-acquisition bilinguals. In comparison to conventional subtraction analyses in bilingual experiments, repetition suppression has the advantage of being able to detect neuronal populations that are sensitive to properties that are shared by consecutive stimuli. In this study, we manipulated the syntactic structure between prime and target sentences. A sentence with a passive sentence structure in English was preceded either by a passive or by an active sentence in English or German. We looked for repetition suppression effects in left inferior frontal, left precentral and left middle temporal regions of interest. These regions were defined by a contrast of all non-target sentences in German and English versus the baseline of sentence-format consonant strings. We found decreases in activity (repetition suppression effects) in these regions of interest following the repetition of syntactic structure from the first to the second language and within the second language.
    Moreover, a separate behavioural experiment using a word-by-word reading paradigm similar to the fMRI experiment showed faster reading times for primed compared to unprimed English target sentences regardless of whether they were preceded by an English or a German sentence of the same structure.
    We conclude that there is interaction between the language processing systems and that at least some syntactic information is shared between a bilingual's languages with similar syntactic structures.

  • Wegman, J., Tyborowska, A., Hoogman, M., Vasquez, A. A., & Janzen, G. (2017). The brain-derived neurotrophic factor Val66Met polymorphism affects encoding of object locations during active navigation. European Journal of Neuroscience, 45(12), 1501-1511. doi:10.1111/ejn.13416.

    Abstract

    The brain-derived neurotrophic factor (BDNF) was shown to be involved in spatial memory and spatial strategy preference. A naturally occurring single nucleotide polymorphism of the BDNF gene (Val66Met) affects activity-dependent secretion of BDNF. The current event-related fMRI study on preselected groups of ‘Met’ carriers and homozygotes of the ‘Val’ allele investigated the role of this polymorphism on encoding and retrieval in a virtual navigation task in 37 healthy volunteers. In each trial, participants navigated toward a target object. During encoding, three positional cues (columns) with directional cues (shadows) were available. During retrieval, the invisible target had to be replaced while either two objects without shadows (objects trial) or one object with a shadow (shadow trial) were available. The experiment consisted of blocks, informing participants of which trial type would be most likely to occur during retrieval. We observed no differences between genetic groups in task performance or time to complete the navigation tasks. The imaging results show that Met carriers compared to Val homozygotes activate the left hippocampus more during successful object location memory encoding. The observed effects were independent of non-significant performance differences or volumetric differences in the hippocampus. These results indicate that variations of the BDNF gene affect memory encoding during spatial navigation, suggesting that lower levels of BDNF in the hippocampus result in less efficient spatial memory processing.
  • Weissbart, H., Kandylaki, K. D., & Reichenbach, T. (2020). Cortical tracking of surprisal during continuous speech comprehension. Journal of Cognitive Neuroscience, 32, 155-166. doi:10.1162/jocn_a_01467.

    Abstract

    Speech comprehension requires rapid online processing of a continuous acoustic signal to extract structure and meaning. Previous studies on sentence comprehension have found neural correlates of the predictability of a word given its context, as well as of the precision of such a prediction. However, they have focused on single sentences and on particular words in those sentences. Moreover, they compared neural responses to words with low and high predictability, as well as with low and high precision. However, in speech comprehension, a listener hears many successive words whose predictability and precision vary over a large range. Here, we show that cortical activity in different frequency bands tracks word surprisal in continuous natural speech and that this tracking is modulated by precision. We obtain these results through quantifying surprisal and precision from naturalistic speech using a deep neural network and through relating these speech features to EEG responses of human volunteers acquired during auditory story comprehension. We find significant cortical tracking of surprisal at low frequencies, including the delta band as well as in the higher frequency beta and gamma bands, and observe that the tracking is modulated by the precision. Our results pave the way to further investigate the neurobiology of natural speech comprehension.
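    For readers unfamiliar with the surprisal measure tracked in this study: a word's surprisal is −log P(word | context), so less predictable words carry higher surprisal. A minimal sketch of the quantity (using toy bigram counts as a stand-in language model; the paper itself derived probabilities from a deep neural network, and the corpus and function names below are illustrative assumptions):

```python
import math
from collections import defaultdict

# Toy corpus standing in for a trained language model (hypothetical data).
corpus = "the dog barks the dog sleeps the cat sleeps".split()

# Count bigrams and their left contexts.
bigram = defaultdict(int)
context = defaultdict(int)
for prev, word in zip(corpus, corpus[1:]):
    bigram[(prev, word)] += 1
    context[prev] += 1

def surprisal(prev, word):
    """Surprisal in bits: -log2 P(word | prev)."""
    p = bigram[(prev, word)] / context[prev]
    return -math.log2(p)

# 'dog' follows 'the' in 2 of 3 'the' contexts -> -log2(2/3) ≈ 0.585 bits
print(round(surprisal("the", "dog"), 3))
```

    In the study, a per-word surprisal time series like this was regressed against EEG responses to test for cortical tracking.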
  • Weissenborn, J. (1981). L'acquisition des prepositions spatiales: problemes cognitifs et linguistiques. In C. Schwarze (Ed.), Analyse des prépositions: IIIme colloque franco-allemand de linguistique théorique du 2 au 4 février 1981 à Constance (pp. 251-285). Tübingen: Niemeyer.
  • Wells, J. B., Christiansen, M. H., Race, D. S., Acheson, D. J., & MacDonald, M. C. (2009). Experience and sentence processing: Statistical learning and relative clause comprehension. Cognitive Psychology, 58(2), 250-271. doi:10.1016/j.cogpsych.2008.08.002.

    Abstract

    Many explanations of the difficulties associated with interpreting object relative clauses appeal to the demands that object relatives make on working memory. MacDonald and Christiansen [MacDonald, M. C., & Christiansen, M. H. (2002). Reassessing working memory: Comment on Just and Carpenter (1992) and Waters and Caplan (1996). Psychological Review, 109, 35-54] pointed to variations in reading experience as a source of differences, arguing that the unique word order of object relatives makes their processing more difficult and more sensitive to the effects of previous experience than the processing of subject relatives. This hypothesis was tested in a large-scale study manipulating reading experiences of adults over several weeks. The group receiving relative clause experience increased reading speeds for object relatives more than for subject relatives, whereas a control experience group did not. The reading time data were compared to performance of a computational model given different amounts of experience. The results support claims for experience-based individual differences and an important role for statistical learning in sentence comprehension processes.
  • Whitaker, K., & Guest, O. (2020). #bropenscience is broken science: Kirstie Whitaker and Olivia Guest ask how open ‘open science’ really is. The Psychologist, 33, 34-37.
  • Wiese, R., Orzechowska, P., Alday, P. M., & Ulbrich, C. (2017). Structural Principles or Frequency of Use? An ERP Experiment on the Learnability of Consonant Clusters. Frontiers in Psychology, 7: 2005. doi:10.3389/fpsyg.2016.02005.

    Abstract

    Phonological knowledge of a language involves knowledge about which segments can be combined under what conditions. Languages vary in the quantity and quality of licensed combinations, in particular sequences of consonants, with Polish being a language with a large inventory of such combinations. The present paper reports on a two-session experiment in which Polish-speaking adult participants learned nonce words with final consonant clusters. The aim was to study the role of two factors which potentially play a role in the learning of phonotactic structures: the phonological principle of sonority (ordering sound segments within the syllable according to their inherent loudness) and the (non-)existence of a cluster as a usage-based phenomenon. EEG responses in two different time windows (in contrast to behavioral responses) show linguistic processing by native speakers of Polish to be sensitive to both distinctions, in spite of the fact that Polish is rich in sonority-violating clusters. In particular, a general learning effect in terms of an N400 effect was found, which was demonstrated to be different for sonority-obeying clusters than for sonority-violating clusters. Furthermore, significant interactions of formedness and session, and of existence and session, demonstrate that both factors, the sonority principle and the frequency pattern, play a role in the learning process.
  • Wilkinson, G. S., Adams, D. M., Haghani, A., Lu, A. T., Zoller, J., Breeze, C. E., Arnold, B. D., Ball, H. C., Carter, G. G., Cooper, L. N., Dechmann, D. K. N., Devanna, P., Fasel, N. J., Galazyuk, A. V., Günther, L., Hurme, E., Jones, G., Knörnschild, M., Lattenkamp, E. Z., Li, C. Z., Mayer, F., Reinhardt, J. A., Medellin, R. A., Nagy, M., Pope, B., Power, M. L., Ransome, R. D., Teeling, E. C., Vernes, S. C., Zamora-Mejías, D., Zhang, J., Faure, P. A., Greville, L. J., Herrera M., L. G., Flores-Martínez, J. J., & Horvath, S. (2021). DNA methylation predicts age and provides insight into exceptional longevity of bats. Nature Communications, 12: 1615. doi:10.1038/s41467-021-21900-2.

    Abstract

    Exceptionally long-lived species, including many bats, rarely show overt signs of aging, making it difficult to determine why species differ in lifespan. Here, we use DNA methylation (DNAm) profiles from 712 known-age bats, representing 26 species, to identify epigenetic changes associated with age and longevity. We demonstrate that DNAm accurately predicts chronological age. Across species, longevity is negatively associated with the rate of DNAm change at age-associated sites. Furthermore, analysis of several bat genomes reveals that hypermethylated age- and longevity-associated sites are disproportionately located in promoter regions of key transcription factors (TF) and enriched for histone and chromatin features associated with transcriptional regulation. Predicted TF binding site motifs and enrichment analyses indicate that age-related methylation change is influenced by developmental processes, while longevity-related DNAm change is associated with innate immunity or tumorigenesis genes, suggesting that bat longevity results from augmented immune response and cancer suppression.

    Additional information

    supplementary information
  • Willems, R. M., Toni, I., Hagoort, P., & Casasanto, D. (2009). Body-specific motor imagery of hand actions: Neural evidence from right- and left-handers. Frontiers in Human Neuroscience, 3: 39. doi:10.3389/neuro.09.039.2009.

    Abstract

    If motor imagery uses neural structures involved in action execution, then the neural correlates of imagining an action should differ between individuals who tend to execute the action differently. Here we report fMRI data showing that motor imagery is influenced by the way people habitually perform motor actions with their particular bodies; that is, motor imagery is ‘body-specific’ (Casasanto, 2009). During mental imagery for complex hand actions, activation of cortical areas involved in motor planning and execution was left-lateralized in right-handers but right-lateralized in left-handers. We conclude that motor imagery involves the generation of an action plan that is grounded in the participant’s motor habits, not just an abstract representation at the level of the action’s goal. People with different patterns of motor experience form correspondingly different neurocognitive representations of imagined actions.
  • Willems, R. M., & Hagoort, P. (2009). Broca's region: Battles are not won by ignoring half of the facts. Trends in Cognitive Sciences, 13(3), 101. doi:10.1016/j.tics.2008.12.001.
  • Willems, R. M., Ozyurek, A., & Hagoort, P. (2009). Differential roles for left inferior frontal and superior temporal cortex in multimodal integration of action and language. Neuroimage, 47, 1992-2004. doi:10.1016/j.neuroimage.2009.05.066.

    Abstract

    Several studies indicate that both posterior superior temporal sulcus/middle temporal gyrus (pSTS/MTG) and left inferior frontal gyrus (LIFG) are involved in integrating information from different modalities. Here we investigated the respective roles of these two areas in integration of action and language information. We exploited the fact that the semantic relationship between language and different forms of action (i.e. co-speech gestures and pantomimes) is radically different. Speech and co-speech gestures are always produced together, and gestures are not unambiguously understood without speech. On the contrary, pantomimes are not necessarily produced together with speech and can be easily understood without speech. We presented speech together with these two types of communicative hand actions in matching or mismatching combinations to manipulate semantic integration load. Left and right pSTS/MTG were only involved in semantic integration of speech and pantomimes. Left IFG on the other hand was involved in integration of speech and co-speech gestures as well as of speech and pantomimes. Effective connectivity analyses showed that depending upon the semantic relationship between language and action, LIFG modulates activation levels in left pSTS.

    This suggests that integration in pSTS/MTG involves the matching of two input streams for which there is a relatively stable common object representation, whereas integration in LIFG is better characterized as the on-line construction of a new and unified representation of the input streams. In conclusion, pSTS/MTG and LIFG are differentially involved in multimodal integration, crucially depending upon the semantic relationship between the input streams.

    Additional information

    Supplementary table S1
  • Willems, R. M., & Hagoort, P. (2009). Hand preference influences neural correlates of action observation. Brain Research, 1269, 90-104. doi:10.1016/j.brainres.2009.02.057.

    Abstract

    It has been argued that we map observed actions onto our own motor system. Here we added to this issue by investigating whether hand preference influences the neural correlates of action observation of simple, essentially meaningless hand actions. Such an influence would argue for an intricate neural coupling between action production and action observation, which goes beyond effects of motor repertoire or explicit motor training, as has been suggested before. Indeed, parts of the human motor system exhibited a close coupling between action production and action observation. Ventral premotor and inferior and superior parietal cortices showed differential activation for left- and right-handers that was similar during action production as well as during action observation. This suggests that mapping observed actions onto the observer's own motor system is a core feature of action observation - at least for actions that do not have a clear goal or meaning. Basic differences in the way we act upon the world are not only reflected in neural correlates of action production, but can also influence the brain basis of action observation.
  • Willems, R. M., & Peelen, M. V. (2021). How context changes the neural basis of perception and language. iScience, 24(5): 102392. doi:10.1016/j.isci.2021.102392.

    Abstract

    Cognitive processes—from basic sensory analysis to language understanding—are typically contextualized. While the importance of considering context for understanding cognition has long been recognized in psychology and philosophy, it has not yet had much impact on cognitive neuroscience research, where cognition is often studied in decontextualized paradigms. Here, we present examples of recent studies showing that context changes the neural basis of diverse cognitive processes, including perception, attention, memory, and language. Within the domains of perception and language, we review neuroimaging results showing that context interacts with stimulus processing, changes activity in classical perception and language regions, and recruits additional brain regions that contribute crucially to naturalistic perception and language. We discuss how contextualized cognitive neuroscience will allow for discovering new principles of the mind and brain.
  • Willems, R. M., Nastase, S. A., & Milivojevic, B. (2020). Narratives for Neuroscience. Trends in Neurosciences, 43(5), 271-273. doi:10.1016/j.tins.2020.03.003.

    Abstract

    People organize and convey their thoughts according to narratives. However, neuroscientists are often reluctant to incorporate narrative stimuli into their experiments. We argue that narratives deserve wider adoption in human neuroscience because they tap into the brain’s native machinery for representing the world and provide rich variability for testing hypotheses.
  • Wilson, B., Spierings, M., Ravignani, A., Mueller, J. L., Mintz, T. H., Wijnen, F., Van der Kant, A., Smith, K., & Rey, A. (2020). Non‐adjacent dependency learning in humans and other animals. Topics in Cognitive Science, 12(3), 843-858. doi:10.1111/tops.12381.

    Abstract

    Learning and processing natural language requires the ability to track syntactic relationships between words and phrases in a sentence, which are often separated by intervening material. These nonadjacent dependencies can be studied using artificial grammar learning paradigms and structured sequence processing tasks. These approaches have been used to demonstrate that human adults, infants and some nonhuman animals are able to detect and learn dependencies between nonadjacent elements within a sequence. However, learning nonadjacent dependencies appears to be more cognitively demanding than detecting dependencies between adjacent elements, and only occurs in certain circumstances. In this review, we discuss different types of nonadjacent dependencies in language and in artificial grammar learning experiments, and how these differences might impact learning. We summarize different types of perceptual cues that facilitate learning by highlighting the relationship between dependent elements, bringing them closer together either physically, attentionally, or perceptually. Finally, we review artificial grammar learning experiments in human adults, infants, and nonhuman animals, and discuss how similarities and differences observed across these groups can provide insights into how language is learned across development and how these language‐related abilities might have evolved.
  • Wirthlin, M., Chang, E. F., Knörnschild, M., Krubitzer, L. A., Mello, C. V., Miller, C. T., Pfenning, A. R., Vernes, S. C., Tchernichovski, O., & Yartsev, M. M. (2019). A modular approach to vocal learning: Disentangling the diversity of a complex behavioral trait. Neuron, 104(1), 87-99. doi:10.1016/j.neuron.2019.09.036.

    Abstract

    Vocal learning is a behavioral trait in which the social and acoustic environment shapes the vocal repertoire of individuals. Over the past century, the study of vocal learning has progressed at the intersection of ecology, physiology, neuroscience, molecular biology, genomics, and evolution. Yet, despite the complexity of this trait, vocal learning is frequently described as a binary trait, with species being classified as either vocal learners or vocal non-learners. As a result, studies have largely focused on a handful of species for which strong evidence for vocal learning exists. Recent studies, however, suggest a continuum in vocal learning capacity across taxa. Here, we further suggest that vocal learning is a multi-component behavioral phenotype comprised of distinct yet interconnected modules. Discretizing the vocal learning phenotype into its constituent modules would facilitate integration of findings across a wider diversity of species, taking advantage of the ways in which each excels in a particular module, or in a specific combination of features. Such comparative studies can improve understanding of the mechanisms and evolutionary origins of vocal learning. We propose an initial set of vocal learning modules supported by behavioral and neurobiological data and highlight the need for diversifying the field in order to disentangle the complexity of the vocal learning phenotype.
  • Wittenburg, P., Lautenschlager, M., Thiemann, H., Baldauf, C., & Trilsbeek, P. (2020). FAIR Practices in Europe. Data Intelligence, 2(1-2), 257-263. doi:10.1162/dint_a_00048.

    Abstract

    Institutions driving fundamental research at the cutting edge, such as those of the Max Planck Society (MPS), have taken steps to optimize data management and stewardship in order to address new scientific questions. In this paper we selected three MPS institutes, from the humanities, environmental sciences and natural sciences, as examples to illustrate the efforts made to integrate large amounts of data from collaborators worldwide and to create a data space that is ready to be exploited for new insights based on data-intensive science methods. For this integration, the typical challenges of fragmentation, poor quality and social differences had to be overcome. In all three cases, well-managed repositories, driven by scientific needs and by harmonization principles agreed upon in the community, were the core pillars. It is not surprising that these principles are closely aligned with what have now become the FAIR principles. The FAIR principles confirm the correctness of earlier decisions, and their clear formulation identifies the gaps which the projects need to address.
  • Wnuk, E., Laophairoj, R., & Majid, A. (2020). Smell terms are not rara: A semantic investigation of odor vocabulary in Thai. Linguistics, 58(4), 937-966. doi:10.1515/ling-2020-0009.
  • Wnuk, E., De Valk, J. M., Huisman, J. L. A., & Majid, A. (2017). Hot and cold smells: Odor-temperature associations across cultures. Frontiers in Psychology, 8: 1373. doi:10.3389/fpsyg.2017.01373.

    Abstract

    It is often assumed that odors are associated with hot and cold temperature, since odor processing may trigger thermal sensations, such as coolness in the case of mint. It is unknown, however, whether people make consistent temperature associations for a variety of everyday odors, and, if so, what determines them. Previous work investigating the bases of cross-modal associations suggests a number of possibilities, including universal forces (e.g., perception), as well as culture-specific forces (e.g., language and cultural beliefs). In this study, we examined odor-temperature associations in three cultures—Maniq (N = 11), Thai (N = 24), and Dutch (N = 24)—who differ with respect to their cultural preoccupation with odors, their odor lexicons, and their beliefs about the relationship of odors (and odor objects) to temperature. Participants matched 15 odors to temperature by touching cups filled with hot or cold water, and described the odors in their native language. The results showed no consistent associations among the Maniq, and only a handful of consistent associations between odor and temperature among the Thai and Dutch. The consistent associations differed across the two groups, arguing against their universality. Further analysis revealed that the cross-modal associations could not be explained by language, but could be the result of cultural beliefs.
  • Woensdregt, M., Cummins, C., & Smith, K. (2021). A computational model of the cultural co-evolution of language and mindreading. Synthese, 199, 1347-1385. doi:10.1007/s11229-020-02798-7.

    Abstract

    Several evolutionary accounts of human social cognition posit that language has co-evolved with the sophisticated mindreading abilities of modern humans. It has also been argued that these mindreading abilities are the product of cultural, rather than biological, evolution. Taken together, these claims suggest that the evolution of language has played an important role in the cultural evolution of human social cognition. Here we present a new computational model which formalises the assumptions that underlie this hypothesis, in order to explore how language and mindreading interact through cultural evolution. This model treats communicative behaviour as an interplay between the context in which communication occurs, an agent’s individual perspective on the world, and the agent’s lexicon. However, each agent’s perspective and lexicon are private mental representations, not directly observable to other agents. Learners are therefore confronted with the task of jointly inferring the lexicon and perspective of their cultural parent, based on their utterances in context. Simulation results show that given these assumptions, an informative lexicon evolves not just under a pressure to be successful at communicating, but also under a pressure for accurate perspective-inference. When such a lexicon evolves, agents become better at inferring others’ perspectives; not because their innate ability to learn about perspectives changes, but because sharing a language (of the right type) with others helps them to do so.
  • Wolf, M. C., Meyer, A. S., Rowland, C. F., & Hintz, F. (2021). The effects of input modality, word difficulty and reading experience on word recognition accuracy. Collabra: Psychology, 7(1): 24919. doi:10.1525/collabra.24919.

    Abstract

    Language users encounter words in at least two different modalities. Arguably, the most frequent encounters are in spoken or written form. Previous research has shown that – compared to the spoken modality – written language features more difficult words. Thus, frequent reading might have effects on word recognition. In the present study, we investigated 1) whether input modality (spoken, written, or bimodal) has an effect on word recognition accuracy, 2) whether this modality effect interacts with word difficulty, 3) whether the interaction of word difficulty and reading experience impacts word recognition accuracy, and 4) whether this interaction is influenced by input modality. To do so, we re-analysed a dataset that was collected in the context of vocabulary test development to assess in which modality test words should be presented. Participants had carried out a word recognition task, where non-words and words of varying difficulty were presented in auditory, visual and audio-visual modalities. In addition to this main experiment, participants had completed a receptive vocabulary test and an author recognition test to measure their reading experience. Our re-analyses did not reveal evidence for an effect of input modality on word recognition accuracy, nor for interactions with word difficulty or language experience. Word difficulty interacted with reading experience in that frequent readers were more accurate in recognizing difficult words than individuals who read less frequently. Practical implications are discussed.
  • Wolf, M. C., Muijselaar, M. M. L., Boonstra, A. M., & De Bree, E. H. (2019). The relationship between reading and listening comprehension: Shared and modality-specific components. Reading and Writing, 32(7), 1747-1767. doi:10.1007/s11145-018-9924-8.

    Abstract

    This study aimed to increase our understanding on the relationship between reading and listening comprehension. Both in comprehension theory and in educational practice, reading and listening comprehension are often seen as interchangeable, overlooking modality-specific aspects of them separately. Three questions were addressed. First, it was examined to what extent reading and listening comprehension comprise modality-specific, distinct skills or an overlapping, domain-general skill in terms of the amount of explained variance in one comprehension type by the opposite comprehension type. Second, general and modality-unique subskills of reading and listening comprehension were sought by assessing the contributions of the foundational skills word reading fluency, vocabulary, memory, attention, and inhibition to both comprehension types. Lastly, the practice of using either listening comprehension or vocabulary as a proxy of general comprehension was investigated. Reading and listening comprehension tasks with the same format were assessed in 85 second and third grade children. Analyses revealed that reading comprehension explained 34% of the variance in listening comprehension, and listening comprehension 40% of reading comprehension. Vocabulary and word reading fluency were found to be shared contributors to both reading and listening comprehension. None of the other cognitive skills contributed significantly to reading or listening comprehension. These results indicate that only part of the comprehension process is indeed domain-general and not influenced by the modality in which the information is provided. Especially vocabulary seems to play a large role in this domain-general part. The findings warrant a more prominent focus of modality-specific aspects of both reading and listening comprehension in research and education.
  • Wong, M. M. K., Watson, L. M., & Becker, E. B. E. (2017). Recent advances in modelling of cerebellar ataxia using induced pluripotent stem cells. Journal of Neurology & Neuromedicine, 2(7), 11-15. doi:10.29245/2572.942X/2017/7.1134.

    Abstract

    The cerebellar ataxias are a group of incurable brain disorders that are caused primarily by the progressive dysfunction and degeneration of cerebellar Purkinje cells. The lack of reliable disease models for the heterogeneous ataxias has hindered the understanding of the underlying pathogenic mechanisms as well as the development of effective therapies for these devastating diseases. Recent advances in the field of induced pluripotent stem cell (iPSC) technology offer new possibilities to better understand and potentially reverse disease pathology. Given the neurodevelopmental phenotypes observed in several types of ataxias, iPSC-based models have the potential to provide significant insights into disease progression, as well as opportunities for the development of early intervention therapies. To date, however, very few studies have successfully used iPSC-derived cells to model cerebellar ataxias. In this review, we focus on recent breakthroughs in generating human iPSC-derived Purkinje cells. We also highlight the future challenges that will need to be addressed in order to fully exploit these models for the modelling of the molecular mechanisms underlying cerebellar ataxias and the development of effective therapeutics.
  • Wongratwanich, P., Shimabukuro, K., Konishi, M., Nagasaki, T., Ohtsuka, M., Suei, Y., Nakamoto, T., Verdonschot, R. G., Kanesaki, T., Sutthiprapaporn, P., & Kakimoto, N. (2021). Do various imaging modalities provide potential early detection and diagnosis of medication-related osteonecrosis of the jaw? A review. Dentomaxillofacial Radiology, 50: 20200417. doi:10.1259/dmfr.20200417.

    Abstract

    Objective: Patients with medication-related osteonecrosis of the jaw (MRONJ) often visit their dentists at advanced stages and subsequently require treatments that greatly affect quality of life. Currently, no clear diagnostic criteria exist to assess MRONJ, and the definitive diagnosis relies solely on clinical bone exposure. This ambiguity leads to diagnostic delay, complications, and unnecessary burden. This article aims to review the imaging modalities used for MRONJ and their findings, to provide possible approaches for early detection.

    Methods: Literature searches were conducted using PubMed, Web of Science, Scopus, and Cochrane Library to review all diagnostic imaging modalities for MRONJ.

    Results: Panoramic radiography offers a fundamental understanding of the lesions. Imaging findings were comparable between non-exposed and exposed MRONJ, showing osteolysis, osteosclerosis, and thickened lamina dura. Mandibular cortex index Class II could be a potential early MRONJ indicator. Three-dimensional modalities (CT and CBCT) were able to show more features unique to MRONJ, such as a solid-type periosteal reaction, buccal predominance of cortical perforation, and a bone-within-bone appearance. On MRI, vital bone is hypointense on T1WI and hyperintense on T2WI and STIR, whereas necrotic bone is hypointense on all of T1WI, T2WI, and STIR. Functional imaging is the most sensitive method but is usually performed for metastasis detection rather than as a diagnostic tool for early MRONJ.

    Conclusion: Currently, MRONJ-specific imaging features cannot be firmly established. However, the current data are valuable as they may lead to a more efficient diagnostic procedure along with a more suitable selection of imaging modalities.
  • Wood, N. (2009). Field recording for dummies. In A. Majid (Ed.), Field manual volume 12 (pp. V). Nijmegen: Max Planck Institute for Psycholinguistics.
  • Xiong, K., Verdonschot, R. G., & Tamaoka, K. (2020). The time course of brain activity in reading identical cognates: An ERP study of Chinese - Japanese bilinguals. Journal of Neurolinguistics, 55: 100911. doi:10.1016/j.jneuroling.2020.100911.

    Abstract

    Previous studies suggest that bilinguals' lexical access is language non-selective, especially for orthographically identical translation equivalents across languages (i.e., identical cognates). The present study investigated how such words (e.g., a word meaning "school" in both Chinese and Japanese) are processed in the (late) Chinese - Japanese bilingual brain. Using an L2-Japanese lexical decision task, both behavioral and electrophysiological data were collected. Reaction times (RTs), as well as the N400 component, showed that cognates are more easily recognized than non-cognates. Additionally, an early component (i.e., the N250), potentially reflecting activation at the word-form level, was also found. Cognates elicited a more positive N250 than non-cognates in the frontal region, indicating that the cognate facilitation effect occurs at an early, word-form stage of processing for languages with logographic scripts.
  • Yager, J., & Burenhult, N. (2017). Jedek: a newly discovered Aslian variety of Malaysia. Linguistic Typology, 21(3), 493-545. doi:10.1515/lingty-2017-0012.

    Abstract

    Jedek is a previously unrecognized variety of the Northern Aslian subgroup of the Aslian branch of the Austroasiatic language family. It is spoken by c. 280 individuals in the resettlement area of Sungai Rual, near Jeli in Kelantan state, Peninsular Malaysia. The community originally consisted of several bands of foragers along the middle reaches of the Pergau river. Jedek’s distinct status first became known during a linguistic survey carried out in the DOBES project Tongues of the Semang (2005-2011). This paper describes the process leading up to its discovery and provides an overview of its typological characteristics.
  • Yang, W., Chan, A., Chang, F., & Kidd, E. (2020). Four-year-old Mandarin-speaking children’s online comprehension of relative clauses. Cognition, 196: 104103. doi:10.1016/j.cognition.2019.104103.

    Abstract

    A core question in language acquisition is whether children’s syntactic processing is experience-dependent and language-specific, or whether it is governed by abstract, universal syntactic machinery. We address this question by presenting corpus and online processing data from children learning Mandarin Chinese, a language that has been important in debates about the universality of parsing processes. The corpus data revealed that two different relative clause (RC) constructions in Mandarin are differentially used to modify syntactic subjects and objects. In the experiment, 4-year-old children’s eye-movements were recorded as they listened to the two RC construction types (e.g., Can you pick up the pig that pushed the sheep?). A permutation analysis showed that children’s ease of comprehension was closely aligned with the distributional frequencies, suggesting that syntactic processing preferences are shaped by input experience of these constructions.

    Additional information

    1-s2.0-S001002771930277X-mmc1.pdf
  • Yang, J., Cai, Q., & Tian, X. (2020). How do we segment text? Two-stage chunking operation in reading. eNeuro, 7(3): ENEURO.0425-19.2020. doi:10.1523/ENEURO.0425-19.2020.

    Abstract

    Chunking in language comprehension is a process that segments continuous linguistic input into smaller chunks that are in the reader’s mental lexicon. Effective chunking during reading facilitates disambiguation and enhances efficiency for comprehension. However, the chunking mechanisms remain elusive, especially in reading, where information arrives simultaneously yet some writing systems, such as Chinese, lack explicit cues for marking boundaries. What are the mechanisms of chunking that mediate the reading of text containing hierarchical information? We investigated this question by manipulating the lexical status of chunks at distinct levels in four-character Chinese strings, including the two-character local chunk and the four-character global chunk. Male and female human participants were asked to make lexical decisions on these strings in a behavioral experiment, followed by a passive reading task during which their electroencephalography (EEG) was recorded. The behavioral results showed that the lexical decision time for lexicalized two-character local chunks was influenced by the lexical status of the four-character global chunk, but not vice versa, indicating that the processing of global chunks took priority over local chunks. The EEG results revealed that familiar lexical chunks were detected simultaneously at both levels and further processed in a different temporal order: the onset of lexical access for the global chunks was earlier than that for the local chunks. These consistent results suggest a two-stage operation for chunking in reading: the simultaneous detection of familiar lexical chunks at multiple levels around 100 ms, followed by recognition of chunks with global precedence.
  • Yoshihara, M., Nakayama, M., Verdonschot, R. G., Hino, Y., & Lupker, S. J. (2021). Orthographic properties of distractors do influence phonological Stroop effects: Evidence from Japanese Romaji distractors. Memory & Cognition, 49(3), 600-612. doi:10.3758/s13421-020-01103-8.

    Abstract

    In attempting to understand mental processes, it is important to use a task that appropriately reflects the underlying processes being investigated. Recently, Verdonschot and Kinoshita (Memory & Cognition, 46, 410-425, 2018) proposed that a variant of the Stroop task, the "phonological Stroop task", might be a suitable tool for investigating speech production. The major advantage of this task is that it is apparently not affected by the orthographic properties of the stimuli, unlike other, commonly used, tasks (e.g., associative-cuing and word-reading tasks). The viability of this proposal was examined in the present experiments by manipulating the script types of Japanese distractors. For Romaji distractors (e.g., "kushi"), color-naming responses were faster when the initial phoneme was shared between the color name and the distractor than when the initial phonemes were different, thereby showing a phoneme-based phonological Stroop effect (Experiment 1). In contrast, no such effect was observed when the same distractors were presented in Katakana, replicating Verdonschot and Kinoshita's original results (Experiment 2). A phoneme-based effect was again found when the Katakana distractors used in Verdonschot and Kinoshita's original study were transcribed and presented in Romaji (Experiment 3). Because the observation of a phonemic effect directly depended on the orthographic properties of the distractor stimuli, we conclude that the phonological Stroop task is also susceptible to orthographic influences.
  • Yoshihara, M., Nakayama, M., Verdonschot, R. G., & Hino, Y. (2020). The influence of orthography on speech production: Evidence from masked priming in word-naming and picture-naming tasks. Journal of Experimental Psychology: Learning, Memory, and Cognition, 46(8), 1570-1589. doi:10.1037/xlm0000829.

    Abstract

    In a masked priming word-naming task, facilitation due to initial-segmental sound overlap for 2-character kanji prime-target pairs was affected by certain orthographic properties (Yoshihara, Nakayama, Verdonschot, & Hino, 2017). That is, the facilitation that was due to the initial mora overlap occurred only when the mora was the whole pronunciation of the initial kanji characters (i.e., match pairs; e.g., /ka-se.ki/-/ka-rjo.ku/). When the shared initial mora was only a part of the kanji characters' readings, however, there was no facilitation (i.e., mismatch pairs; e.g., /ha.tu-a.N/-/ha.ku-bu.tu/). In the present study, we used a masked priming picture-naming task to investigate whether the previous results were relevant only when the orthography of targets is visually presented. In Experiment 1, the main findings of our word-naming task were fully replicated in a picture-naming task. In Experiments 2 and 3, the absence of facilitation for the mismatch pairs was confirmed with a new set of stimuli. On the other hand, significant facilitation was observed for the match pairs that shared the 2 initial morae (Experiment 4), which was again consistent with the results of our word-naming study. These results suggest that orthographic properties constrain the phonological expression of masked priming for kanji words across 2 tasks that are likely to differ in how phonology is retrieved. Specifically, we propose that the orthography of a word is activated online and constrains the phonological encoding processes in these tasks.
  • Yoshihara, M., Nakayama, M., Verdonschot, R. G., & Hino, Y. (2017). The phonological unit of Japanese Kanji compounds: A masked priming investigation. Journal of Experimental Psychology: Human Perception and Performance, 43(7), 1303-1328. doi:10.1037/xhp0000374.

    Abstract

    Using the masked priming paradigm, we examined which phonological unit is used when naming Kanji compounds. Although the phonological unit in the Japanese language has been suggested to be the mora, Experiment 1 found no priming for mora-related Kanji prime-target pairs. In Experiment 2, significant priming was only found when Kanji pairs shared the whole sound of their initial Kanji characters. Nevertheless, when the same Kanji pairs used in Experiment 2 were transcribed into Kana, significant mora priming was observed in Experiment 3. In Experiment 4, matching the syllable structure and pitch-accent of the initial Kanji characters did not lead to mora priming, ruling out potential alternative explanations for the earlier absence of the effect. A significant mora priming effect was observed, however, when the shared initial mora constituted the whole sound of their initial Kanji characters in Experiment 5. Lastly, these results were replicated in Experiment 6. Overall, these results indicate that the phonological unit involved when naming Kanji compounds is not the mora but the whole sound of each Kanji character. We discuss how different phonological units may be involved when processing Kanji and Kana words as well as the implications for theories dealing with language production processes.
  • Zaadnoordijk, L., Buckler, H., Cusack, R., Tsuji, S., & Bergmann, C. (2021). A global perspective on testing infants online: Introducing ManyBabies-AtHome. Frontiers in Psychology, 12: 703234. doi:10.3389/fpsyg.2021.703234.

    Abstract

    Online testing holds great promise for infant scientists. It could increase participant diversity, improve reproducibility and collaborative possibilities, and reduce costs for researchers and participants. However, despite the rise of platforms and participant databases, little work has been done to overcome the challenges of making this approach available to researchers across the world. In this paper, we elaborate on the benefits of online infant testing from a global perspective and identify challenges for the international community that have been outside of the scope of previous literature. Furthermore, we introduce ManyBabies-AtHome, an international, multi-lab collaboration that is actively working to facilitate practical and technical aspects of online testing as well as address ethical concerns regarding data storage and protection, and cross-cultural variation. The ultimate goal of this collaboration is to improve the method of testing infants online and make it globally available.
  • Yu, C., Zhang, Y., Slone, L. K., & Smith, L. B. (2021). The infant’s view redefines the problem of referential uncertainty in early word learning. Proceedings of the National Academy of Sciences of the United States of America, 118(52): e2107019118. doi:10.1073/pnas.2107019118.

    Abstract

    The learning of first object names is deemed a hard problem due to the uncertainty inherent in mapping a heard name to the intended referent in a cluttered and variable world. However, human infants readily solve this problem. Despite considerable theoretical discussion, relatively little is known about the uncertainty infants face in the real world. We used head-mounted eye tracking during parent–infant toy play and quantified the uncertainty by measuring the distribution of infant attention to the potential referents when a parent named both familiar and unfamiliar toy objects. The results show that infant gaze upon hearing an object name is often directed to a single referent which is equally likely to be a wrong competitor or the intended target. This bimodal gaze distribution clarifies and redefines the uncertainty problem and constrains possible solutions.
  • Zhang, Y., Yurovsky, D., & Yu, C. (2021). Cross-situational learning from ambiguous egocentric input is a continuous process: Evidence using the human simulation paradigm. Cognitive Science, 45(7): e13010. doi:10.1111/cogs.13010.

    Abstract

    Recent laboratory experiments have shown that both infant and adult learners can acquire word-referent mappings using cross-situational statistics. The vast majority of the work on this topic has used unfamiliar objects presented on neutral backgrounds as the visual contexts for word learning. However, these laboratory contexts are much different than the real-world contexts in which learning occurs. Thus, the feasibility of generalizing cross-situational learning beyond the laboratory is in question. Adapting the Human Simulation Paradigm, we conducted a series of experiments examining cross-situational learning from children's egocentric videos captured during naturalistic play. Focusing on individually ambiguous naming moments that naturally occur during toy play, we asked how statistical learning unfolds in real time through accumulating cross-situational statistics in naturalistic contexts. We found that even when learning situations were individually ambiguous, learners' performance gradually improved over time. This improvement was driven in part by learners' use of partial knowledge acquired from previous learning situations, even when they had not yet discovered correct word-object mappings. These results suggest that word learning is a continuous process by means of real-time information integration.
  • Zhang, Y., Chen, C.-h., & Yu, C. (2019). Mechanisms of cross-situational learning: Behavioral and computational evidence. In Advances in Child Development and Behavior; vol. 56 (pp. 37-63).

    Abstract

    Word learning happens in everyday contexts with many words and many potential referents for those words in view at the same time. It is challenging for young learners to find the correct referent upon hearing an unknown word in the moment. This problem of referential uncertainty has been deemed the crux of early word learning (Quine, 1960). Recent empirical and computational studies have found support for a statistical solution to the problem, termed cross-situational learning. Cross-situational learning allows learners to acquire word meanings across multiple exposures, even though each individual exposure is referentially uncertain. Recent empirical research shows that infants, children and adults rely on cross-situational learning to learn new words (Smith & Yu, 2008; Suanda, Mugwanya, & Namy, 2014; Yu & Smith, 2007). However, researchers have found evidence supporting two very different theoretical accounts of the learning mechanisms: Hypothesis Testing (Gleitman, Cassidy, Nappa, Papafragou, & Trueswell, 2005; Markman, 1992) and Associative Learning (Frank, Goodman, & Tenenbaum, 2009; Yu & Smith, 2007). Hypothesis Testing is generally characterized as a form of learning in which a coherent hypothesis regarding a specific word-object mapping is formed, often in conceptually constrained ways. The hypothesis is then either accepted or rejected in light of additional evidence. Proponents of the Associative Learning framework, in contrast, characterize learning as the aggregation of information over time through implicit associative mechanisms. A learner acquires the meaning of a word when the association between the word and the referent becomes relatively strong. In this chapter, we consider these two psychological theories in the context of cross-situational word-referent learning. By reviewing recent empirical and cognitive modeling studies, we aim to deepen our understanding of the underlying word learning mechanisms by examining and comparing the two theoretical accounts.
  • Zhen, Z., Kong, X., Huang, L., Yang, Z., Wang, X., Hao, X., Huang, T., Song, Y., & Liu, J. (2017). Quantifying the variability of scene-selective regions: Interindividual, interhemispheric, and sex differences. Human Brain Mapping, 38(4), 2260-2275. doi:10.1002/hbm.23519.

    Abstract

    Scene-selective regions (SSRs), including the parahippocampal place area (PPA), retrosplenial cortex (RSC), and transverse occipital sulcus (TOS), are among the most widely characterized functional regions in the human brain. However, previous studies have mostly focused on the commonality within each SSR, providing little information on different aspects of their variability. In a large group of healthy adults (N = 202), we used functional magnetic resonance imaging to investigate different aspects of topographical and functional variability within SSRs, including interindividual, interhemispheric, and sex differences. First, the PPA, RSC, and TOS were delineated manually for each individual. We then demonstrated that SSRs showed substantial interindividual variability in both spatial topography and functional selectivity. We further identified consistent interhemispheric differences in the spatial topography of all three SSRs, but distinct interhemispheric differences in scene selectivity. Moreover, we found that all three SSRs showed stronger scene selectivity in men than in women. In summary, our work thoroughly characterized the interindividual, interhemispheric, and sex variability of the SSRs and invites future work on the origin and functional significance of these variabilities. Additionally, we constructed the first probabilistic atlases for the SSRs, which provide the detailed anatomical reference for further investigations of the scene network.
  • Zheng, X., Roelofs, A., & Lemhöfer, K. (2020). Language selection contributes to intrusion errors in speaking: Evidence from picture naming. Bilingualism: Language and Cognition, 23, 788-800. doi:10.1017/S1366728919000683.

    Abstract

    Bilinguals usually select the right language to speak for the particular context they are in, but sometimes the nontarget language intrudes. Despite a large body of research into language selection and language control, it remains unclear where intrusion errors originate from. These errors may be due to incorrect selection of the nontarget language at the conceptual level, or be a consequence of erroneous word selection (despite correct language selection) at the lexical level. We examined the former possibility in two language switching experiments using a manipulation that supposedly affects language selection on the conceptual level, namely whether the conversational language context was associated with the target language (congruent) or with the alternative language (incongruent) on a trial. Both experiments showed that language intrusion errors occurred more often in incongruent than in congruent contexts, providing converging evidence that language selection during concept preparation is one driving force behind language intrusion.
  • Zheng, X., Roelofs, A., Erkan, H., & Lemhöfer, K. (2020). Dynamics of inhibitory control during bilingual speech production: An electrophysiological study. Neuropsychologia, 140: 107387. doi:10.1016/j.neuropsychologia.2020.107387.

    Abstract

    Bilingual speakers have to control their languages to avoid interference, which may be achieved by enhancing the target language and/or inhibiting the nontarget language. Previous research suggests that bilinguals use inhibition (e.g., Jackson et al., 2001), which should be reflected in the N2 component of the event-related potential (ERP) in the EEG. In the current study, we investigated the dynamics of inhibitory control by measuring the N2 during language switching and repetition in bilingual picture naming. Participants had to name pictures in Dutch or English depending on the cue. A run of same-language trials could be short (two or three trials) or long (five or six trials). We assessed whether RTs and N2 changed over the course of same-language runs, and at a switch between languages. Results showed that speakers named pictures more quickly late as compared to early in a run of same-language trials. Moreover, they made a language switch more quickly after a long run than after a short run. This run-length effect was only present in the first language (L1), not in the second language (L2). In ERPs, we observed a widely distributed switch effect in the N2, which was larger after a short run than after a long run. This effect was only present in the L2, not in the L1, although the difference was not significant between languages. In contrast, the N2 was not modulated during a same-language run. Our results suggest that the nontarget language is inhibited at a switch, but not during the repeated use of the target language.

  • Zheng, X., & Lemhöfer, K. (2019). The “semantic P600” in second language processing: When syntax conflicts with semantics. Neuropsychologia, 127, 131-147. doi:10.1016/j.neuropsychologia.2019.02.010.

    Abstract

    In sentences like “the mouse that chased the cat was hungry”, the syntactically correct interpretation (the mouse chases the cat) is contradicted by semantic and pragmatic knowledge. Previous research has shown that L1 speakers sometimes base sentence interpretation on this type of knowledge (so-called “shallow” or “good-enough” processing). We made use of both behavioural and ERP measurements to investigate whether L2 learners differ from native speakers in the extent to which they engage in “shallow” syntactic processing. German learners of Dutch as well as Dutch native speakers read sentences containing relative clauses (as in the example above) for which the plausible thematic roles were or were not reversed, and made plausibility judgments. The results show that, behaviourally, L2 learners had more difficulty than native speakers in discriminating plausible from implausible sentences. In the ERPs, we replicated the previously reported finding of a “semantic P600” for semantic reversal anomalies in native speakers, probably reflecting the effort to resolve the syntax-semantics conflict. In L2 learners, though, this P600 was largely attenuated and surfaced only in those trials that were judged correctly for plausibility. These results generally point to a more prevalent, though not exclusive, occurrence of shallow syntactic processing in L2 learners.
  • Zhong, S., Wei, L., Zhao, C., Yang, L., Di, Z., Francks, C., & Gong, G. (2021). Interhemispheric relationship of genetic influence on human brain connectivity. Cerebral Cortex, 31(1), 77-88. doi:10.1093/cercor/bhaa207.

    Abstract

    To understand the origins of interhemispheric differences and commonalities/coupling in human brain wiring, it is crucial to determine how homologous interregional connectivities of the left and right hemispheres are genetically determined and related. To address this, in the present study, we analyzed human twin and pedigree samples with high-quality diffusion magnetic resonance imaging tractography and estimated the heritability and genetic correlation of homologous left and right white matter (WM) connections. The results showed that the heritability of WM connectivity was similar and coupled between the two hemispheres and that the degree of overlap in genetic factors underlying homologous WM connectivity (i.e., interhemispheric genetic correlation) varied substantially across the human brain: from complete overlap to complete nonoverlap. In particular, the heritability was significantly stronger, and the chance of complete interhemispheric overlap in genetic factors higher, in subcortical WM connections than in cortical WM connections. In addition, the heritability and interhemispheric genetic correlations were stronger for long-range connections than for short-range connections. These findings highlight the genetic determinants of WM connectivity and its interhemispheric relationships, and provide insight into the genetic basis of WM connectivity asymmetries in both healthy and disease states.

    Additional information

    Supplementary data
  • Zhou, W., Broersma, M., & Cutler, A. (2021). Asymmetric memory for birth language perception versus production in young international adoptees. Cognition, 213: 104788. doi:10.1016/j.cognition.2021.104788.

    Abstract

    Adults who as children were adopted into a different linguistic community retain knowledge of their birth language. The possession (without awareness) of such knowledge is known to facilitate the (re)learning of birth-language speech patterns; this perceptual learning predicts such adults' production success as well, indicating that the retained linguistic knowledge is abstract in nature. Adoptees' acquisition of their adopted language is fast and complete; birth-language mastery disappears rapidly, although this latter process has been little studied. Here, 46 international adoptees from China aged four to 10 years, with Dutch as their new language, plus 47 matched non-adopted Dutch-native controls and 40 matched non-adopted Chinese controls, undertook across a two-week period 10 blocks of training in perceptually identifying Chinese speech contrasts (one segmental, one tonal) which were unlike any Dutch contrasts. Chinese controls easily accomplished all these tasks. The same participants also provided speech production data in an imitation task. In perception, adoptees and Dutch controls scored equivalently poorly at the outset of training; with training, the adoptees significantly improved while the Dutch controls did not. In production, adoptees' imitations both before and after training could be better identified, and received higher goodness ratings, than those of Dutch controls. The perception results confirm that birth-language knowledge is stored and can facilitate re-learning in post-adoption childhood; the production results suggest that although processing of phonological category detail appears to depend on access to the stored knowledge, general articulatory dimensions can at this age also still be remembered, and may facilitate spoken imitation.

    Additional information

    stimulus materials
  • Zhu, Z., Bastiaansen, M. C. M., Hakun, J. G., Petersson, K. M., Wang, S., & Hagoort, P. (2019). Semantic unification modulates N400 and BOLD signal change in the brain: A simultaneous EEG-fMRI study. Journal of Neurolinguistics, 52: 100855. doi:10.1016/j.jneuroling.2019.100855.

    Abstract

    Semantic unification during sentence comprehension has been associated with amplitude change of the N400 in event-related potential (ERP) studies, and with activation in the left inferior frontal gyrus (IFG) in functional magnetic resonance imaging (fMRI) studies. However, the specificity of this activation to semantic unification remains unknown. To more closely examine the brain processes involved in semantic unification, we employed simultaneous EEG-fMRI to time-lock the semantic-unification-related N400 change, and integrated trial-by-trial variation in both N400 and BOLD change beyond the condition-level BOLD change difference measured in traditional fMRI analyses. Participants read sentences in which semantic unification load was parametrically manipulated by varying cloze probability. Separately, the ERP and fMRI results replicated previous findings, in that semantic unification load parametrically modulated the amplitude of the N400 and cortical activation. Integrated EEG-fMRI analyses revealed a different pattern, in which functional activity in the left IFG and bilateral supramarginal gyrus (SMG) was associated with N400 amplitude, with the left IFG activation and bilateral SMG activation being selective to the condition-level and trial-level semantic unification load, respectively. By employing integrated EEG-fMRI analyses, this study is among the first to shed light on how trial-level variation can be integrated in the study of language comprehension.
  • Zimianiti, E. (2021). Adjective-noun constructions in Griko: Focusing on measuring adjectives and their placement in the nominal domain. LingUU Journal, 5(2), 62-75.

    Abstract

    This paper examines adjectival placement in Griko, an Italian-Greek language variety. Guardiano and Stavrou (2019, 2014) have argued that there is a gap in the evidence on the diachrony of adjectives in prenominal position, and in particular of measuring adjectives. Evidence contradicting these claims is presented in this paper. After considering the placement of adjectives in Greek and Italian, and their similarities and differences, the adjectival pattern of Griko is analysed. The analysis is based mostly on written data from the early 20th century, demonstrating the prenominal position of adjectives and adding to the diachronic schema of adjectival placement in Griko.
  • Zinken, J., Kaiser, J., Weidner, M., Mondada, L., Rossi, G., & Sorjonen, M.-L. (2021). Rule talk: Instructing proper play with impersonal deontic statements. Frontiers in Communication, 6: 660394. doi:10.3389/fcomm.2021.660394.

    Abstract

    The present paper explores how rules are enforced and talked about in everyday life. Drawing on a corpus of board game recordings across European languages, we identify a sequential and praxeological context for rule talk. After a game rule is breached, a participant enforces proper play and then formulates a rule with an impersonal deontic statement (e.g. ‘It’s not allowed to do this’). Impersonal deontic statements express what may or may not be done without tying the obligation to a particular individual. Our analysis shows that such statements are used as part of multi-unit and multi-modal turns where rule talk is accomplished through both grammatical and embodied means. Impersonal deontic statements serve multiple interactional goals: they account for having changed another’s behavior in the moment and at the same time impart knowledge for the future. We refer to this complex action as an “instruction”. The results of this study advance our understanding of rules and rule-following in everyday life, and of how resources of language and the body are combined to enforce and formulate rules.
  • Zinken, J., Rossi, G., & Reddy, V. (2020). Doing more than expected: Thanking recognizes another's agency in providing assistance. In C. Taleghani-Nikazm, E. Betz, & P. Golato (Eds.), Mobilizing others: Grammar and lexis within larger activities (pp. 253-278). Amsterdam: John Benjamins.

    Abstract

    In informal interaction, speakers rarely thank a person who has complied with a request. Examining data from British English, German, Italian, Polish, and Telugu, we ask when speakers do thank after compliance. The results show that thanking treats the other’s assistance as going beyond what could be taken for granted in the circumstances. Coupled with the rareness of thanking after requests, this suggests that cooperation is to a great extent governed by expectations of helpfulness, which can be long-standing, or built over the course of a particular interaction. The higher frequency of thanking in some languages (such as English or Italian) suggests that cultures differ in the importance they place on recognizing the other’s agency in doing as requested.
  • Zora, H., Rudner, M., & Montell Magnusson, A. (2020). Concurrent affective and linguistic prosody with the same emotional valence elicits a late positive ERP response. European Journal of Neuroscience, 51(11), 2236-2249. doi:10.1111/ejn.14658.

    Abstract

    Change in linguistic prosody generates a mismatch negativity response (MMN), indicating neural representation of linguistic prosody, while change in affective prosody generates a positive response (P3a), reflecting its motivational salience. However, the neural response to concurrent affective and linguistic prosody is unknown. The present paper investigates the integration of these two prosodic features in the brain by examining the neural response to separate and concurrent processing by electroencephalography (EEG). A spoken pair of Swedish words—[ˈfɑ́ːsɛn] phase and [ˈfɑ̀ːsɛn] damn—that differed in emotional semantics due to linguistic prosody was presented to 16 subjects in an angry and neutral affective prosody using a passive auditory oddball paradigm. Acoustically matched pseudowords—[ˈvɑ́ːsɛm] and [ˈvɑ̀ːsɛm]—were used as controls. Following the constructionist concept of emotions, accentuating the conceptualization of emotions based on language, it was hypothesized that concurrent affective and linguistic prosody with the same valence—angry [ˈfɑ̀ːsɛn] damn—would elicit a unique late EEG signature, reflecting the temporal integration of affective voice with emotional semantics of prosodic origin. In accordance, linguistic prosody elicited an MMN at 300–350 ms, and affective prosody evoked a P3a at 350–400 ms, irrespective of semantics. Beyond these responses, concurrent affective and linguistic prosody evoked a late positive component (LPC) at 820–870 ms in frontal areas, indicating the conceptualization of affective prosody based on linguistic prosody. This study provides evidence that the brain does not only distinguish between these two functions of prosody but also integrates them based on language and experience.
  • Zora, H., Riad, T., Ylinen, S., & Csépe, V. (2021). Phonological variations are compensated at the lexical level: Evidence from auditory neural activity. Frontiers in Human Neuroscience, 15: 622904. doi:10.3389/fnhum.2021.622904.

    Abstract

    Dealing with phonological variations is important for speech processing. This article addresses whether phonological variations introduced by assimilatory processes are compensated for at the pre-lexical or lexical level, and whether the nature of variation and the phonological context influence this process. To this end, Swedish nasal regressive place assimilation was investigated using the mismatch negativity (MMN) component. In nasal regressive assimilation, the coronal nasal assimilates to the place of articulation of a following segment, most clearly with a velar or labial place of articulation, as in utan mej “without me” > [ʉːtam mɛjː]. In a passive auditory oddball paradigm, 15 Swedish speakers were presented with Swedish phrases with attested and unattested phonological variations and contexts for nasal assimilation. Attested variations – a coronal-to-labial change as in utan “without” > [ʉːtam] – were contrasted with unattested variations – a labial-to-coronal change as in utom “except” > ∗[ʉːtɔn] – in appropriate and inappropriate contexts created by mej “me” [mɛjː] and dej “you” [dɛjː]. Given that the MMN amplitude depends on the degree of variation between two stimuli, the MMN responses were expected to indicate to what extent the distance between variants was tolerated by the perceptual system. Since the MMN response reflects not only low-level acoustic processing but also higher-level linguistic processes, the results were predicted to indicate whether listeners process assimilation at the pre-lexical and lexical levels. The results indicated no significant interactions across variations, suggesting that variations in phonological forms do not incur any cost in lexical retrieval; hence such variation is compensated for at the lexical level. However, since the MMN response reached significance only for a labial-to-coronal change in a labial context and for a coronal-to-labial change in a coronal context, the compensation might have been influenced by the nature of variation and the phonological context. It is therefore concluded that while assimilation is compensated for at the lexical level, there is also some influence from pre-lexical processing. The present results reveal not only signal-based perception of phonological units, but also higher-level lexical processing, and are thus able to reconcile the bottom-up and top-down models of speech processing.
  • Zora, H., Riad, T., & Ylinen, S. (2019). Prosodically controlled derivations in the mental lexicon. Journal of Neurolinguistics, 52: 100856. doi:10.1016/j.jneuroling.2019.100856.

    Abstract

    Swedish morphemes are classified as prosodically specified or prosodically unspecified, depending on lexical or phonological stress, respectively. Here, we investigate the allomorphy of the suffix -(i)sk, which indicates the distinction between lexical and phonological stress; if attached to a lexically stressed morpheme, it takes a non-syllabic form (-sk), whereas if attached to a phonologically stressed morpheme, an epenthetic vowel is inserted (-isk). Using mismatch negativity (MMN), we explored the neural processing of this allomorphy across lexically stressed and phonologically stressed morphemes. In an oddball paradigm, participants were occasionally presented with congruent and incongruent derivations, created by the suffix -(i)sk, within the repetitive presentation of their monomorphemic stems. The results indicated that the congruent derivation of the lexically stressed stem elicited a larger MMN than the incongruent sequences of the same stem and the derivational suffix, whereas after the phonologically stressed stem a non-significant tendency towards an opposite pattern was observed. We argue that the significant MMN response to the congruent derivation in the lexical stress condition is in line with lexical MMN, indicating a holistic processing of the sequence of lexically stressed stem and derivational suffix. The enhanced MMN response to the incongruent derivation in the phonological stress condition, on the other hand, is suggested to reflect combinatorial processing of the sequence of phonologically stressed stem and derivational suffix. These findings bring a new aspect to the dual-system approach to neural processing of morphologically complex words, namely the specification of word stress.
  • Zora, H., & Csépe, V. (2021). Perception of Prosodic Modulations of Linguistic and Paralinguistic Origin: Evidence From Early Auditory Event-Related Potentials. Frontiers in Neuroscience, 15: 797487. doi:10.3389/fnins.2021.797487.

    Abstract

    How listeners handle prosodic cues of linguistic and paralinguistic origin is a central question for spoken communication. In the present EEG study, we addressed this question by examining neural responses to variations in pitch accent (linguistic) and affective (paralinguistic) prosody in Swedish words, using a passive auditory oddball paradigm. The results indicated that changes in pitch accent and affective prosody elicited mismatch negativity (MMN) responses at around 200 ms, confirming the brain’s pre-attentive response to any prosodic modulation. The MMN amplitude was, however, statistically larger to the deviation in affective prosody in comparison to the deviation in pitch accent and affective prosody combined, which is in line with previous research indicating not only a larger MMN response to affective prosody in comparison to neutral prosody but also a smaller MMN response to multidimensional deviants than unidimensional ones. The results, further, showed a significant P3a response to the affective prosody change in comparison to the pitch accent change at around 300 ms, in accordance with previous findings showing an enhanced positive response to emotional stimuli. The present findings provide evidence for distinct neural processing of different prosodic cues, and statistically confirm the intrinsic perceptual and motivational salience of paralinguistic information in spoken communication.
  • Zormpa, E., Meyer, A. S., & Brehm, L. (2019). Slow naming of pictures facilitates memory for their names. Psychonomic Bulletin & Review, 26(5), 1675-1682. doi:10.3758/s13423-019-01620-x.

    Abstract

    Speakers remember their own utterances better than those of their interlocutors, suggesting that language production is beneficial to memory. This may be partly explained by a generation effect: The act of generating a word is known to lead to a memory advantage (Slamecka & Graf, 1978). In earlier work, we showed a generation effect for recognition of images (Zormpa, Brehm, Hoedemaker, & Meyer, 2019). Here, we tested whether the recognition of their names would also benefit from name generation. Testing whether picture naming improves memory for words was our primary aim, as it serves to clarify whether the representations affected by generation are visual or conceptual/lexical. A secondary aim was to assess the influence of processing time on memory. Fifty-one participants named pictures in three conditions: after hearing the picture name (identity condition), backward speech, or an unrelated word. A day later, recognition memory was tested in a yes/no task. Memory in the backward speech and unrelated conditions, which required generation, was superior to memory in the identity condition, which did not require generation. The time taken by participants for naming was a good predictor of memory, such that words that took longer to be retrieved were remembered better. Importantly, that was the case only when generation was required: In the no-generation (identity) condition, processing time was not related to recognition memory performance. This work has shown that generation affects conceptual/lexical representations, making an important contribution to the understanding of the relationship between memory and language.
  • Zormpa, E., Brehm, L., Hoedemaker, R. S., & Meyer, A. S. (2019). The production effect and the generation effect improve memory in picture naming. Memory, 27(3), 340-352. doi:10.1080/09658211.2018.1510966.

    Abstract

    The production effect (better memory for words read aloud than words read silently) and the picture superiority effect (better memory for pictures than words) both improve item memory in a picture naming task (Fawcett, J. M., Quinlan, C. K., & Taylor, T. L. (2012). Interplay of the production and picture superiority effects: A signal detection analysis. Memory (Hove, England), 20(7), 655–666. doi:10.1080/09658211.2012.693510). Because picture naming requires coming up with an appropriate label, the generation effect (better memory for generated than read words) may contribute to the latter effect. In two forced-choice memory experiments, we tested the role of generation in a picture naming task on later recognition memory. In Experiment 1, participants named pictures silently or aloud with the correct name or an unreadable label superimposed. We observed a generation effect, a production effect, and an interaction between the two. In Experiment 2, unreliable labels were included to ensure full picture processing in all conditions. In this experiment, we observed a production and a generation effect but no interaction, implying the effects are dissociable. This research demonstrates the separable roles of generation and production in picture naming and their impact on memory. As such, it informs the link between memory and language production and has implications for memory asymmetries between language production and comprehension.

    Additional information

    pmem_a_1510966_sm9257.pdf
  • De Zubicaray, G., & Fisher, S. E. (2017). Genes, Brain, and Language: A brief introduction to the Special Issue. Brain and Language, 172, 1-2. doi:10.1016/j.bandl.2017.08.003.
  • Zuidema, W., French, R. M., Alhama, R. G., Ellis, K., O'Donnell, T. J. O., Sainburgh, T., & Gentner, T. Q. (2020). Five ways in which computational modeling can help advance cognitive science: Lessons from artificial grammar learning. Topics in Cognitive Science, 12(3), 925-941. doi:10.1111/tops.12474.

    Abstract

    There is a rich tradition of building computational models in cognitive science, but modeling, theoretical, and experimental research are not as tightly integrated as they could be. In this paper, we show that computational techniques—even simple ones that are straightforward to use—can greatly facilitate designing, implementing, and analyzing experiments, and generally help lift research to a new level. We focus on the domain of artificial grammar learning, and we give five concrete examples in this domain for (a) formalizing and clarifying theories, (b) generating stimuli, (c) visualization, (d) model selection, and (e) exploring the hypothesis space.
  • Zuidema, W., & Fitz, H. (2019). Key issues and future directions: Models of human language and speech processing. In P. Hagoort (Ed.), Human language: From genes and brain to behavior (pp. 353-358). Cambridge, MA: MIT Press.
  • Zwitserlood, I. (2009). Het Corpus NGT. Levende Talen Magazine, 6, 44-45.

    Abstract

    The Corpus NGT
  • Zwitserlood, I. (2009). Het Corpus NGT en de dagelijkse lespraktijk (1). Levende Talen Magazine, 8, 40-41.
