Publications

  • van Hell, J. G., & Witteman, M. J. (2009). The neurocognition of switching between languages: A review of electrophysiological studies. In L. Isurin, D. Winford, & K. de Bot (Eds.), Multidisciplinary approaches to code switching (pp. 53-84). Philadelphia: John Benjamins.

    Abstract

The seemingly effortless switching between languages and the merging of two languages into a coherent utterance is a hallmark of bilingual language processing, and reveals the flexibility of human speech and skilled cognitive control. That skill appears to be available not only to speakers when they produce language-switched utterances, but also to listeners and readers when presented with mixed language information. In this chapter, we review electrophysiological studies in which Event-Related Potentials (ERPs) are derived from recordings of brain activity to examine the neurocognitive aspects of comprehending and producing mixed language. Topics we discuss include the time course of brain activity associated with language switching between single stimuli and language switching of words embedded in a meaningful sentence context. The majority of ERP studies report that switching between languages incurs neurocognitive costs, but, more interestingly, ERP patterns differ as a function of L2 proficiency and the amount of daily experience with language switching, the direction of switching (switching into L2 is typically associated with higher switching costs than switching into L1), the type of language switching task, and the predictability of the language switch. Finally, we outline some future directions for this relatively new approach to the study of language switching.
  • Van Gijn, R. (2009). The phonology of mixed languages. Journal of Pidgin and Creole Languages, 24(1), 91-117. doi:10.1075/jpcl.24.1.04gij.

    Abstract

    Mixed languages are said to be the result of a process of intertwining (e.g. Bakker & Muysken 1995, Bakker 1997), a regular process in which the grammar of one language is combined with the lexicon of another. However, the outcome of this process differs from language pair to language pair. As far as morphosyntax is concerned, people have discussed these different outcomes and the reasons for them extensively, e.g. Bakker 1997 for Michif, Mous 2003 for Ma’a, Muysken 1997a for Media Lengua and 1997b for Callahuaya. The issue of phonology, however, has not generated a large debate. This paper compares the phonological systems of the mixed languages Media Lengua, Callahuaya, Mednyj Aleut, and Michif. It will be argued that the outcome of the process of intertwining, as far as phonology is concerned, is at least partly determined by the extent to which unmixed phonological domains exist.
  • Van Rhijn, J. R. (2019). The role of FoxP2 in striatal circuitry. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Van Berkum, J. J. A., Hagoort, P., & Brown, C. M. (2000). The use of referential context and grammatical gender in parsing: A reply to Brysbaert and Mitchell. Journal of Psycholinguistic Research, 29(5), 467-481. doi:10.1023/A:1005168025226.

    Abstract

Based on the results of an event-related brain potentials (ERP) experiment (van Berkum, Brown, & Hagoort, 1999a, b), we have recently argued that discourse-level referential context can be taken into account extremely rapidly by the parser. Moreover, our ERP results indicated that local grammatical gender information, although available within a few hundred milliseconds from word onset, is not always used quickly enough to prevent the parser from considering a discourse-supported, but agreement-violating, syntactic analysis. In a comment on our work, Brysbaert and Mitchell (2000) have raised concerns about the methodology of our ERP experiment and have challenged our interpretation of the results. In this reply, we argue that these concerns are unwarranted and that, in contrast to our own interpretation, the alternative explanations provided by Brysbaert and Mitchell do not account for the full pattern of ERP results.
  • Van Herpt, C., Van der Meulen, M., & Redl, T. (2019). Voorbeeldzinnen kunnen het goede voorbeeld geven. Levende Talen Magazine, 106(4), 18-21.
  • Van Dijk, C. N., Van Wonderen, E., Koutamanis, E., Kootstra, G. J., Dijkstra, T., & Unsworth, S. (2022). Cross-linguistic influence in simultaneous and early sequential bilingual children: A meta-analysis. Journal of Child Language, 49(5), 897-929. doi:10.1017/S0305000921000337.

    Abstract

    Although cross-linguistic influence at the level of morphosyntax is one of the most intensively studied topics in child bilingualism, the circumstances under which it occurs remain unclear. In this meta-analysis, we measured the effect size of cross-linguistic influence and systematically assessed its predictors in 750 simultaneous and early sequential bilingual children in 17 unique language combinations across 26 experimental studies. We found a significant small to moderate average effect size of cross-linguistic influence, indicating that cross-linguistic influence is part and parcel of bilingual development. Language dominance, operationalized as societal language, was a significant predictor of cross-linguistic influence, whereas surface overlap, language domain and age were not. Perhaps an even more important finding was that definitions and operationalisations of cross-linguistic influence and its predictors varied considerably between studies. This could explain the absence of a comprehensive theory in the field. To solve this issue, we argue for a more uniform method of studying cross-linguistic influence.
  • Vanden Bosch der Nederlanden, C. M., Joanisse, M. F., Grahn, J. A., Snijders, T. M., & Schoffelen, J.-M. (2022). Familiarity modulates neural tracking of sung and spoken utterances. NeuroImage, 252: 119049. doi:10.1016/j.neuroimage.2022.119049.

    Abstract

Music is often described in the laboratory and in the classroom as a beneficial tool for memory encoding and retention, with a particularly strong effect when words are sung to familiar compared to unfamiliar melodies. However, the neural mechanisms underlying this memory benefit, especially for benefits related to familiar music, are not well understood. The current study examined whether neural tracking of the slow syllable rhythms of speech and song is modulated by melody familiarity. Participants became familiar with twelve novel melodies over four days prior to MEG testing. Neural tracking of the same utterances spoken and sung revealed greater cerebro-acoustic phase coherence for sung compared to spoken utterances, but did not show an effect of familiar melody when stimuli were grouped by their assigned (trained) familiarity. When participants' subjective ratings of perceived familiarity during the MEG testing session were used to group stimuli, however, a large effect of familiarity was observed. This effect was not specific to song, as it was observed in both sung and spoken utterances. Exploratory analyses revealed some in-session learning of unfamiliar and spoken utterances, with increased neural tracking for untrained stimuli by the end of the MEG testing session. Our results indicate that top-down factors like familiarity are strong modulators of neural tracking for music and language. Participants' neural tracking was related to their perception of familiarity, which was likely driven by a combination of effects from repeated listening, stimulus-specific melodic simplicity, and individual differences. Beyond simply the acoustic features of music, top-down factors built into the music listening experience, like repetition and familiarity, play a large role in the way we attend to and encode information presented in a musical context.

    Additional information

    supplementary materials
  • Varma, S., Takashima, A., Fu, L., & Kessels, R. P. C. (2019). Mindwandering propensity modulates episodic memory consolidation. Aging Clinical and Experimental Research, 31(11), 1601-1607. doi:10.1007/s40520-019-01251-1.

    Abstract

    Research into strategies that can combat episodic memory decline in healthy older adults has gained widespread attention over the years. Evidence suggests that a short period of rest immediately after learning can enhance memory consolidation, as compared to engaging in cognitive tasks. However, a recent study in younger adults has shown that post-encoding engagement in a working memory task leads to the same degree of memory consolidation as from post-encoding rest. Here, we tested whether this finding can be extended to older adults. Using a delayed recognition test, we compared the memory consolidation of word–picture pairs learned prior to 9 min of rest or a 2-Back working memory task, and examined its relationship with executive functioning and mindwandering propensity. Our results show that (1) similar to younger adults, memory for the word–picture associations did not differ when encoding was followed by post-encoding rest or 2-Back task and (2) older adults with higher mindwandering propensity retained more word–picture associations encoded prior to rest relative to those encoded prior to the 2-Back task, whereas participants with lower mindwandering propensity had better memory performance for the pairs encoded prior to the 2-Back task. Overall, our results indicate that the degree of episodic memory consolidation during both active and passive post-encoding periods depends on individual mindwandering tendency.

    Additional information

    Supplementary material
  • Vartiainen, J., Aggujaro, S., Lehtonen, M., Hulten, A., Laine, M., & Salmelin, R. (2009). Neural dynamics of reading morphologically complex words. NeuroImage, 47, 2064-2072. doi:10.1016/j.neuroimage.2009.06.002.

    Abstract

Despite considerable research interest, it is still an open issue as to how morphologically complex words such as "car+s" are represented and processed in the brain. We studied the neural correlates of the processing of inflected nouns in the morphologically rich Finnish language. Previous behavioral studies in Finnish have yielded a robust inflectional processing cost, i.e., inflected words are harder to recognize than otherwise matched morphologically simple words. Theoretically this effect could stem either from decomposition of inflected words into a stem and a suffix at input level and/or from subsequent recombination at the semantic–syntactic level to arrive at an interpretation of the word. To shed light on this issue, we used magnetoencephalography to reveal the time course and localization of neural effects of morphological structure and frequency of written words. Ten subjects silently read high- and low-frequency Finnish words in inflected and monomorphemic form. Morphological complexity was accompanied by stronger and longer-lasting activation of the left superior temporal cortex from 200 ms onwards. Earlier effects of morphology were not found, supporting the view that the well-established behavioral processing cost for inflected words stems from the semantic–syntactic level rather than from early decomposition. Since the effect of morphology was detected throughout the range of word frequencies employed, the majority of inflected Finnish words appears to be represented in decomposed form and only very high-frequency inflected words may acquire full-form representations.
  • Verdonschot, R. G., Tokimoto, S., & Miyaoka, Y. (2019). The fundamental phonological unit of Japanese word production: An EEG study using the picture-word interference paradigm. Journal of Neurolinguistics, 51, 184-193. doi:10.1016/j.jneuroling.2019.02.004.

    Abstract

    It has been shown that in Germanic languages (e.g. English, Dutch) phonemes are the primary (or proximate) planning units during the early stages of phonological encoding. Contrastingly, in Chinese and Japanese the phoneme does not seem to play an important role but rather the syllable (Chinese) and mora (Japanese) are essential. However, despite the lack of behavioral evidence, neurocorrelational studies in Chinese suggested that electrophysiological brain responses (i.e. preceding overt responses) may indicate some significance for the phoneme. We investigated this matter in Japanese and our data shows that unlike in Chinese (for which the literature shows mixed effects), in Japanese both the behavioral and neurocorrelational data indicate an important role only for the mora (and not the phoneme) during the early stages of phonological encoding.
  • Verdonschot, R. G., Phương, H. T. L., & Tamaoka, K. (2022). Phonological encoding in Vietnamese: An experimental investigation. Quarterly Journal of Experimental Psychology, 75(7), 1355-1366. doi:10.1177/17470218211053244.

    Abstract

    In English, Dutch, and other Germanic languages the initial phonological unit used in word production has been shown to be the phoneme; conversely, others have revealed that in Chinese this is the atonal syllable and in Japanese the mora. The current paper is, to our knowledge, the first to report chronometric data on Vietnamese phonological encoding. Vietnamese, a tonal language, is of interest as, despite its Austroasiatic roots, it has clear similarities with Chinese through extended contact over a prolonged period. Four experiments (i.e., masked priming, phonological Stroop, picture naming with written distractors, picture naming with auditory distractors) have been conducted to investigate Vietnamese phonological encoding. Results show that in all four experiments both onset effects as well as whole syllable effects emerge. This indicates that the fundamental phonological encoding unit during Vietnamese language production is the phoneme despite its apparent similarities to Chinese. This result might have emerged due to tone assignment being a qualitatively different process in Vietnamese compared to Chinese.
  • Verga, L., Sroka, M. G. U., Varola, M., Villanueva, S., & Ravignani, A. (2022). Spontaneous rhythm discrimination in a mammalian vocal learner. Biology Letters, 18: 20220316. doi:10.1098/rsbl.2022.0316.

    Abstract

Rhythm and vocal production learning are building blocks of human music and speech. Vocal learning has been hypothesized as a prerequisite for rhythmic capacities. Yet, no mammalian vocal learner but humans has shown the capacity to flexibly and spontaneously discriminate rhythmic patterns. Here we tested untrained rhythm discrimination in a mammalian vocal learning species, the harbour seal (Phoca vitulina). Twenty wild-born seals were exposed to music-like playbacks of conspecific call sequences varying in basic rhythmic properties: call length, sequence regularity, and overall tempo. All three features significantly influenced seals' reactions (number of looks and their duration), demonstrating spontaneous rhythm discrimination in a vocal learning mammal. This finding supports the rhythm–vocal learning hypothesis and showcases pinnipeds as promising models for comparative research on rhythmic phylogenies.
  • Verga, L., & Kotz, S. A. (2019). Putting language back into ecological communication contexts. Language, Cognition and Neuroscience, 34(4), 536-544. doi:10.1080/23273798.2018.1506886.

    Abstract

Language is a multi-faceted form of communication. It is not until recently though that language research moved on from simple stimuli and protocols toward a more ecologically valid approach, namely "shifting" from words and simple sentences to stories with varying degrees of contextual complexity. While much needed, the use of ecologically valid stimuli such as stories should also be explored in interactive rather than individualistic experimental settings leading the way to an interactive neuroscience of language. Indeed, mounting evidence suggests that cognitive processes and their underlying neural activity significantly differ between social and individual experiences. We aim at reviewing evidence, which indicates that the characteristics of linguistic and extra-linguistic contexts may significantly influence communication, including spoken language comprehension. In doing so, we provide evidence on the use of new paradigms and methodological advancements that may enable the study of complex language features in a truly interactive, ecological way.
  • Verga, L., & Kotz, S. A. (2019). Spatial attention underpins social word learning in the right fronto-parietal network. NeuroImage, 195, 165-173. doi:10.1016/j.neuroimage.2019.03.071.

    Abstract

    In a multi- and inter-cultural world, we daily encounter new words. Adult learners often rely on a situational context to learn and understand a new word's meaning. Here, we explored whether interactive learning facilitates word learning by directing the learner's attention to a correct new word referent when a situational context is non-informative. We predicted larger involvement of inferior parietal, frontal, and visual cortices involved in visuo-spatial attention during interactive learning. We scanned participants while they played a visual word learning game with and without a social partner. As hypothesized, interactive learning enhanced activity in the right Supramarginal Gyrus when the situational context provided little information. Activity in the right Inferior Frontal Gyrus during interactive learning correlated with post-scanning behavioral test scores, while these scores correlated with activity in the Fusiform Gyrus in the non-interactive group. These results indicate that attention is involved in interactive learning when the situational context is minimal and suggest that individual learning processes may be largely different from interactive ones. As such, they challenge the ecological validity of what we know about individual learning and advocate the exploration of interactive learning in naturalistic settings.
  • Verhagen, J., & Schimke, S. (2009). Differences or fundamental differences? Zeitschrift für Sprachwissenschaft, 28(1), 97-106. doi:10.1515/ZFSW.2009.011.
  • Verhagen, J. (2009). Finiteness in Dutch as a second language. PhD Thesis, VU University, Amsterdam.
  • Verhagen, J. (2009). Light verbs and the acquisition of finiteness and negation in Dutch as a second language. In C. Dimroth, & P. Jordens (Eds.), Functional categories in learner language (pp. 203-234). Berlin: Mouton de Gruyter.
  • Verhagen, J. (2009). Temporal adverbials, negation and finiteness in Dutch as a second language: A scope-based account. IRAL, 47(2), 209-237. doi:10.1515/iral.2009.009.

    Abstract

    This study investigates the acquisition of post-verbal (temporal) adverbials and post-verbal negation in L2 Dutch. It is based on previous findings for L2 French that post-verbal negation poses less of a problem for L2 learners than post-verbal adverbial placement (Hawkins, Towell, Bazergui, Second Language Research 9: 189-233, 1993; Herschensohn, Minimally raising the verb issue: 325-336, Cascadilla Press, 1998). The current data show that, at first sight, Moroccan and Turkish learners of Dutch also have fewer problems with post-verbal negation than with post-verbal adverbials. However, when a distinction is made between different types of adverbials, it seems that this holds for adverbials of position such as 'today' but not for adverbials of contrast such as 'again'. To account for this difference, it is argued that different types of adverbial occupy different positions in the L2 data for reasons of scope marking. Moreover, the placement of adverbials such as 'again' interacts with the acquisition of finiteness marking (resulting in post-verbal placement), while there is no such interaction between adverbials such as 'today' and finiteness marking.
  • Verhoef, E., Demontis, D., Burgess, S., Shapland, C. Y., Dale, P. S., Okbay, A., Neale, B. M., Faraone, S. V., iPSYCH-Broad-PGC ADHD Consortium, Stergiakouli, E., Davey Smith, G., Fisher, S. E., Borglum, A., & St Pourcain, B. (2019). Disentangling polygenic associations between Attention-Deficit/Hyperactivity Disorder, educational attainment, literacy and language. Translational Psychiatry, 9: 35. doi:10.1038/s41398-018-0324-2.

    Abstract

    Interpreting polygenic overlap between ADHD and both literacy-related and language-related impairments is challenging as genetic associations might be influenced by indirectly shared genetic factors. Here, we investigate genetic overlap between polygenic ADHD risk and multiple literacy-related and/or language-related abilities (LRAs), as assessed in UK children (N ≤ 5919), accounting for genetically predictable educational attainment (EA). Genome-wide summary statistics on clinical ADHD and years of schooling were obtained from large consortia (N ≤ 326,041). Our findings show that ADHD-polygenic scores (ADHD-PGS) were inversely associated with LRAs in ALSPAC, most consistently with reading-related abilities, and explained ≤1.6% phenotypic variation. These polygenic links were then dissected into both ADHD effects shared with and independent of EA, using multivariable regressions (MVR). Conditional on EA, polygenic ADHD risk remained associated with multiple reading and/or spelling abilities, phonemic awareness and verbal intelligence, but not listening comprehension and non-word repetition. Using conservative ADHD-instruments (P-threshold < 5 × 10−8), this corresponded, for example, to a 0.35 SD decrease in pooled reading performance per log-odds in ADHD-liability (P = 9.2 × 10−5). Using subthreshold ADHD-instruments (P-threshold < 0.0015), these effects became smaller, with a 0.03 SD decrease per log-odds in ADHD risk (P = 1.4 × 10−6), although the predictive accuracy increased. However, polygenic ADHD-effects shared with EA were of equal strength and at least equal magnitude compared to those independent of EA, for all LRAs studied, and detectable using subthreshold instruments. Thus, ADHD-related polygenic links with LRAs are to a large extent due to shared genetic effects with EA, although there is evidence for an ADHD-specific association profile, independent of EA, that primarily involves literacy-related impairments.

    Additional information

    41398_2018_324_MOESM1_ESM.docx
  • Verkerk, A. (2009). A semantic map of secondary predication. In B. Botma, & J. Van Kampen (Eds.), Linguistics in the Netherlands 2009 (pp. 115-126). Amsterdam: John Benjamins.
  • Vernes, S. C., Devanna, P., Hörpel, S. G., Alvarez van Tussenbroek, I., Firzlaff, U., Hagoort, P., Hiller, M., Hoeksema, N., Hughes, G. M., Lavrichenko, K., Mengede, J., Morales, A. E., & Wiesmann, M. (2022). The pale spear‐nosed bat: A neuromolecular and transgenic model for vocal learning. Annals of the New York Academy of Sciences, 1517, 125-142. doi:10.1111/nyas.14884.

    Abstract

    Vocal learning, the ability to produce modified vocalizations via learning from acoustic signals, is a key trait in the evolution of speech. While extensively studied in songbirds, mammalian models for vocal learning are rare. Bats present a promising study system given their gregarious natures, small size, and the ability of some species to be maintained in captive colonies. We utilize the pale spear-nosed bat (Phyllostomus discolor) and report advances in establishing this species as a tractable model for understanding vocal learning. We have taken an interdisciplinary approach, aiming to provide an integrated understanding across genomics (Part I), neurobiology (Part II), and transgenics (Part III). In Part I, we generated new, high-quality genome annotations of coding genes and noncoding microRNAs to facilitate functional and evolutionary studies. In Part II, we traced connections between auditory-related brain regions and reported neuroimaging to explore the structure of the brain and gene expression patterns to highlight brain regions. In Part III, we created the first successful transgenic bats by manipulating the expression of FoxP2, a speech-related gene. These interdisciplinary approaches are facilitating a mechanistic and evolutionary understanding of mammalian vocal learning and can also contribute to other areas of investigation that utilize P. discolor or bats as study species.

    Additional information

    supplementary materials
  • Vernes, S. C., MacDermot, K. D., Monaco, A. P., & Fisher, S. E. (2009). Assessing the impact of FOXP1 mutations on developmental verbal dyspraxia. European Journal of Human Genetics, 17(10), 1354-1358. doi:10.1038/ejhg.2009.43.

    Abstract

    Neurodevelopmental disorders that disturb speech and language are highly heritable. Isolation of the underlying genetic risk factors has been hampered by complexity of the phenotype and potentially large number of contributing genes. One exception is the identification of rare heterozygous mutations of the FOXP2 gene in a monogenic syndrome characterised by impaired sequencing of articulatory gestures, disrupting speech (developmental verbal dyspraxia, DVD), as well as multiple deficits in expressive and receptive language. The protein encoded by FOXP2 belongs to a divergent subgroup of forkhead-box transcription factors, with a distinctive DNA-binding domain and motifs that mediate hetero- and homodimerisation. FOXP1, the most closely related member of this subgroup, can directly interact with FOXP2 and is co-expressed in neural structures relevant to speech and language disorders. Moreover, investigations of songbird orthologues indicate that combinatorial actions of the two proteins may play important roles in vocal learning, leading to the suggestion that human FOXP1 should be considered a strong candidate for involvement in DVD. Thus, in this study, we screened the entire coding region of FOXP1 (exons and flanking intronic sequence) for nucleotide changes in a panel of probands used earlier to detect novel mutations in FOXP2. A non-synonymous coding change was identified in a single proband, yielding a proline-to-alanine change (P215A). However, this was also found in a random control sample. Analyses of non-coding SNP changes did not find any correlation with affection status. We conclude that FOXP1 mutations are unlikely to represent a major cause of DVD.

    Additional information

    ejhg200943x1.pdf
  • Vernes, S. C. (2019). Neuromolecular approaches to the study of language. In P. Hagoort (Ed.), Human language: From genes and brain to behavior (pp. 577-593). Cambridge, MA: MIT Press.
  • Vernes, S. C., & Fisher, S. E. (2009). Unravelling neurogenetic networks implicated in developmental language disorders. Biochemical Society Transactions (London), 37, 1263-1269. doi:10.1042/BST0371263.

    Abstract

Childhood syndromes disturbing language development are common and display high degrees of heritability. In most cases, the underlying genetic architecture is likely to be complex, involving multiple chromosomal loci and substantial heterogeneity, which makes it difficult to track down the crucial genomic risk factors. Investigation of rare Mendelian phenotypes offers a complementary route for unravelling key neurogenetic pathways. The value of this approach is illustrated by the discovery that heterozygous FOXP2 (where FOX is forkhead box) mutations cause an unusual monogenic disorder, characterized by problems with articulating speech along with deficits in expressive and receptive language. FOXP2 encodes a regulatory protein, belonging to the forkhead box family of transcription factors, known to play important roles in modulating gene expression in development and disease. Functional genetics using human neuronal models suggests that the different FOXP2 isoforms generated by alternative splicing have distinct properties and may act to regulate each other's activity. Such investigations have also analysed the missense and nonsense mutations found in cases of speech and language disorder, showing that they alter intracellular localization, DNA binding and transactivation capacity of the mutated proteins. Moreover, in the brains of mutant mice, aetiological mutations have been found to disrupt the synaptic plasticity of Foxp2-expressing circuitry. Finally, although mutations of FOXP2 itself are rare, the downstream networks which it regulates in the brain appear to be broadly implicated in typical forms of language impairment. Thus, through ongoing identification of regulated targets and interacting co-factors, this gene is providing the first molecular entry points into neural mechanisms that go awry in language-related disorders.
  • Versace, E., Rogge, J. R., Shelton-May, N., & Ravignani, A. (2019). Positional encoding in cotton-top tamarins (Saguinus oedipus). Animal Cognition, 22, 825-838. doi:10.1007/s10071-019-01277-y.

    Abstract

Strategies used in artificial grammar learning can shed light on the abilities of different species to extract regularities from the environment. In the A(X)nB rule, A and B items are linked, but assigned to different positional categories and separated by distractor items. Open questions are how widespread is the ability to extract positional regularities from A(X)nB patterns, which strategies are used to encode positional regularities and whether individuals exhibit preferences for absolute or relative position encoding. We used visual arrays to investigate whether cotton-top tamarins (Saguinus oedipus) can learn this rule and which strategies they use. After training on a subset of exemplars, two of the tested monkeys successfully generalized to novel combinations. These tamarins discriminated between categories of tokens with different properties (A, B, X) and detected a positional relationship between non-adjacent items even in the presence of novel distractors. The pattern of errors revealed that successful subjects used visual similarity with training stimuli to solve the task and that successful tamarins extracted the relative position of As and Bs rather than their absolute position, similarly to what has been observed in other species. Relative position encoding appears to be favoured in different tasks and taxa. Generalization, though, was incomplete, since we observed a failure with items that during training had always been presented in reinforced arrays, showing the limitations in grasping the underlying positional rule. These results suggest the use of local strategies in the extraction of positional rules in cotton-top tamarins.

    Additional information

    Supplementary file
  • Verspeek, J., Staes, N., Van Leeuwen, E. J. C., Eens, M., & Stevens, J. M. G. (2019). Bonobo personality predicts friendship. Scientific Reports, 9: 19245. doi:10.1038/s41598-019-55884-3.

    Abstract

    In bonobos, strong bonds have been documented between unrelated females and between mothers and their adult sons, which can have important fitness benefits. Often age, sex or kinship similarity have been used to explain social bond strength variation. Recent studies in other species also stress the importance of personality, but this relationship remains to be investigated in bonobos. We used behavioral observations on 39 adult and adolescent bonobos housed in 5 European zoos to study the role of personality similarity in dyadic relationship quality. Dimension reduction analyses on individual and dyadic behavioral scores revealed multidimensional personality (Sociability, Openness, Boldness, Activity) and relationship quality components (value, compatibility). We show that, aside from relatedness and sex combination of the dyad, relationship quality is also associated with personality similarity of both partners. While similarity in Sociability resulted in higher relationship values, lower relationship compatibility was found between bonobos with similar Activity scores. The results of this study expand our understanding of the mechanisms underlying social bond formation in anthropoid apes. In addition, we suggest that future studies in closely related species like chimpanzees should implement identical methods for assessing bond strength to shed further light on the evolution of this phenomenon.

  • Vessel, E. A., Ishizu, T., & Bignardi, G. (2022). Neural correlates of visual aesthetic appeal. In M. Skov, & M. Nadal (Eds.), The Routledge international handbook of neuroaesthetics (pp. 103-133). London: Routledge.
  • De Vignemont, F., Majid, A., Jola, C., & Haggard, P. (2009). Segmenting the body into parts: Evidence from biases in tactile perception. Quarterly Journal of Experimental Psychology, 62, 500-512. doi:10.1080/17470210802000802.

    Abstract

    How do we individuate body parts? Here, we investigated the effect of body segmentation between hand and arm in tactile and visual perception. In a first experiment, we showed that two tactile stimuli felt farther away when they were applied across the wrist than when they were applied within a single body part (palm or forearm), indicating a “category boundary effect”. In the following experiments, we excluded two hypotheses, which attributed tactile segmentation to other, nontactile factors. In Experiment 2, we showed that the boundary effect does not arise from motor cues. The effect was reduced during a motor task involving flexion and extension movements of the wrist joint. Action brings body parts together into functional units, instead of pulling them apart. In Experiments 3 and 4, we showed that the effect does not arise from perceptual cues of visual discontinuities. We did not find any segmentation effect for the visual percept of the body in Experiment 3, nor for a neutral shape in Experiment 4. We suggest that the mental representation of the body is structured in categorical body parts delineated by joints, and that this categorical representation modulates tactile spatial perception.
  • Visser, I., Bergmann, C., Byers-Heinlein, K., Dal Ben, R., Duch, W., Forbes, S., Franchin, L., Frank, M., Geraci, A., Hamlin, J. K., Kaldy, Z., Kulke, L., Laverty, C., Lew-Williams, C., Mateu, V., Mayor, J., Moreau, D., Nomikou, I., Schuwerk, T., Simpson, E., Singh, L., Soderstrom, M., Sullivan, J., Van den Heuvel, M. I., Westermann, G., Yamada, Y., Zaadnoordijk, L., & Zettersten, M. (2022). Improving the generalizability of infant psychological research: The ManyBabies model. Behavioral and Brain Sciences, 45: e35. doi:10.1017/S0140525X21000455.

    Abstract

    Yarkoni’s analysis clearly articulates a number of concerns limiting the generalizability and explanatory power of psychological findings, many of which are compounded in infancy research. ManyBabies addresses these concerns via a radically collaborative, large-scale and open approach to research that is grounded in theory-building, committed to diversification, and focused on understanding sources of variation.
  • Vogelezang, S., Bradfield, J. P., the Early Growth Genetics Consortium, Grant, S. F. A., Felix, J. F., & Jaddoe, V. W. V. (2022). Genetics of early-life head circumference and genetic correlations with neurological, psychiatric and cognitive outcomes. BMC Medical Genomics, 15: 124. doi:10.1186/s12920-022-01281-1.

    Abstract

    Background

    Head circumference is associated with intelligence and tracks from childhood into adulthood.
    Methods

    We performed a genome-wide association study meta-analysis and follow-up of head circumference in a total of 29,192 participants between 6 and 30 months of age.
    Results

    Seven loci reached genome-wide significance in the combined discovery and replication analysis, of which three, near ARFGEF2, MYCL1, and TOP1, were novel. We observed positive genetic correlations for early-life head circumference with adult intracranial volume, years of schooling, childhood and adult intelligence, but not with adult psychiatric, neurological, or personality-related phenotypes.
    Conclusions

    The results of this study indicate that the biological processes underlying early-life head circumference overlap largely with those of adult head circumference. The associations of early-life head circumference with cognitive outcomes across the life course are partly explained by genetics.
  • Von Stutterheim, C., Carroll, M., & Klein, W. (2009). New perspectives in analyzing aspectual distinctions across languages. In W. Klein, & P. Li (Eds.), The expression of time (pp. 195-216). Berlin: Mouton de Gruyter.
  • De Vos, C., Casillas, M., Uittenbogert, T., Crasborn, O., & Levinson, S. C. (2022). Predicting conversational turns: Signers’ and non-signers’ sensitivity to language-specific and globally accessible cues. Language, 98(1), 35-62. doi:10.1353/lan.2021.0085.

    Abstract

    Precision turn-taking may constitute a crucial part of the human endowment for communication. If so, it should be implemented similarly across language modalities, as in signed vs. spoken language. Here in the first experimental study of turn-end prediction in sign language, we find support for the idea that signed language, like spoken language, involves turn-type prediction and turn-end anticipation. In both cases, turns eliciting specific responses like questions accelerate anticipation. We also show remarkable cross-modality predictive capacity: non-signers anticipate sign turn-ends surprisingly well. Finally, we show that despite non-signers’ ability to intuitively predict signed turn-ends, early native signers do it much better by using their access to linguistic signals (here, question markers). As shown in prior work, question formation facilitates prediction, and age of sign language acquisition affects accuracy. The study thus sheds light on the kind of features that may facilitate turn-taking universally, and those that are language-specific.

  • De Vos, C. (2009). [Review of the book Language complexity as an evolving variable ed. by Geoffrey Sampson, David Gil and Peter Trudgill]. LINGUIST List, 20.4275. Retrieved from http://linguistlist.org/issues/20/20-4275.html.
  • De Vos, C., Van der Kooij, E., & Crasborn, O. (2009). Mixed signals: Combining linguistic and affective functions of eyebrows in questions in Sign Language of the Netherlands. Language and Speech, 52(2/3), 315-339. doi:10.1177/0023830909103177.

    Abstract

    The eyebrows are used as conversational signals in face-to-face spoken interaction (Ekman, 1979). In Sign Language of the Netherlands (NGT), the eyebrows are typically furrowed in content questions, and raised in polar questions (Coerts, 1992). On the other hand, these eyebrow positions are also associated with anger and surprise, respectively, in general human communication (Ekman, 1993). This overlap in the functional load of the eyebrow positions results in a potential conflict for NGT signers when combining these functions simultaneously. In order to investigate the effect of the simultaneous realization of both functions on the eyebrow position we elicited instances of both question types with neutral affect and with various affective states. The data were coded using the Facial Action Coding System (FACS: Ekman, Friesen, & Hager, 2002) for type of brow movement as well as for intensity. FACS allows for the coding of muscle groups, which are termed Action Units (AUs) and which produce facial appearance changes. The results show that linguistic and affective functions of eyebrows may influence each other in NGT. That is, in surprised polar questions and angry content questions a phonetic enhancement takes place of raising and furrowing, respectively. In the items with contrasting eyebrow movements, the grammatical and affective AUs are either blended (occur simultaneously) or they are realized sequentially. Interestingly, the absence of eyebrow raising (marked by AU 1+2) in angry polar questions, and the presence of eyebrow furrowing (realized by AU 4) in surprised content questions, suggest that in general AU 4 may be phonetically stronger than AU 1 and AU 2, independent of its linguistic or affective function.
  • De Vos, J., Schriefers, H., Bosch, L. t., & Lemhöfer, K. (2019). Interactive L2 vocabulary acquisition in a lab-based immersion setting. Language, Cognition and Neuroscience, 34(7), 916-935. doi:10.1080/23273798.2019.1599127.

    Abstract

    We investigated to what extent L2 word learning in spoken interaction takes place when learners are unaware of taking part in a language learning study. Using a novel paradigm for approximating naturalistic (but not necessarily non-intentional) L2 learning in the lab, German learners of Dutch were led to believe that the study concerned judging the price of objects. Dutch target words (object names) were selected individually such that these words were unknown to the respective participant. Then, in a dialogue-like task with the experimenter, the participants were first exposed to and then tested on the target words. In comparison to a no-input control group, we observed a clear learning effect especially from the first two exposures, and better learning for cognates than for non-cognates, but no modulating effect of the exposure-production lag. Moreover, some of the acquired knowledge persisted over a six-month period.
  • De Vos, J. (2019). Naturalistic word learning in a second language. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Vosse, T., & Kempen, G. (2009). In defense of competition during syntactic ambiguity resolution. Journal of Psycholinguistic Research, 38(1), 1-9. doi:10.1007/s10936-008-9075-1.

    Abstract

    In a recent series of publications (Traxler et al. J Mem Lang 39:558–592, 1998; Van Gompel et al. J Mem Lang 52:284–307, 2005; see also Van Gompel et al. (In: Kennedy, et al.(eds) Reading as a perceptual process, Oxford, Elsevier pp 621–648, 2000); Van Gompel et al. J Mem Lang 45:225–258, 2001) eye tracking data are reported showing that globally ambiguous (GA) sentences are read faster than locally ambiguous (LA) counterparts. They argue that these data rule out “constraint-based” models where syntactic and conceptual processors operate concurrently and syntactic ambiguity resolution is accomplished by competition. Such models predict the opposite pattern of reading times. However, this argument against competition is valid only in conjunction with two standard assumptions in current constraint-based models of sentence comprehension: (1) that syntactic competitions (e.g., Which is the best attachment site of the incoming constituent?) are pooled together with conceptual competitions (e.g., Which attachment site entails the most plausible meaning?), and (2) that the duration of a competition is a function of the overall (pooled) quality score obtained by each competitor. We argue that it is not necessary to abandon competition as a successful basis for explaining parsing phenomena and that the above-mentioned reading time data can be accounted for by a parallel-interactive model with conceptual and syntactic processors that do not pool their quality scores together. Within the individual linguistic modules, decision-making can very well be competition-based.
  • Vosse, T., & Kempen, G. (2000). Syntactic structure assembly in human parsing: A computational model based on competitive inhibition and a lexicalist grammar. Cognition, 75, 105-143.

    Abstract

    We present the design, implementation and simulation results of a psycholinguistic model of human syntactic processing that meets major empirical criteria. The parser operates in conjunction with a lexicalist grammar and is driven by syntactic information associated with heads of phrases. The dynamics of the model are based on competition by lateral inhibition ('competitive inhibition'). Input words activate lexical frames (i.e. elementary trees anchored to input words) in the mental lexicon, and a network of candidate 'unification links' is set up between frame nodes. These links represent tentative attachments that are graded rather than all-or-none. Candidate links that, due to grammatical or 'treehood' constraints, are incompatible, compete for inclusion in the final syntactic tree by sending each other inhibitory signals that reduce the competitor's attachment strength. The outcome of these local and simultaneous competitions is controlled by dynamic parameters, in particular by the Entry Activation and the Activation Decay rate of syntactic nodes, and by the Strength and Strength Build-up rate of Unification links. In case of a successful parse, a single syntactic tree is returned that covers the whole input string and consists of lexical frames connected by winning Unification links. 
Simulations are reported of a significant range of psycholinguistic parsing phenomena in both normal and aphasic speakers of English: (i) various effects of linguistic complexity (single versus double, center versus right-hand self-embeddings of relative clauses; the difference between relative clauses with subject and object extraction; the contrast between a complement clause embedded within a relative clause versus a relative clause embedded within a complement clause); (ii) effects of local and global ambiguity, and of word-class and syntactic ambiguity (including recency and length effects); (iii) certain difficulty-of-reanalysis effects (contrasts between local ambiguities that are easy to resolve versus ones that lead to serious garden-path effects); (iv) effects of agrammatism on parsing performance, in particular the performance of various groups of aphasic patients on several sentence types.
  • Vosse, T., & Kempen, G. (2009). The Unification Space implemented as a localist neural net: Predictions and error-tolerance in a constraint-based parser. Cognitive Neurodynamics, 3, 331-346. doi:10.1007/s11571-009-9094-0.

    Abstract

    We introduce a novel computer implementation of the Unification-Space parser (Vosse & Kempen 2000) in the form of a localist neural network whose dynamics is based on interactive activation and inhibition. The wiring of the network is determined by Performance Grammar (Kempen & Harbusch 2003), a lexicalist formalism with feature unification as binding operation. While the network is processing input word strings incrementally, the evolving shape of parse trees is represented in the form of changing patterns of activation in nodes that code for syntactic properties of words and phrases, and for the grammatical functions they fulfill. The system is capable, at least in a qualitative and rudimentary sense, of simulating several important dynamic aspects of human syntactic parsing, including garden-path phenomena and reanalysis, effects of complexity (various types of clause embeddings), fault-tolerance in case of unification failures and unknown words, and predictive parsing (expectation-based analysis, surprisal effects). English is the target language of the parser described.
  • Wagner, M. A., Broersma, M., McQueen, J. M., & Lemhöfer, K. (2019). Imitating speech in an unfamiliar language and an unfamiliar non-native accent in the native language. In S. Calhoun, P. Escudero, M. Tabain, & P. Warren (Eds.), Proceedings of the 19th International Congress of Phonetic Sciences (ICPhS 2019) (pp. 1362-1366). Canberra, Australia: Australasian Speech Science and Technology Association Inc.

    Abstract

    This study concerns individual differences in speech imitation ability and the role that lexical representations play in imitation. We examined 1) whether imitation of sounds in an unfamiliar language (L0) is related to imitation of sounds in an unfamiliar non-native accent in the speaker's native language (L1) and 2) whether it is easier or harder to imitate speech when you know the words to be imitated. Fifty-nine native Dutch speakers imitated words with target vowels in Basque (/a/ and /e/) and Greek-accented Dutch (/i/ and /u/). Spectral and durational analyses of the target vowels revealed no relationship between the success of L0 and L1 imitation and no difference in performance between tasks (i.e., L1 imitation was neither aided nor blocked by lexical knowledge about the correct pronunciation). The results suggest instead that the relationship of the vowels to native phonological categories plays a bigger role in imitation.
  • Wang, L., Hagoort, P., & Yang, Y. (2009). Semantic illusion depends on information structure: ERP evidence. Brain Research, 1282, 50-56. doi:10.1016/j.brainres.2009.05.069.

    Abstract

    Next to propositional content, speakers distribute information in their utterances in such a way that listeners can make a distinction between new (focused) and given (non-focused) information. This is referred to as information structure. We measured event-related potentials (ERPs) to explore the role of information structure in semantic processing. Following different questions in wh-question-answer pairs (e.g. What kind of vegetable did Ming buy for cooking today? /Who bought the vegetables for cooking today?), the answer sentences (e.g., Ming bought eggplant/beef to cook today.) contained a critical word, which was either semantically appropriate (eggplant) or inappropriate (beef), and either focus or non-focus. The results showed a full N400 effect only when the critical words were in focus position. In non-focus position a strongly reduced N400 effect was observed, in line with the well-known semantic illusion effect. The results suggest that information structure facilitates semantic processing by devoting more resources to focused information.
  • Wanner-Kawahara, J., Yoshihara, M., Lupker, S. J., Verdonschot, R. G., & Nakayama, M. (2022). Morphological priming effects in L2 English verbs for Japanese-English bilinguals. Frontiers in Psychology, 13: 742965. doi:10.3389/fpsyg.2022.742965.

    Abstract

    For native (L1) English readers, masked presentations of past-tense verb primes (e.g., fell and looked) produce faster lexical decision latencies to their present-tense targets (e.g., FALL and LOOK) than orthographically related (e.g., fill and loose) or unrelated (e.g., master and bank) primes. This facilitation observed with morphologically related prime-target pairs (morphological priming) is generally taken as evidence for strong connections based on morphological relationships in the L1 lexicon. It is unclear, however, if similar, morphologically based, connections develop in non-native (L2) lexicons. Several earlier studies with L2 English readers have reported mixed results. The present experiments examine whether past-tense verb primes (both regular and irregular verbs) significantly facilitate target lexical decisions for Japanese-English bilinguals beyond any facilitation provided by prime-target orthographic similarity. Overall, past-tense verb primes facilitated lexical decisions to their present-tense targets relative to both orthographically related and unrelated primes. Replicating previous masked priming experiments with L2 readers, orthographically related primes also facilitated target recognition relative to unrelated primes, confirming that orthographic similarity facilitates L2 target recognition. The additional facilitation from past-tense verb primes beyond that provided by orthographic primes suggests that, in the L2 English lexicon, connections based on morphological relationships develop in a way that is similar to how they develop in the L1 English lexicon even though the connections and processing of lower level, lexical/orthographic information may differ. Further analyses involving L2 proficiency revealed that as L2 proficiency increased, orthographic facilitation was reduced, indicating that there is a decrease in the fuzziness in orthographic representations in the L2 lexicon with increased proficiency.

  • Warner, N., Fountain, A., & Tucker, B. V. (2009). Cues to perception of reduced flaps. Journal of the Acoustical Society of America, 125(5), 3317-3327. doi:10.1121/1.3097773.

    Abstract

    Natural, spontaneous speech (and even quite careful speech) often shows extreme reduction in many speech segments, even resulting in apparent deletion of consonants. Where the flap ([ɾ]) allophone of /t/ and /d/ is expected in American English, one frequently sees an approximant-like or even vocalic pattern, rather than a clear flap. Still, the /t/ or /d/ is usually perceived, suggesting the acoustic characteristics of a reduced flap are sufficient for perception of a consonant. This paper identifies several acoustic characteristics of reduced flaps based on previous acoustic research (size of intensity dip, consonant duration, and F4 valley) and presents phonetic identification data for continua that manipulate these acoustic characteristics of reduction. The results indicate that the most obvious types of acoustic variability seen in natural flaps do affect listeners' percept of a consonant, but not sufficiently to completely account for the percept. Listeners are affected by the acoustic characteristics of consonant reduction, but they are also very skilled at evaluating variability along the acoustic dimensions that realize reduction.

  • Warner, N., Luna, Q., Butler, L., & Van Volkinburg, H. (2009). Revitalization in a scattered language community: Problems and methods from the perspective of Mutsun language revitalization. International Journal of the Sociology of Language, 198, 135-148. doi:10.1515/IJSL.2009.031.

    Abstract

    This article addresses revitalization of a dormant language whose prospective speakers live in scattered geographical areas. In comparison to increasing the usage of an endangered language, revitalizing a dormant language (one with no living speakers) requires different methods to gain knowledge of the language. Language teaching for a dormant language with a scattered community presents different problems from other teaching situations. In this article, we discuss the types of tasks that must be accomplished for dormant-language revitalization, with particular focus on development of teaching materials. We also address the role of computer technologies, arguing that each use of technology should be evaluated for how effectively it increases fluency. We discuss methods for achieving semi-fluency for the first new speakers of a dormant language, and for spreading the language through the community.
  • Warren, C. M., Tona, K. D., Ouwekerk, L., Van Paridon, J., Poletiek, F. H., Bosch, J. A., & Nieuwenhuis, S. (2019). The neuromodulatory and hormonal effects of transcutaneous vagus nerve stimulation as evidenced by salivary alpha amylase, salivary cortisol, pupil diameter, and the P3 event-related potential. Brain Stimulation, 12(3), 635-642. doi:10.1016/j.brs.2018.12.224.

    Abstract

    Background

    Transcutaneous vagus nerve stimulation (tVNS) is a new, non-invasive technique being investigated as an intervention for a variety of clinical disorders, including epilepsy and depression. It is thought to exert its therapeutic effect by increasing central norepinephrine (NE) activity, but the evidence supporting this notion is limited.
    Objective

    In order to test for an impact of tVNS on psychophysiological and hormonal indices of noradrenergic function, we applied tVNS in concert with assessment of salivary alpha amylase (SAA) and cortisol, pupil size, and electroencephalograph (EEG) recordings.
    Methods

    Across three experiments, we applied real and sham tVNS to 61 healthy participants while they performed a set of simple stimulus-discrimination tasks. Before and after the task, as well as during one break, participants provided saliva samples and had their pupil size recorded. EEG was recorded throughout the task. The target for tVNS was the cymba conchae, which is heavily innervated by the auricular branch of the vagus nerve. Sham stimulation was applied to the ear lobe.
    Results

    P3 amplitude was not affected by tVNS (Experiment 1A: N=24; Experiment 1B: N=20; Bayes factor supporting null model=4.53), nor was pupil size (Experiment 2: N=16; interaction of treatment and time: p=0.79). However, tVNS increased SAA (Experiments 1A and 2: N=25) and attenuated the decline of salivary cortisol compared to sham (Experiment 2: N=17), as indicated by significant interactions involving treatment and time (p=.023 and p=.040, respectively).
    Conclusion

    These findings suggest that tVNS modulates hormonal indices but not psychophysiological indices of noradrenergic function.
  • Weber, A. (1998). Listening to nonnative language which violates native assimilation rules. In D. Duez (Ed.), Proceedings of the European Scientific Communication Association workshop: Sound patterns of Spontaneous Speech (pp. 101-104).

    Abstract

    Recent studies using phoneme detection tasks have shown that spoken-language processing is neither facilitated nor interfered with by optional assimilation, but is inhibited by violation of obligatory assimilation. Interpretation of these results depends on an assessment of their generality, specifically, whether they also obtain when listeners are processing nonnative language. Two separate experiments are presented in which native listeners of German and native listeners of Dutch had to detect a target fricative in legal monosyllabic Dutch nonwords. All of the nonwords were correct realisations in standard Dutch. For German listeners, however, half of the nonwords contained phoneme strings which violate the German fricative assimilation rule. Whereas the Dutch listeners showed no significant effects, German listeners detected the target fricative faster when the German fricative assimilation was violated than when no violation occurred. The results might suggest that violation of assimilation rules does not have to make processing more difficult per se.
  • Weber, A. (2000). Phonotactic and acoustic cues for word segmentation in English. In Proceedings of the 6th International Conference on Spoken Language Processing (ICSLP 2000) (pp. 782-785).

    Abstract

    This study investigates the influence of both phonotactic and acoustic cues on the segmentation of spoken English. Listeners detected embedded English words in nonsense sequences (word spotting). Words aligned with phonotactic boundaries were easier to detect than words without such alignment. Acoustic cues to boundaries could also have signaled word boundaries, especially when word onsets lacked phonotactic alignment. However, only one of several durational boundary cues showed a marginally significant correlation with response times (RTs). The results suggest that word segmentation in English is influenced primarily by phonotactic constraints and only secondarily by acoustic aspects of the speech signal.
  • Weber, K., Christiansen, M., Indefrey, P., & Hagoort, P. (2019). Primed from the start: Syntactic priming during the first days of language learning. Language Learning, 69(1), 198-221. doi:10.1111/lang.12327.

    Abstract

    New linguistic information must be integrated into our existing language system. Using a novel experimental task that incorporates a syntactic priming paradigm into artificial language learning, we investigated how new grammatical regularities and words are learned. This innovation allowed us to control the language input the learner received, while the syntactic priming paradigm provided insight into the nature of the underlying syntactic processing machinery. The results of the present study pointed to facilitatory syntactic processing effects within the first days of learning: Syntactic and lexical priming effects revealed participants’ sensitivity to both novel words and word orders. This suggested that novel syntactic structures and their meaning (form–function mapping) can be acquired rapidly through incidental learning. More generally, our study indicated similar mechanisms for learning and processing in both artificial and natural languages, with implications for the relationship between first and second language learning.
  • Weber, K., Micheli, C., Ruigendijk, E., & Rieger, J. (2019). Sentence processing is modulated by the current linguistic environment and a priori information: An fMRI study. Brain and Behavior, 9(7): e01308. doi:10.1002/brb3.1308.

    Abstract

    Introduction
    Words are not processed in isolation but in rich contexts that are used to modulate and facilitate language comprehension. Here, we investigate distinct neural networks underlying two types of contexts, the current linguistic environment and verb‐based syntactic preferences.

    Methods
    We had two main manipulations. The first was the current linguistic environment, where the relative frequencies of two syntactic structures (prepositional object [PO] and double‐object [DO]) would either follow everyday linguistic experience or not. The second concerned the preference toward one or the other structure depending on the verb; learned in everyday language use and stored in memory. German participants were reading PO and DO sentences in German while brain activity was measured with functional magnetic resonance imaging.

    Results
    First, the anterior cingulate cortex (ACC) showed a pattern of activation that integrated the current linguistic environment with everyday linguistic experience. When the input did not match everyday experience, the unexpected frequent structure showed higher activation in the ACC than the other conditions and more connectivity from the ACC to posterior parts of the language network. Second, verb‐based surprisal of seeing a structure given a verb (PO verb preference but DO structure presentation) resulted, within the language network (left inferior frontal and left middle/superior temporal gyrus) and the precuneus, in increased activation compared to a predictable verb‐structure pairing.

    Conclusion
    In conclusion, (1) beyond the canonical language network, brain areas engaged in prediction and error signaling, such as the ACC, might use the statistics of syntactic structures to modulate language processing, (2) the language network is directly engaged in processing verb preferences. These two networks show distinct influences on sentence processing.

  • Weber, K., & Indefrey, P. (2009). Syntactic priming in German–English bilinguals during sentence comprehension. Neuroimage, 46, 1164-1172. doi:10.1016/j.neuroimage.2009.03.040.

    Abstract

    A longstanding question in bilingualism is whether syntactic information is shared between the two language processing systems. We used an fMRI repetition suppression paradigm to investigate syntactic priming in reading comprehension in German–English late-acquisition bilinguals. In comparison to conventional subtraction analyses in bilingual experiments, repetition suppression has the advantage of being able to detect neuronal populations that are sensitive to properties that are shared by consecutive stimuli. In this study, we manipulated the syntactic structure between prime and target sentences. A sentence with a passive sentence structure in English was preceded either by a passive or by an active sentence in English or German. We looked for repetition suppression effects in left inferior frontal, left precentral and left middle temporal regions of interest. These regions were defined by a contrast of all non-target sentences in German and English versus the baseline of sentence-format consonant strings. We found decreases in activity (repetition suppression effects) in these regions of interest following the repetition of syntactic structure from the first to the second language and within the second language.
    Moreover, a separate behavioural experiment using a word-by-word reading paradigm similar to the fMRI experiment showed faster reading times for primed compared to unprimed English target sentences regardless of whether they were preceded by an English or a German sentence of the same structure.
    We conclude that there is interaction between the language processing systems and that at least some syntactic information is shared between a bilingual's languages with similar syntactic structures.

  • Weber, A. (2009). The role of linguistic experience in lexical recognition [Abstract]. Journal of the Acoustical Society of America, 125, 2759.

    Abstract

    Lexical recognition is typically slower in L2 than in L1. Part of the difficulty comes from insufficiently precise processing of L2 phonemes. Consequently, L2 listeners fail to eliminate candidate words that L1 listeners can exclude from competing for recognition. For instance, the inability to distinguish /r/ from /l/ in rocket and locker makes both words possible candidates for Japanese listeners when they hear the words’ onsets (e.g., Cutler, Weber, and Otake, 2006). The L2 disadvantage can, however, be dispelled: For L2 listeners, but not L1 listeners, L2 speech from a non-native talker with the same language background is known to be as intelligible as L2 speech from a native talker (e.g., Bent and Bradlow, 2003). A reason for this may be that L2 listeners have ample experience with the segmental deviations that are characteristic of their own accent. On this account, only phonemic deviations that are typical of the listeners’ own accent will cause spurious lexical activation in L2 listening (e.g., English magic pronounced as megic for Dutch listeners). In this talk, I will present evidence from cross-modal priming studies with a variety of L2 listener groups, showing how the processing of phonemic deviations is accent-specific but withstands fine phonetic differences.
  • Weber, A. (2000). The role of phonotactics in the segmentation of native and non-native continuous speech. In A. Cutler, J. M. McQueen, & R. Zondervan (Eds.), Proceedings of SWAP, Workshop on Spoken Word Access Processes. Nijmegen: MPI for Psycholinguistics.

    Abstract

    Previous research has shown that listeners make use of their knowledge of phonotactic constraints to segment speech into individual words. The present study investigates the influence of phonotactics when segmenting a non-native language. German and English listeners detected embedded English words in nonsense sequences. German listeners also had knowledge of English, but English listeners had no knowledge of German. Word onsets were either aligned with a syllable boundary or not, according to the phonotactics of the two languages. Words aligned with either German or English phonotactic boundaries were easier for German listeners to detect than words without such alignment. Responses of English listeners were influenced primarily by English phonotactic alignment. The results suggest that both native and non-native phonotactic constraints influence lexical segmentation of a non-native, but familiar, language.
  • Wells, J. B., Christiansen, M. H., Race, D. S., Acheson, D. J., & MacDonald, M. C. (2009). Experience and sentence processing: Statistical learning and relative clause comprehension. Cognitive Psychology, 58(2), 250-271. doi:10.1016/j.cogpsych.2008.08.002.

    Abstract

    Many explanations of the difficulties associated with interpreting object relative clauses appeal to the demands that object relatives make on working memory. MacDonald and Christiansen [MacDonald, M. C., & Christiansen, M. H. (2002). Reassessing working memory: Comment on Just and Carpenter (1992) and Waters and Caplan (1996). Psychological Review, 109, 35-54] pointed to variations in reading experience as a source of differences, arguing that the unique word order of object relatives makes their processing more difficult and more sensitive to the effects of previous experience than the processing of subject relatives. This hypothesis was tested in a large-scale study manipulating reading experiences of adults over several weeks. The group receiving relative clause experience increased reading speeds for object relatives more than for subject relatives, whereas a control experience group did not. The reading time data were compared to performance of a computational model given different amounts of experience. The results support claims for experience-based individual differences and an important role for statistical learning in sentence comprehension processes.
  • Whitehead, H., & Hersh, T. A. (2022). Posterior probabilities of membership of repertoires in acoustic clades. PLoS One, 17(4): e0267501. doi:10.1371/journal.pone.0267501.

    Abstract

    Recordings of calls may be used to assess population structure for acoustic species. This can be particularly effective if there are identity calls, produced nearly exclusively by just one population segment. The identity call method, IDcall, classifies calls into types using contaminated mixture models, and then clusters repertoires of calls into identity clades (potential population segments) using identity calls that are characteristic of the repertoires in each identity clade. We show how to calculate the Bayesian posterior probabilities that each repertoire is a member of each identity clade, and display this information as a stacked bar graph. This methodology (IDcallPP) is introduced using the output of IDcall but could easily be adapted to estimate posterior probabilities of clade membership when acoustic clades are delineated using other methods. This output is similar to that of the STRUCTURE software which uses molecular genetic data to assess population structure and has become a standard in conservation genetics. The technique introduced here should be a valuable asset to those who use acoustic data to address evolution, ecology, or conservation, and creates a methodological and conceptual bridge between geneticists and acousticians who aim to assess population structure.
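    The posterior calculation described in this abstract follows Bayes' rule: the posterior probability that a repertoire belongs to a clade is proportional to the likelihood of the repertoire under that clade times the clade prior, normalized across clades. A minimal sketch with assumed inputs (per-clade log-likelihoods and priors; the actual IDcallPP computation is more involved):

    ```python
    import math

    def clade_posteriors(log_likelihoods, priors):
        """Bayes-rule sketch (assumed notation, not the IDcallPP code):
        given per-clade log-likelihoods of a repertoire and clade priors,
        return normalized posterior membership probabilities."""
        # subtract the max log-likelihood for numerical stability
        max_ll = max(log_likelihoods.values())
        unnorm = {clade: math.exp(ll - max_ll) * priors[clade]
                  for clade, ll in log_likelihoods.items()}
        total = sum(unnorm.values())
        return {clade: value / total for clade, value in unnorm.items()}
    ```

    The per-repertoire posteriors returned here are the quantities a stacked bar graph of clade membership would display, one bar per repertoire.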
  • Wierenga, L. M., Doucet, G. E., Dima, D., Agartz, I., Aghajani, M., Akudjedu, T. N., Albajes-Eizagirre, A., Alnæs, D., Alpert, K. I., Andreassen, O. A., Anticevic, A., Asherson, P., Banaschewski, T., Bargallo, N., Baumeister, S., Baur-Streubel, R., Bertolino, A., Bonvino, A., Boomsma, D. I., Borgwardt, S., and 139 more (2022). Greater male than female variability in regional brain structure across the lifespan. Human Brain Mapping, 43(1), 470-499. doi:10.1002/hbm.25204.

    Abstract

    For many traits, males show greater variability than females, with possible implications for understanding sex differences in health and disease. Here, the ENIGMA (Enhancing Neuro Imaging Genetics through Meta‐Analysis) Consortium presents the largest‐ever mega‐analysis of sex differences in variability of brain structure, based on international data spanning nine decades of life. Subcortical volumes, cortical surface area and cortical thickness were assessed in MRI data of 16,683 healthy individuals 1‐90 years old (47% females). We observed significant patterns of greater male than female between‐subject variance for all subcortical volumetric measures, all cortical surface area measures, and 60% of cortical thickness measures. This pattern was stable across the lifespan for 50% of the subcortical structures, 70% of the regional area measures, and nearly all regions for thickness. Our findings that these sex differences are present in childhood implicate early life genetic or gene‐environment interaction mechanisms. The findings highlight the importance of individual differences within the sexes, that may underpin sex‐specific vulnerability to disorders.
  • Willems, R. M., Toni, I., Hagoort, P., & Casasanto, D. (2009). Body-specific motor imagery of hand actions: Neural evidence from right- and left-handers. Frontiers in Human Neuroscience, 3: 39. doi:10.3389/neuro.09.039.2009.

    Abstract

    If motor imagery uses neural structures involved in action execution, then the neural correlates of imagining an action should differ between individuals who tend to execute the action differently. Here we report fMRI data showing that motor imagery is influenced by the way people habitually perform motor actions with their particular bodies; that is, motor imagery is ‘body-specific’ (Casasanto, 2009). During mental imagery for complex hand actions, activation of cortical areas involved in motor planning and execution was left-lateralized in right-handers but right-lateralized in left-handers. We conclude that motor imagery involves the generation of an action plan that is grounded in the participant’s motor habits, not just an abstract representation at the level of the action’s goal. People with different patterns of motor experience form correspondingly different neurocognitive representations of imagined actions.
  • Willems, R. M., & Hagoort, P. (2009). Broca's region: Battles are not won by ignoring half of the facts. Trends in Cognitive Sciences, 13(3), 101. doi:10.1016/j.tics.2008.12.001.
  • Willems, R. M., Ozyurek, A., & Hagoort, P. (2009). Differential roles for left inferior frontal and superior temporal cortex in multimodal integration of action and language. Neuroimage, 47, 1992-2004. doi:10.1016/j.neuroimage.2009.05.066.

    Abstract

    Several studies indicate that both posterior superior temporal sulcus/middle temporal gyrus (pSTS/MTG) and left inferior frontal gyrus (LIFG) are involved in integrating information from different modalities. Here we investigated the respective roles of these two areas in integration of action and language information. We exploited the fact that the semantic relationship between language and different forms of action (i.e. co-speech gestures and pantomimes) is radically different. Speech and co-speech gestures are always produced together, and gestures are not unambiguously understood without speech. On the contrary, pantomimes are not necessarily produced together with speech and can be easily understood without speech. We presented speech together with these two types of communicative hand actions in matching or mismatching combinations to manipulate semantic integration load. Left and right pSTS/MTG were only involved in semantic integration of speech and pantomimes. Left IFG on the other hand was involved in integration of speech and co-speech gestures as well as of speech and pantomimes. Effective connectivity analyses showed that depending upon the semantic relationship between language and action, LIFG modulates activation levels in left pSTS.

    This suggests that integration in pSTS/MTG involves the matching of two input streams for which there is a relatively stable common object representation, whereas integration in LIFG is better characterized as the on-line construction of a new and unified representation of the input streams. In conclusion, pSTS/MTG and LIFG are differentially involved in multimodal integration, crucially depending upon the semantic relationship between the input streams.

    Additional information

    Supplementary table S1
  • Willems, R. M., & Hagoort, P. (2009). Hand preference influences neural correlates of action observation. Brain Research, 1269, 90-104. doi:10.1016/j.brainres.2009.02.057.

    Abstract

    It has been argued that we map observed actions onto our own motor system. Here we added to this issue by investigating whether hand preference influences the neural correlates of action observation of simple, essentially meaningless hand actions. Such an influence would argue for an intricate neural coupling between action production and action observation, which goes beyond effects of motor repertoire or explicit motor training, as has been suggested before. Indeed, parts of the human motor system exhibited a close coupling between action production and action observation. Ventral premotor and inferior and superior parietal cortices showed differential activation for left- and right-handers that was similar during action production as well as during action observation. This suggests that mapping observed actions onto the observer's own motor system is a core feature of action observation - at least for actions that do not have a clear goal or meaning. Basic differences in the way we act upon the world are not only reflected in neural correlates of action production, but can also influence the brain basis of action observation.
  • Willems, R. M. (2009). Neural reflections of meaning in gesture, language, and action. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Wilms, V., Drijvers, L., & Brouwer, S. (2022). The effects of iconic gestures and babble language on word intelligibility in sentence context. Journal of Speech, Language, and Hearing Research, 65, 1822-1838. doi:10.1044/2022_JSLHR-21-00387.

    Abstract

    Purpose: This study investigated to what extent iconic co-speech gestures help word intelligibility in sentence context in two different linguistic maskers (native vs. foreign). It was hypothesized that sentence recognition improves with the presence of iconic co-speech gestures and with foreign compared to native babble. Method: Thirty-two native Dutch participants performed a Dutch word recognition task in context in which they were presented with videos in which an actress uttered short Dutch sentences (e.g., Ze begint te openen, “She starts to open”). Participants were presented with a total of six audiovisual conditions: no background noise (i.e., clear condition) without gesture, no background noise with gesture, French babble without gesture, French babble with gesture, Dutch babble without gesture, and Dutch babble with gesture; and they were asked to type down what was said by the Dutch actress. The accurate identification of the action verbs at the end of the target sentences was measured. Results: The results demonstrated that performance on the task was better in the gesture compared to the nongesture conditions (i.e., gesture enhancement effect). In addition, performance was better in French babble than in Dutch babble. Conclusions: Listeners benefit from iconic co-speech gestures during communication and from foreign background speech compared to native. These insights into multimodal communication may be valuable to everyone who engages in multimodal communication and especially to a public who often works in public places where competing speech is present in the background.
  • Wirthlin, M., Chang, E. F., Knörnschild, M., Krubitzer, L. A., Mello, C. V., Miller, C. T., Pfenning, A. R., Vernes, S. C., Tchernichovski, O., & Yartsev, M. M. (2019). A modular approach to vocal learning: Disentangling the diversity of a complex behavioral trait. Neuron, 104(1), 87-99. doi:10.1016/j.neuron.2019.09.036.

    Abstract

    Vocal learning is a behavioral trait in which the social and acoustic environment shapes the vocal repertoire of individuals. Over the past century, the study of vocal learning has progressed at the intersection of ecology, physiology, neuroscience, molecular biology, genomics, and evolution. Yet, despite the complexity of this trait, vocal learning is frequently described as a binary trait, with species being classified as either vocal learners or vocal non-learners. As a result, studies have largely focused on a handful of species for which strong evidence for vocal learning exists. Recent studies, however, suggest a continuum in vocal learning capacity across taxa. Here, we further suggest that vocal learning is a multi-component behavioral phenotype comprised of distinct yet interconnected modules. Discretizing the vocal learning phenotype into its constituent modules would facilitate integration of findings across a wider diversity of species, taking advantage of the ways in which each excels in a particular module, or in a specific combination of features. Such comparative studies can improve understanding of the mechanisms and evolutionary origins of vocal learning. We propose an initial set of vocal learning modules supported by behavioral and neurobiological data and highlight the need for diversifying the field in order to disentangle the complexity of the vocal learning phenotype.

  • Wittek, A. (1998). Learning verb meaning via adverbial modification: Change-of-state verbs in German and the adverb "wieder" again. In A. Greenhill, M. Hughes, H. Littlefield, & H. Walsh (Eds.), Proceedings of the 22nd Annual Boston University Conference on Language Development (pp. 779-790). Somerville, MA: Cascadilla Press.
  • Wnuk, E., Verkerk, A., Levinson, S. C., & Majid, A. (2022). Color technology is not necessary for rich and efficient color language. Cognition, 229: 105223. doi:10.1016/j.cognition.2022.105223.

    Abstract

    The evolution of basic color terms in language is claimed to be stimulated by technological development, involving technological control of color or exposure to artificially colored objects. Accordingly, technologically “simple” non-industrialized societies are expected to have poor lexicalization of color, i.e., only rudimentary lexica of 2, 3 or 4 basic color terms, with unnamed gaps in the color space. While it may indeed be the case that technology stimulates lexical growth of color terms, it is sometimes considered a sine qua non for color salience and lexicalization. We provide novel evidence that this overlooks the role of the natural environment, and people's engagement with the environment, in the evolution of color vocabulary. We introduce the Maniq—nomadic hunter-gatherers with no color technology, but who have a basic color lexicon of 6 or 7 terms, thus of the same order as large languages like Vietnamese and Hausa, and who routinely talk about color. We examine color language in Maniq and compare it to available data in other languages to demonstrate it has remarkably high consensual color term usage, on a par with English, and high coding efficiency. This shows colors can matter even for non-industrialized societies, suggesting technology is not necessary for color language. Instead, factors such as perceptual prominence of color in natural environments, its practical usefulness across communicative contexts, and symbolic importance can all stimulate elaboration of color language.
  • Woensdregt, M., Jara-Ettinger, J., & Rubio-Fernandez, P. (2022). Language universals rely on social cognition: Computational models of the use of this and that to redirect the receiver’s attention. In J. Culbertson, A. Perfors, H. Rabagliati, & V. Ramenzoni (Eds.), Proceedings of the 44th Annual Conference of the Cognitive Science Society (CogSci 2022) (pp. 1382-1388). Toronto, Canada: Cognitive Science Society.

    Abstract

    Demonstratives—simple referential devices like this and that—are linguistic universals, but their meaning varies cross-linguistically. In languages like English and Italian, demonstratives are thought to encode the referent’s distance from the producer (e.g., that one means “the one far away from me”), while in others, like Portuguese and Spanish, they encode relative distance from both producer and receiver (e.g., aquel means “the one far away from both of us”). Here we propose that demonstratives are also sensitive to the receiver’s focus of attention, hence requiring a deeper form of social cognition than previously thought. We provide initial empirical and computational evidence for this idea, suggesting that producers use demonstratives to redirect the receiver’s attention towards the intended referent, rather than only to indicate its physical distance.
  • Wolf, M. C. (2022). Spoken and written word processing: Effects of presentation modality and individual differences in experience to written language. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Wolf, M. C., Smith, A. C., Meyer, A. S., & Rowland, C. F. (2019). Modality effects in vocabulary acquisition. In A. K. Goel, C. M. Seifert, & C. Freksa (Eds.), Proceedings of the 41st Annual Meeting of the Cognitive Science Society (CogSci 2019) (pp. 1212-1218). Montreal, QB: Cognitive Science Society.

    Abstract

    It is unknown whether modality affects the efficiency with which humans learn novel word forms and their meanings, with previous studies reporting both written and auditory advantages. The current study implements controls whose absence in previous work likely offers explanation for such contradictory findings. In two novel word learning experiments, participants were trained and tested on pseudoword - novel object pairs, with controls on: modality of test, modality of meaning, duration of exposure and transparency of word form. In both experiments word forms were presented in either their written or spoken form, each paired with a pictorial meaning (novel object). Following a 20-minute filler task, participants were tested on their ability to identify the picture-word form pairs on which they were trained. A between subjects design generated four participant groups per experiment 1) written training, written test; 2) written training, spoken test; 3) spoken training, written test; 4) spoken training, spoken test. In Experiment 1 the written stimulus was presented for a time period equal to the duration of the spoken form. Results showed that when the duration of exposure was equal, participants displayed a written training benefit. Given words can be read faster than the time taken for the spoken form to unfold, in Experiment 2 the written form was presented for 300 ms, sufficient time to read the word yet 65% shorter than the duration of the spoken form. No modality effect was observed under these conditions, when exposure to the word form was equivalent. These results demonstrate, at least for proficient readers, that when exposure to the word form is controlled across modalities the efficiency with which word form-meaning associations are learnt does not differ. Our results therefore suggest that, although we typically begin as aural-only word learners, we ultimately converge on developing learning mechanisms that learn equally efficiently from both written and spoken materials.
  • Wolf, M. C., Muijselaar, M. M. L., Boonstra, A. M., & De Bree, E. H. (2019). The relationship between reading and listening comprehension: Shared and modality-specific components. Reading and Writing, 32(7), 1747-1767. doi:10.1007/s11145-018-9924-8.

    Abstract

    This study aimed to increase our understanding on the relationship between reading and listening comprehension. Both in comprehension theory and in educational practice, reading and listening comprehension are often seen as interchangeable, overlooking modality-specific aspects of them separately. Three questions were addressed. First, it was examined to what extent reading and listening comprehension comprise modality-specific, distinct skills or an overlapping, domain-general skill in terms of the amount of explained variance in one comprehension type by the opposite comprehension type. Second, general and modality-unique subskills of reading and listening comprehension were sought by assessing the contributions of the foundational skills word reading fluency, vocabulary, memory, attention, and inhibition to both comprehension types. Lastly, the practice of using either listening comprehension or vocabulary as a proxy of general comprehension was investigated. Reading and listening comprehension tasks with the same format were assessed in 85 second and third grade children. Analyses revealed that reading comprehension explained 34% of the variance in listening comprehension, and listening comprehension 40% of reading comprehension. Vocabulary and word reading fluency were found to be shared contributors to both reading and listening comprehension. None of the other cognitive skills contributed significantly to reading or listening comprehension. These results indicate that only part of the comprehension process is indeed domain-general and not influenced by the modality in which the information is provided. Especially vocabulary seems to play a large role in this domain-general part. The findings warrant a more prominent focus of modality-specific aspects of both reading and listening comprehension in research and education.
  • Won, S.-O., Hu, I., Kim, M.-Y., Bae, J.-M., Kim, Y.-M., & Byun, K.-S. (2009). Theory and practice of Sign Language interpretation. Pyeongtaek: Korea National College of Rehabilitation & Welfare.
  • Wood, N. (2009). Field recording for dummies. In A. Majid (Ed.), Field manual volume 12 (pp. V). Nijmegen: Max Planck Institute for Psycholinguistics.
  • Xiao, M., Kong, X., Liu, J., & Ning, J. (2009). TMBF: Bloom filter algorithms of time-dependent multi bit-strings for incremental set. In Proceedings of the 2009 International Conference on Ultra Modern Telecommunications & Workshops.

    Abstract

    The set is a widely used basic data structure. However, when it is used for large-scale data sets, the costs of storage, search, and transport become substantial. The Bloom filter uses a fixed-size bit string to represent the elements of a static set, which reduces storage space and makes the search cost a fixed constant. This time-space efficiency is achieved at the cost of a small probability of false positives in membership queries, but for many applications the space savings and fast lookups outweigh this drawback. The dynamic Bloom filter (DBF) supports concise representation and approximate membership queries of dynamic sets rather than static sets. It has been shown that DBF not only retains the advantages of the standard Bloom filter, but also behaves better when dealing with dynamic sets. This paper proposes a time-dependent multiple bit-strings Bloom filter (TMBF), which is rooted in DBF and targets dynamic incremental sets. TMBF uses multiple bit-strings in time order to represent a dynamically growing set and uses backward searching to test whether an element is in the set. Based on system logs from a real P2P file-sharing system, the evaluation shows a 20% reduction in search cost compared to DBF.
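    The time-ordered bit-strings and backward search described in this abstract can be sketched as follows. This is an illustrative reconstruction under assumed parameters (bits per string, number of hashes, per-string capacity), not the paper's implementation:

    ```python
    import hashlib

    class TMBFSketch:
        """Sketch of a time-ordered multi-bit-string Bloom filter for an
        incremental set: new elements go into the newest bit-string, and
        membership queries search the bit-strings backward, newest first."""

        def __init__(self, bits_per_string=1024, num_hashes=4, capacity=100):
            self.m = bits_per_string
            self.k = num_hashes
            self.capacity = capacity       # elements per bit-string before a new one is opened
            self.strings = [[0] * self.m]  # bit-strings in time order, oldest first
            self.count = 0                 # elements in the newest bit-string

        def _positions(self, item):
            # derive k bit positions from a hash of the item
            digest = hashlib.sha256(item.encode()).digest()
            return [int.from_bytes(digest[4 * i:4 * i + 4], "big") % self.m
                    for i in range(self.k)]

        def add(self, item):
            if self.count >= self.capacity:       # open a fresh bit-string in time order
                self.strings.append([0] * self.m)
                self.count = 0
            current = self.strings[-1]
            for pos in self._positions(item):
                current[pos] = 1
            self.count += 1

        def __contains__(self, item):
            positions = self._positions(item)
            # backward search: recently added elements are found quickly
            for bit_string in reversed(self.strings):
                if all(bit_string[p] for p in positions):
                    return True               # possibly present (small false-positive risk)
            return False                      # definitely absent
    ```

    As with any Bloom filter, lookups can return false positives but never false negatives; the backward search order means recently inserted elements are typically confirmed after inspecting only the newest bit-string.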
  • Yang, J. (2022). Discovering the units in language cognition: From empirical evidence to a computational model. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Yang, J., Van den Bosch, A., & Frank, S. L. (2022). Unsupervised text segmentation predicts eye fixations during reading. Frontiers in Artificial Intelligence, 5: 731615. doi:10.3389/frai.2022.731615.

    Abstract

    Words typically form the basis of psycholinguistic and computational linguistic studies about sentence processing. However, recent evidence shows the basic units during reading, i.e., the items in the mental lexicon, are not always words, but could also be sub-word and supra-word units. To recognize these units, human readers require a cognitive mechanism to learn and detect them. In this paper, we assume eye fixations during reading reveal the locations of the cognitive units, and that the cognitive units are analogous with the text units discovered by unsupervised segmentation models. We predict eye fixations by model-segmented units on both English and Dutch text. The results show the model-segmented units predict eye fixations better than word units. This finding suggests that the predictive performance of model-segmented units indicates their plausibility as cognitive units. The Less-is-Better (LiB) model, which finds the units that minimize both long-term and working memory load, offers advantages both in terms of prediction score and efficiency among alternative models. Our results also suggest that modeling the least-effort principle for the management of long-term and working memory can lead to inferring cognitive units. Overall, the study supports the theory that the mental lexicon stores not only words but also smaller and larger units, suggests that fixation locations during reading depend on these units, and shows that unsupervised segmentation models can discover these units.
  • Zavala, R. (2000). Multiple classifier systems in Akatek (Mayan). In G. Senft (Ed.), Systems of nominal classification (pp. 114-146). Cambridge University Press.
  • Zeller, J., Bylund, E., & Lewis, A. G. (2022). The parser consults the lexicon in spite of transparent gender marking: EEG evidence from noun class agreement processing in Zulu. Cognition, 226: 105148. doi:10.1016/j.cognition.2022.105148.

    Abstract

    In sentence comprehension, the parser in many languages has the option to use both the morphological form of a noun and its lexical representation when evaluating agreement. The additional step of consulting the lexicon incurs processing costs, and an important question is whether the parser takes that step even when the formal cues alone are sufficiently reliable to evaluate agreement. Our study addressed this question using electrophysiology in Zulu, a language where both grammatical gender and number features are reliably expressed formally by noun class prefixes, but only gender features are lexically specified. We observed reduced, more topographically focal LAN, and more frontally distributed alpha/beta power effects for gender compared to number agreement violations. These differences provide evidence that for gender mismatches, even though the formal cues are reliable, the parser nevertheless takes the additional step of consulting the noun's lexical representation, a step which is not available for number.

  • Zhang, Q., Zhou, Y., & Lou, H. (2022). The dissociation between age of acquisition and word frequency effects in Chinese spoken picture naming. Psychological Research, 86, 1918-1929. doi:10.1007/s00426-021-01616-0.

    Abstract

    This study aimed to examine the locus of age of acquisition (AoA) and word frequency (WF) effects in Chinese spoken picture naming, using a picture–word interference task. We conducted four experiments manipulating the properties of picture names (AoA in Experiments 1 and 2, while controlling WF; and WF in Experiments 3 and 4, while controlling AoA), and the relations between distractors and targets (semantic or phonological relatedness). Both Experiments 1 and 2 demonstrated AoA effects in picture naming; pictures of early acquired concepts were named faster than those acquired later. There was an interaction between AoA and semantic relatedness, but not between AoA and phonological relatedness, suggesting localisation of AoA effects at the stage of lexical access in picture naming. Experiments 3 and 4 demonstrated WF effects: pictures of high-frequency concepts were named faster than those of low-frequency concepts. WF interacted with both phonological and semantic relatedness, suggesting localisation of WF effects at multiple levels of picture naming, including lexical access and phonological encoding. Our findings show that AoA and WF effects exist in Chinese spoken word production and may arise at related processes of lexical selection.
  • Zhang, Y., & Yu, C. (2022). Examining real-time attention dynamics in parent-infant picture book reading. In J. Culbertson, A. Perfors, H. Rabagliati, & V. Ramenzoni (Eds.), Proceedings of the 44th Annual Conference of the Cognitive Science Society (CogSci 2022) (pp. 1367-1374). Toronto, Canada: Cognitive Science Society.

    Abstract

    Picture book reading is a common word-learning context in which parents repeatedly name objects to their child, and it has been found to facilitate early word learning. To learn the correct word-object mappings in a book-reading context, infants need to be able to link what they see with what they hear. However, given multiple objects on every book page, it is not clear how infants direct their attention to objects named by parents. The aim of the current study is to examine how infants mechanistically discover the correct word-object mappings during book reading in real time. We used head-mounted eye-tracking during parent-infant picture book reading and measured the infant's moment-by-moment visual attention to the named referent. We also examined how gesture cues provided by both the child and the parent may influence infants' attention to the named target. We found that although parents provided many object labels during book reading, infants were not able to attend to the named objects easily. However, their abilities to follow and use gestures to direct the other social partner's attention increased the chance of looking at the named target during parent naming.
  • Zhang, Y., Chen, C.-h., & Yu, C. (2019). Mechanisms of cross-situational learning: Behavioral and computational evidence. In Advances in Child Development and Behavior; vol. 56 (pp. 37-63).

    Abstract

    Word learning happens in everyday contexts with many words and many potential referents for those words in view at the same time. It is challenging for young learners to find the correct referent upon hearing an unknown word in the moment. This problem of referential uncertainty has been deemed the crux of early word learning (Quine, 1960). Recent empirical and computational studies have found support for a statistical solution to the problem termed cross-situational learning. Cross-situational learning allows learners to acquire word meanings across multiple exposures, even though each individual exposure is referentially uncertain. Recent empirical research shows that infants, children and adults rely on cross-situational learning to learn new words (Smith & Yu, 2008; Suanda, Mugwanya, & Namy, 2014; Yu & Smith, 2007). However, researchers have found evidence supporting two very different theoretical accounts of learning mechanisms: Hypothesis Testing (Gleitman, Cassidy, Nappa, Papafragou, & Trueswell, 2005; Markman, 1992) and Associative Learning (Frank, Goodman, & Tenenbaum, 2009; Yu & Smith, 2007). Hypothesis Testing is generally characterized as a form of learning in which a coherent hypothesis regarding a specific word-object mapping is formed, often in conceptually constrained ways. The hypothesis will then be either accepted or rejected with additional evidence. Proponents of the Associative Learning framework, by contrast, characterize learning as aggregating information over time through implicit associative mechanisms. A learner acquires the meaning of a word when the association between the word and the referent becomes relatively strong. In this chapter, we consider these two psychological theories in the context of cross-situational word-referent learning. By reviewing recent empirical and cognitive modeling studies, our goal is to deepen our understanding of the underlying word learning mechanisms by examining and comparing the two theoretical learning accounts.
  • Wu, S., Zhang, D., Li, X., Zhao, J., Sun, X., Shi, L., Mao, Y., Zhang, Y., & Jiang, F. (2022). Siblings and Early Childhood Development: Evidence from a Population-Based Cohort in Preschoolers from Shanghai. International Journal of Environmental Research and Public Health, 19(9): 5739. doi:10.3390/ijerph19095739.

    Abstract

    (1) Background: The current study aims to investigate the association between the presence of a sibling and early childhood development (ECD). (2) Methods: Data were obtained from a large-scale population-based cohort in Shanghai. Children were followed from three to six years old. Based on birth order, the sample was divided into four groups: single child, younger child, elder child, and single-elder transfer (transfer from single-child to elder-child). Psychosocial well-being and school readiness were assessed with the total difficulties score from the Strengths and Difficulties Questionnaire (SDQ) and the overall development score from the early Human Capability Index (eHCI), respectively. A multilevel model was conducted to evaluate the main effect of each sibling group and the group × age interaction effect on psychosocial well-being and school readiness. (3) Results: Across all measures, children in the younger child group presented with fewer psychosocial problems (β = −0.96, 95% CI: −1.44, −0.48, p < 0.001) and higher school readiness scores (β = 1.56, 95% CI: 0.61, 2.51, p = 0.001). No significant or only marginally significant differences were found between the elder group and the single-child group. Compared to the single-child group, the single-elder transfer group presented with slower development on both psychosocial well-being (Age × Group: β = 0.37, 95% CI: 0.18, 0.56, p < 0.001) and school readiness (Age × Group: β = −0.75, 95% CI: −1.10, −0.40, p < 0.001). The sibling-ECD effects did not differ between children from families of low versus high socioeconomic status. (4) Conclusion: The current study suggested that the presence of a sibling was not associated with worse development outcomes in general. Rather, children with an elder sibling are more likely to present with better ECD.
  • Zhao, J., Yu, Z., Sun, X., Wu, S., Zhang, J., Zhang, D., Zhang, Y., & Jiang, F. (2022). Association between screen time trajectory and early childhood development in children in China. JAMA Pediatrics, 176(8), 768-775. doi:10.1001/jamapediatrics.2022.1630.

    Abstract

    Importance: Screen time has become an integral part of children's daily lives. Nevertheless, the developmental consequences of screen exposure in young children remain unclear.

    Objective: To investigate the screen time trajectory from 6 to 72 months of age and its association with children's development at age 72 months in a prospective birth cohort.

    Design, setting, and participants: Women in Shanghai, China, who were at 34 to 36 gestational weeks and had an expected delivery date between May 2012 and July 2013 were recruited for this cohort study. Their children were followed up at 6, 9, 12, 18, 24, 36, 48, and 72 months of age. Children's screen time was classified into 3 groups at age 6 months: continued low (ie, stable amount of screen time), late increasing (ie, sharp increase in screen time at age 36 months), and early increasing (ie, large amount of screen time in early stages that remained stable after age 36 months). Cognitive development was assessed by specially trained research staff in a research clinic. Of 262 eligible mother-offspring pairs, 152 dyads had complete data regarding all variables of interest and were included in the analyses. Data were analyzed from September 2019 to November 2021.

    Exposures: Mothers reported screen times of children at 6, 9, 12, 18, 24, 36, 48, and 72 months of age.

    Main outcomes and measures: The cognitive development of children was evaluated using the Wechsler Intelligence Scale for Children, 4th edition, at age 72 months. Social-emotional development was measured by the Strengths and Difficulties Questionnaire, which was completed by the child's mother. The study described demographic characteristics, maternal mental health, child's temperament at age 6 months, and mental development at age 12 months by subgroups clustered by a group-based trajectory model. Group difference was examined by analysis of variance.

    Results: A total of 152 mother-offspring dyads were included in this study, including 77 girls (50.7%) and 75 boys (49.3%) (mean [SD] age of the mothers was 29.7 [3.3] years). Children's screen time trajectory from age 6 to 72 months was classified into 3 groups: continued low (110 [72.4%]), late increasing (17 [11.2%]), and early increasing (25 [16.4%]). Compared with the continued low group, the late increasing group had lower scores on the Full-Scale Intelligence Quotient (β coefficient, -8.23; 95% CI, -15.16 to -1.30; P < .05) and the General Ability Index (β coefficient, -6.42; 95% CI, -13.70 to 0.86; P = .08); the early increasing group presented with lower scores on the Full-Scale Intelligence Quotient (β coefficient, -6.68; 95% CI, -12.35 to -1.02; P < .05) and the Cognitive Proficiency Index (β coefficient, -10.56; 95% CI, -17.23 to -3.90; P < .01) and a higher total difficulties score (β coefficient, 2.62; 95% CI, 0.49-4.76; P < .05).

    Conclusions and relevance: This cohort study found that excessive screen time in early years was associated with poor cognitive and social-emotional development. This finding may be helpful in encouraging awareness among parents of the importance of onset and duration of children's screen time.
  • Zheng, X., & Lemhöfer, K. (2019). The “semantic P600” in second language processing: When syntax conflicts with semantics. Neuropsychologia, 127, 131-147. doi:10.1016/j.neuropsychologia.2019.02.010.

    Abstract

    In sentences like “the mouse that chased the cat was hungry”, the syntactically correct interpretation (the mouse chases the cat) is contradicted by semantic and pragmatic knowledge. Previous research has shown that L1 speakers sometimes base sentence interpretation on this type of knowledge (so-called “shallow” or “good-enough” processing). We made use of both behavioural and ERP measurements to investigate whether L2 learners differ from native speakers in the extent to which they engage in “shallow” syntactic processing. German learners of Dutch as well as Dutch native speakers read sentences containing relative clauses (as in the example above) for which the plausible thematic roles were or were not reversed, and made plausibility judgments. The results show that behaviourally, L2 learners had more difficulties than native speakers to discriminate plausible from implausible sentences. In the ERPs, we replicated the previously reported finding of a “semantic P600” for semantic reversal anomalies in native speakers, probably reflecting the effort to resolve the syntax-semantics conflict. In L2 learners, though, this P600 was largely attenuated and surfaced only in those trials that were judged correctly for plausibility. These results generally point at a more prevalent, but not exclusive occurrence of shallow syntactic processing in L2 learners.
  • Zhu, Z., Bastiaansen, M. C. M., Hakun, J. G., Petersson, K. M., Wang, S., & Hagoort, P. (2019). Semantic unification modulates N400 and BOLD signal change in the brain: A simultaneous EEG-fMRI study. Journal of Neurolinguistics, 52: 100855. doi:10.1016/j.jneuroling.2019.100855.

    Abstract

    Semantic unification during sentence comprehension has been associated with amplitude change of the N400 in event-related potential (ERP) studies, and with activation in the left inferior frontal gyrus (IFG) in functional magnetic resonance imaging (fMRI) studies. However, the specificity of this activation to semantic unification remains unknown. To more closely examine the brain processes involved in semantic unification, we employed simultaneous EEG-fMRI to time-lock the semantic-unification-related N400 change, and integrated trial-by-trial variation in both N400 and BOLD change beyond the condition-level BOLD change difference measured in traditional fMRI analyses. Participants read sentences in which semantic unification load was parametrically manipulated by varying cloze probability. Separately, ERP and fMRI results replicated previous findings, in that semantic unification load parametrically modulated the amplitude of the N400 and cortical activation. Integrated EEG-fMRI analyses revealed a different pattern, in which functional activity in the left IFG and bilateral supramarginal gyrus (SMG) was associated with N400 amplitude, with the left IFG activation and bilateral SMG activation being selective to the condition-level and trial-level of semantic unification load, respectively. By employing the integrated EEG-fMRI analyses, this study is among the first to shed light on how to integrate trial-level variation in language comprehension.
  • Zimianiti, E. (2022). Is semantic memory the winning component in second language teaching with Accelerative Integrated Method (AIM)? LingUU Journal, 6(1), 54-62.

    Abstract

    This paper constitutes a research proposal based on Rousse-Malpalt's (2019) dissertation, which extensively examines the effectiveness of the Accelerative Integrated Method (AIM) in second language (L2) learning. Although AIM has been found to be highly effective in comparison with non-implicit teaching methods, the reasons behind its success and effectiveness are as yet unknown. As Semantic Memory (SM) is the component of memory responsible for the conceptualization and storage of knowledge, this paper sets out to propose an investigation of its role in the learning process of AIM and to provide insights into why the embodied experience of learning with AIM is more effective than other methods. The tasks proposed for administration take into account how gestures relate to a learner's memorization process and Semantic Memory. Lastly, this paper presents a future research idea about the learning mechanisms of sign languages in people with hearing deficits and in a healthy population, aiming to indicate which brain mechanisms benefit from the AIM teaching method and to reveal brain functions important for SLA via AIM.
  • Zora, H., Riad, T., & Ylinen, S. (2019). Prosodically controlled derivations in the mental lexicon. Journal of Neurolinguistics, 52: 100856. doi:10.1016/j.jneuroling.2019.100856.

    Abstract

    Swedish morphemes are classified as prosodically specified or prosodically unspecified, depending on lexical or phonological stress, respectively. Here, we investigate the allomorphy of the suffix -(i)sk, which indicates the distinction between lexical and phonological stress; if attached to a lexically stressed morpheme, it takes a non-syllabic form (-sk), whereas if attached to a phonologically stressed morpheme, an epenthetic vowel is inserted (-isk). Using mismatch negativity (MMN), we explored the neural processing of this allomorphy across lexically stressed and phonologically stressed morphemes. In an oddball paradigm, participants were occasionally presented with congruent and incongruent derivations, created by the suffix -(i)sk, within the repetitive presentation of their monomorphemic stems. The results indicated that the congruent derivation of the lexically stressed stem elicited a larger MMN than the incongruent sequences of the same stem and the derivational suffix, whereas after the phonologically stressed stem a non-significant tendency towards an opposite pattern was observed. We argue that the significant MMN response to the congruent derivation in the lexical stress condition is in line with lexical MMN, indicating a holistic processing of the sequence of lexically stressed stem and derivational suffix. The enhanced MMN response to the incongruent derivation in the phonological stress condition, on the other hand, is suggested to reflect combinatorial processing of the sequence of phonologically stressed stem and derivational suffix. These findings bring a new aspect to the dual-system approach to neural processing of morphologically complex words, namely the specification of word stress.
  • Zora, H., Gussenhoven, C., Tremblay, A., & Liu, F. (2022). Editorial: Crosstalk between intonation and lexical tones: Linguistic, cognitive and neuroscience perspectives. Frontiers in Psychology, 13: 1101499. doi:10.3389/fpsyg.2022.1101499.

    Abstract

    The interplay between categorical and continuous aspects of the speech signal remains central and yet controversial in the fields of phonetics and phonology. The division between phonological abstractions and phonetic variations has been particularly relevant to the unraveling of diverse communicative functions of pitch in the domain of prosody. Pitch influences vocal communication in two major but fundamentally different ways, and lexical and intonational tones exquisitely capture these functions. Lexical tone contrasts convey lexical meanings as well as derivational meanings at the word level and are grammatically encoded as discrete structures. Intonational tones, on the other hand, signal post-lexical meanings at the phrasal level and typically allow gradient pragmatic variations. Since categorical and gradient uses of pitch are ubiquitous and closely intertwined in their physiological and psychological processes, further research is warranted for a more detailed understanding of their structural and functional characterisations. This Research Topic addresses this matter from a wide range of perspectives, including first and second language acquisition, speech production and perception, and structural and functional diversity, working with distinct languages and experimental measures. In the following, we provide a short overview of the contributions submitted to this topic.

    Additional information

    also published as book chapter (2023)
  • Zormpa, E., Meyer, A. S., & Brehm, L. (2019). Slow naming of pictures facilitates memory for their names. Psychonomic Bulletin & Review, 26(5), 1675-1682. doi:10.3758/s13423-019-01620-x.

    Abstract

    Speakers remember their own utterances better than those of their interlocutors, suggesting that language production is beneficial to memory. This may be partly explained by a generation effect: The act of generating a word is known to lead to a memory advantage (Slamecka & Graf, 1978). In earlier work, we showed a generation effect for recognition of images (Zormpa, Brehm, Hoedemaker, & Meyer, 2019). Here, we tested whether the recognition of their names would also benefit from name generation. Testing whether picture naming improves memory for words was our primary aim, as it serves to clarify whether the representations affected by generation are visual or conceptual/lexical. A secondary aim was to assess the influence of processing time on memory. Fifty-one participants named pictures in three conditions: after hearing the picture name (identity condition), backward speech, or an unrelated word. A day later, recognition memory was tested in a yes/no task. Memory in the backward speech and unrelated conditions, which required generation, was superior to memory in the identity condition, which did not require generation. The time taken by participants for naming was a good predictor of memory, such that words that took longer to be retrieved were remembered better. Importantly, that was the case only when generation was required: In the no-generation (identity) condition, processing time was not related to recognition memory performance. This work has shown that generation affects conceptual/lexical representations, making an important contribution to the understanding of the relationship between memory and language.
  • Zormpa, E., Brehm, L., Hoedemaker, R. S., & Meyer, A. S. (2019). The production effect and the generation effect improve memory in picture naming. Memory, 27(3), 340-352. doi:10.1080/09658211.2018.1510966.

    Abstract

    The production effect (better memory for words read aloud than words read silently) and the picture superiority effect (better memory for pictures than words) both improve item memory in a picture naming task (Fawcett, J. M., Quinlan, C. K., & Taylor, T. L. (2012). Interplay of the production and picture superiority effects: A signal detection analysis. Memory (Hove, England), 20(7), 655–666. doi:10.1080/09658211.2012.693510). Because picture naming requires coming up with an appropriate label, the generation effect (better memory for generated than read words) may contribute to the latter effect. In two forced-choice memory experiments, we tested the role of generation in a picture naming task on later recognition memory. In Experiment 1, participants named pictures silently or aloud with the correct name or an unreadable label superimposed. We observed a generation effect, a production effect, and an interaction between the two. In Experiment 2, unreliable labels were included to ensure full picture processing in all conditions. In this experiment, we observed a production and a generation effect but no interaction, implying the effects are dissociable. This research demonstrates the separable roles of generation and production in picture naming and their impact on memory. As such, it informs the link between memory and language production and has implications for memory asymmetries between language production and comprehension.

    Additional information

    pmem_a_1510966_sm9257.pdf
  • Zuidema, W., & Fitz, H. (2019). Key issues and future directions: Models of human language and speech processing. In P. Hagoort (Ed.), Human language: From genes and brain to behavior (pp. 353-358). Cambridge, MA: MIT Press.
  • Zwitserlood, I. (2009). Het Corpus NGT. Levende Talen Magazine, 6, 44-45.

    Abstract

    The Corpus NGT
  • Zwitserlood, I. (2009). Het Corpus NGT en de dagelijkse lespraktijk (1). Levende Talen Magazine, 8, 40-41.
