Publications

  • Warner, N., Fountain, A., & Tucker, B. V. (2009). Cues to perception of reduced flaps. Journal of the Acoustical Society of America, 125(5), 3317-3327. doi:10.1121/1.3097773.

    Abstract

    Natural, spontaneous speech (and even quite careful speech) often shows extreme reduction in many speech segments, even resulting in apparent deletion of consonants. Where the flap ([ɾ]) allophone of /t/ and /d/ is expected in American English, one frequently sees an approximant-like or even vocalic pattern, rather than a clear flap. Still, the /t/ or /d/ is usually perceived, suggesting the acoustic characteristics of a reduced flap are sufficient for perception of a consonant. This paper identifies several acoustic characteristics of reduced flaps based on previous acoustic research (size of intensity dip, consonant duration, and F4 valley) and presents phonetic identification data for continua that manipulate these acoustic characteristics of reduction. The results indicate that the most obvious types of acoustic variability seen in natural flaps do affect listeners' percept of a consonant, but not sufficiently to completely account for the percept. Listeners are affected by the acoustic characteristics of consonant reduction, but they are also very skilled at evaluating variability along the acoustic dimensions that realize reduction.

  • Warner, N., Luna, Q., Butler, L., & Van Volkinburg, H. (2009). Revitalization in a scattered language community: Problems and methods from the perspective of Mutsun language revitalization. International Journal of the Sociology of Language, 198, 135-148. doi:10.1515/IJSL.2009.031.

    Abstract

    This article addresses revitalization of a dormant language whose prospective speakers live in scattered geographical areas. In comparison to increasing the usage of an endangered language, revitalizing a dormant language (one with no living speakers) requires different methods to gain knowledge of the language. Language teaching for a dormant language with a scattered community presents different problems from other teaching situations. In this article, we discuss the types of tasks that must be accomplished for dormant-language revitalization, with particular focus on development of teaching materials. We also address the role of computer technologies, arguing that each use of technology should be evaluated for how effectively it increases fluency. We discuss methods for achieving semi-fluency for the first new speakers of a dormant language, and for spreading the language through the community.
  • Watson, L. M., Wong, M. M. K., Vowles, J., Cowley, S. A., & Becker, E. B. E. (2018). A simplified method for generating Purkinje cells from human-induced pluripotent stem cells. The Cerebellum, 17(4), 419-427. doi:10.1007/s12311-017-0913-2.

    Abstract

    The establishment of a reliable model for the study of Purkinje cells in vitro is of particular importance, given their central role in cerebellar function and pathology. Recent advances in induced pluripotent stem cell (iPSC) technology offer the opportunity to generate multiple neuronal subtypes for study in vitro. However, to date, only a handful of studies have generated Purkinje cells from human pluripotent stem cells, with most of these protocols proving challenging to reproduce. Here, we describe a simplified method for the reproducible generation of Purkinje cells from human iPSCs. After 21 days of treatment with factors selected to mimic the self-inductive properties of the isthmic organiser—insulin, fibroblast growth factor 2 (FGF2), and the transforming growth factor β (TGFβ)-receptor blocker SB431542—hiPSCs could be induced to form En1-positive cerebellar progenitors at efficiencies of up to 90%. By day 35 of differentiation, subpopulations of cells representative of the two cerebellar germinal zones, the rhombic lip (Atoh1-positive) and ventricular zone (Ptf1a-positive), could be identified, with the latter giving rise to cells positive for Purkinje cell progenitor-specific markers, including Lhx5, Kirrel2, Olig2 and Skor2. Further maturation was observed following dissociation and co-culture of these cerebellar progenitors with mouse cerebellar cells, with 10% of human cells staining positive for the Purkinje cell marker calbindin by day 70 of differentiation. This protocol, which incorporates modifications designed to enhance cell survival and maturation and improve the ease of handling, should serve to make existing models more accessible, in order to enable future advances in the field.

  • Waymel, A., Friedrich, P., Bastian, P.-A., Forkel, S. J., & Thiebaut de Schotten, M. (2020). Anchoring the human olfactory system within a functional gradient. NeuroImage, 216: 116863. doi:10.1016/j.neuroimage.2020.116863.

    Abstract

    Margulies et al. (2016) demonstrated the existence of at least five independent functional connectivity gradients in the human brain. However, it is unclear how these functional gradients might link to anatomy. The dual origin theory proposes that differences in cortical cytoarchitecture originate from two trends of progressive differentiation between the different layers of the cortex, referred to as the hippocampocentric and olfactocentric systems. When conceptualising the functional connectivity gradients within the evolutionary framework of the dual origin theory, the first gradient likely represents the hippocampocentric system anatomically. Here we expand on this concept and demonstrate that the fifth gradient likely links to the olfactocentric system. We describe the anatomy of the latter as well as the evidence to support this hypothesis. Together, the first and fifth gradients might help to model the dual origin theory of the human brain and inform brain models and pathologies.
  • Weber, K., & Indefrey, P. (2009). Syntactic priming in German–English bilinguals during sentence comprehension. Neuroimage, 46, 1164-1172. doi:10.1016/j.neuroimage.2009.03.040.

    Abstract

    A longstanding question in bilingualism is whether syntactic information is shared between the two language processing systems. We used an fMRI repetition suppression paradigm to investigate syntactic priming in reading comprehension in German–English late-acquisition bilinguals. In comparison to conventional subtraction analyses in bilingual experiments, repetition suppression has the advantage of being able to detect neuronal populations that are sensitive to properties that are shared by consecutive stimuli. In this study, we manipulated the syntactic structure between prime and target sentences. A sentence with a passive sentence structure in English was preceded either by a passive or by an active sentence in English or German. We looked for repetition suppression effects in left inferior frontal, left precentral and left middle temporal regions of interest. These regions were defined by a contrast of all non-target sentences in German and English versus the baseline of sentence-format consonant strings. We found decreases in activity (repetition suppression effects) in these regions of interest following the repetition of syntactic structure from the first to the second language and within the second language.
    Moreover, a separate behavioural experiment using a word-by-word reading paradigm similar to the fMRI experiment showed faster reading times for primed compared to unprimed English target sentences regardless of whether they were preceded by an English or a German sentence of the same structure.
    We conclude that there is interaction between the language processing systems and that at least some syntactic information is shared between a bilingual's languages with similar syntactic structures.

  • Weber, A. (2009). The role of linguistic experience in lexical recognition [Abstract]. Journal of the Acoustical Society of America, 125, 2759.

    Abstract

    Lexical recognition is typically slower in L2 than in L1. Part of the difficulty stems from insufficiently precise processing of L2 phonemes. Consequently, L2 listeners fail to eliminate candidate words that L1 listeners can exclude from competing for recognition. For instance, the inability to distinguish /r/ from /l/ in rocket and locker makes both words possible candidates for Japanese listeners when hearing their onset (e.g., Cutler, Weber, and Otake, 2006). The L2 disadvantage can, however, be dispelled: For L2 listeners, but not L1 listeners, L2 speech from a non-native talker with the same language background is known to be as intelligible as L2 speech from a native talker (e.g., Bent and Bradlow, 2003). A reason for this may be that L2 listeners have ample experience with segmental deviations that are characteristic of their own accent. On this account, only phonemic deviations that are typical of the listeners’ own accent will cause spurious lexical activation in L2 listening (e.g., English magic pronounced as megic for Dutch listeners). In this talk, I will present evidence from cross-modal priming studies with a variety of L2 listener groups, showing how the processing of phonemic deviations is accent-specific but withstands fine phonetic differences.
  • Weekes, B. S., Abutalebi, J., Mak, H.-K.-F., Borsa, V., Soares, S. M. P., Chiu, P. W., & Zhang, L. (2018). Effect of monolingualism and bilingualism in the anterior cingulate cortex: a proton magnetic resonance spectroscopy study in two centers. Letras de Hoje, 53(1), 5-12. doi:10.15448/1984-7726.2018.1.30954.

    Abstract

    Reports of an advantage of bilingualism on brain structure in young adult participants are inconsistent. Abutalebi et al. (2012) reported more efficient monitoring of conflict during the Flanker task in young bilinguals compared to young monolingual speakers. The present study compared young adult (mean age = 24) Cantonese-English bilinguals in Hong Kong and young adult monolingual speakers. We expected (a) differences in metabolites in neural tissue to result from bilingual experience, as measured by 1H-MRS at 3T, (b) correlations between metabolic levels and Flanker conflict and interference effects, and (c) different associations in bilingual and monolingual speakers. We found evidence of metabolic differences in the ACC due to bilingualism, specifically in the metabolites Cho, Cr, Glx and NAA. However, we found no significant correlations between metabolic levels and conflict and interference effects, and no significant evidence of differential relationships between bilingual and monolingual speakers. Furthermore, we found no evidence of significant differences in the mean size of conflict and interference effects between groups, i.e., no bilingual advantage. Lower levels of Cho, Cr, Glx and NAA in bilingual adults compared to monolingual adults suggest that the brains of bilinguals develop greater adaptive control during conflict monitoring because of their extensive bilingual experience.
  • Weissbart, H., Kandylaki, K. D., & Reichenbach, T. (2020). Cortical tracking of surprisal during continuous speech comprehension. Journal of Cognitive Neuroscience, 32, 155-166. doi:10.1162/jocn_a_01467.

    Abstract

    Speech comprehension requires rapid online processing of a continuous acoustic signal to extract structure and meaning. Previous studies on sentence comprehension have found neural correlates of the predictability of a word given its context, as well as of the precision of such a prediction. However, they have focused on single sentences and on particular words in those sentences. Moreover, they compared neural responses to words with low and high predictability, as well as with low and high precision. However, in speech comprehension, a listener hears many successive words whose predictability and precision vary over a large range. Here, we show that cortical activity in different frequency bands tracks word surprisal in continuous natural speech and that this tracking is modulated by precision. We obtain these results through quantifying surprisal and precision from naturalistic speech using a deep neural network and through relating these speech features to EEG responses of human volunteers acquired during auditory story comprehension. We find significant cortical tracking of surprisal at low frequencies, including the delta band as well as in the higher frequency beta and gamma bands, and observe that the tracking is modulated by the precision. Our results pave the way to further investigate the neurobiology of natural speech comprehension.
  • Wells, J. B., Christiansen, M. H., Race, D. S., Acheson, D. J., & MacDonald, M. C. (2009). Experience and sentence processing: Statistical learning and relative clause comprehension. Cognitive Psychology, 58(2), 250-271. doi:10.1016/j.cogpsych.2008.08.002.

    Abstract

    Many explanations of the difficulties associated with interpreting object relative clauses appeal to the demands that object relatives make on working memory. MacDonald and Christiansen [MacDonald, M. C., & Christiansen, M. H. (2002). Reassessing working memory: Comment on Just and Carpenter (1992) and Waters and Caplan (1996). Psychological Review, 109, 35-54] pointed to variations in reading experience as a source of differences, arguing that the unique word order of object relatives makes their processing more difficult and more sensitive to the effects of previous experience than the processing of subject relatives. This hypothesis was tested in a large-scale study manipulating reading experiences of adults over several weeks. The group receiving relative clause experience increased reading speeds for object relatives more than for subject relatives, whereas a control experience group did not. The reading time data were compared to performance of a computational model given different amounts of experience. The results support claims for experience-based individual differences and an important role for statistical learning in sentence comprehension processes.
  • Whelan, L., Dockery, A., Stephenson, K. A. J., Zhu, J., Kopčić, E., Post, I. J. M., Khan, M., Corradi, Z., Wynne, N., O’ Byrne, J. J., Duignan, E., Silvestri, G., Roosing, S., Cremers, F. P. M., Keegan, D. J., Kenna, P. F., & Farrar, G. J. (2023). Detailed analysis of an enriched deep intronic ABCA4 variant in Irish Stargardt disease patients. Scientific Reports, 13: 9380. doi:10.1038/s41598-023-35889-9.

    Abstract

    Over 15% of probands in a large cohort of more than 1500 inherited retinal degeneration patients present with a clinical diagnosis of Stargardt disease (STGD1), a recessive form of macular dystrophy caused by biallelic variants in the ABCA4 gene. Participants were clinically examined and underwent either target capture sequencing of the exons and some pathogenic intronic regions of ABCA4, sequencing of the entire ABCA4 gene or whole genome sequencing. ABCA4 c.4539 + 2028C > T, p.[=,Arg1514Leufs*36] is a pathogenic deep intronic variant that results in a retina-specific 345-nucleotide pseudoexon inclusion. Through analysis of the Irish STGD1 cohort, 25 individuals across 18 pedigrees harbour ABCA4 c.4539 + 2028C > T and another pathogenic variant. This includes, to the best of our knowledge, the only two homozygous patients identified to date. This provides important evidence of variant pathogenicity for this deep intronic variant, highlighting the value of homozygotes for variant interpretation. Fifteen other heterozygous occurrences of this variant have been reported in patients globally, indicating significant enrichment in the Irish population. We provide detailed genetic and clinical characterization of these patients, illustrating that ABCA4 c.4539 + 2028C > T is a variant of mild to intermediate severity. These results have important implications for unresolved STGD1 patients globally, with approximately 10% of the population in some western countries claiming Irish heritage. This study exemplifies that detection and characterization of founder variants is a diagnostic imperative.

  • Whitaker, K., & Guest, O. (2020). #bropenscience is broken science: Kirstie Whitaker and Olivia Guest ask how open ‘open science’ really is. The Psychologist, 33, 34-37.
  • Willems, R. M., Toni, I., Hagoort, P., & Casasanto, D. (2009). Body-specific motor imagery of hand actions: Neural evidence from right- and left-handers. Frontiers in Human Neuroscience, 3: 39. doi:10.3389/neuro.09.039.2009.

    Abstract

    If motor imagery uses neural structures involved in action execution, then the neural correlates of imagining an action should differ between individuals who tend to execute the action differently. Here we report fMRI data showing that motor imagery is influenced by the way people habitually perform motor actions with their particular bodies; that is, motor imagery is ‘body-specific’ (Casasanto, 2009). During mental imagery for complex hand actions, activation of cortical areas involved in motor planning and execution was left-lateralized in right-handers but right-lateralized in left-handers. We conclude that motor imagery involves the generation of an action plan that is grounded in the participant’s motor habits, not just an abstract representation at the level of the action’s goal. People with different patterns of motor experience form correspondingly different neurocognitive representations of imagined actions.
  • Willems, R. M., & Hagoort, P. (2009). Broca's region: Battles are not won by ignoring half of the facts. Trends in Cognitive Sciences, 13(3), 101. doi:10.1016/j.tics.2008.12.001.
  • Willems, R. M., Ozyurek, A., & Hagoort, P. (2009). Differential roles for left inferior frontal and superior temporal cortex in multimodal integration of action and language. Neuroimage, 47, 1992-2004. doi:10.1016/j.neuroimage.2009.05.066.

    Abstract

    Several studies indicate that both posterior superior temporal sulcus/middle temporal gyrus (pSTS/MTG) and left inferior frontal gyrus (LIFG) are involved in integrating information from different modalities. Here we investigated the respective roles of these two areas in integration of action and language information. We exploited the fact that the semantic relationship between language and different forms of action (i.e. co-speech gestures and pantomimes) is radically different. Speech and co-speech gestures are always produced together, and gestures are not unambiguously understood without speech. On the contrary, pantomimes are not necessarily produced together with speech and can be easily understood without speech. We presented speech together with these two types of communicative hand actions in matching or mismatching combinations to manipulate semantic integration load. Left and right pSTS/MTG were only involved in semantic integration of speech and pantomimes. Left IFG on the other hand was involved in integration of speech and co-speech gestures as well as of speech and pantomimes. Effective connectivity analyses showed that depending upon the semantic relationship between language and action, LIFG modulates activation levels in left pSTS.

    This suggests that integration in pSTS/MTG involves the matching of two input streams for which there is a relatively stable common object representation, whereas integration in LIFG is better characterized as the on-line construction of a new and unified representation of the input streams. In conclusion, pSTS/MTG and LIFG are differentially involved in multimodal integration, crucially depending upon the semantic relationship between the input streams.

  • Willems, R. M., & Hagoort, P. (2009). Hand preference influences neural correlates of action observation. Brain Research, 1269, 90-104. doi:10.1016/j.brainres.2009.02.057.

    Abstract

    It has been argued that we map observed actions onto our own motor system. Here we added to this issue by investigating whether hand preference influences the neural correlates of action observation of simple, essentially meaningless hand actions. Such an influence would argue for an intricate neural coupling between action production and action observation, which goes beyond effects of motor repertoire or explicit motor training, as has been suggested before. Indeed, parts of the human motor system exhibited a close coupling between action production and action observation. Ventral premotor and inferior and superior parietal cortices showed differential activation for left- and right-handers that was similar during action production as well as during action observation. This suggests that mapping observed actions onto the observer's own motor system is a core feature of action observation - at least for actions that do not have a clear goal or meaning. Basic differences in the way we act upon the world are not only reflected in neural correlates of action production, but can also influence the brain basis of action observation.
  • Willems, R. M., Nastase, S. A., & Milivojevic, B. (2020). Narratives for Neuroscience. Trends in Neurosciences, 43(5), 271-273. doi:10.1016/j.tins.2020.03.003.

    Abstract

    People organize and convey their thoughts according to narratives. However, neuroscientists are often reluctant to incorporate narrative stimuli into their experiments. We argue that narratives deserve wider adoption in human neuroscience because they tap into the brain’s native machinery for representing the world and provide rich variability for testing hypotheses.
  • Wilson, B., Spierings, M., Ravignani, A., Mueller, J. L., Mintz, T. H., Wijnen, F., Van der Kant, A., Smith, K., & Rey, A. (2020). Non‐adjacent dependency learning in humans and other animals. Topics in Cognitive Science, 12(3), 843-858. doi:10.1111/tops.12381.

    Abstract

    Learning and processing natural language requires the ability to track syntactic relationships between words and phrases in a sentence, which are often separated by intervening material. These nonadjacent dependencies can be studied using artificial grammar learning paradigms and structured sequence processing tasks. These approaches have been used to demonstrate that human adults, infants and some nonhuman animals are able to detect and learn dependencies between nonadjacent elements within a sequence. However, learning nonadjacent dependencies appears to be more cognitively demanding than detecting dependencies between adjacent elements, and only occurs in certain circumstances. In this review, we discuss different types of nonadjacent dependencies in language and in artificial grammar learning experiments, and how these differences might impact learning. We summarize different types of perceptual cues that facilitate learning, by highlighting the relationship between dependent elements bringing them closer together either physically, attentionally, or perceptually. Finally, we review artificial grammar learning experiments in human adults, infants, and nonhuman animals, and discuss how similarities and differences observed across these groups can provide insights into how language is learned across development and how these language‐related abilities might have evolved.
  • Winsvold, B. S., Palta, P., Eising, E., Page, C. M., The International Headache Genetics Consortium, Van den Maagdenberg, A. M. J. M., Palotie, A., & Zwart, J.-A. (2018). Epigenetic DNA methylation changes associated with headache chronification: A retrospective case-control study. Cephalalgia, 38(2), 312-322. doi:10.1177/0333102417690111.

    Abstract

    Background

    The biological mechanisms of headache chronification are poorly understood. We aimed to identify changes in DNA methylation associated with the transformation from episodic to chronic headache.
    Methods

    Participants were recruited from the population-based Norwegian HUNT Study. Thirty-six female headache patients who transformed from episodic to chronic headache between baseline and follow-up 11 years later were matched against 35 controls with episodic headache. DNA methylation was quantified at 485,000 CpG sites, and changes in methylation level at these sites were compared between cases and controls by linear regression analysis. Data were analyzed in two stages (Stages 1 and 2) and in a combined meta-analysis.
    Results

    None of the top 20 CpG sites identified in Stage 1 replicated in Stage 2 after multiple testing correction. In the combined meta-analysis the strongest associated CpG sites were related to SH2D5 and NPTX2, two brain-expressed genes involved in the regulation of synaptic plasticity. Functional enrichment analysis pointed to processes including calcium ion binding and estrogen receptor pathways.
    Conclusion

    In this first genome-wide study of DNA methylation in headache chronification, several potentially implicated loci and processes were identified. The study exemplifies the use of prospectively collected population cohorts to search for epigenetic mechanisms of disease.
  • Winter, B., Perlman, M., & Majid, A. (2018). Vision dominates in perceptual language: English sensory vocabulary is optimized for usage. Cognition, 179, 213-220. doi:10.1016/j.cognition.2018.05.008.

    Abstract

    Researchers have suggested that the vocabularies of languages are oriented towards the communicative needs of language users. Here, we provide evidence demonstrating that the higher frequency of visual words in a large variety of English corpora is reflected in greater lexical differentiation—a greater number of unique words—for the visual domain in the English lexicon. In comparison, sensory modalities that are less frequently talked about, particularly taste and smell, show less lexical differentiation. In addition, we show that even though sensory language can be expected to change across historical time and between contexts of use (e.g., spoken language versus fiction), the pattern of visual dominance is a stable property of the English language. Thus, we show that, for perceptual experiences, precisely those semantic domains that are more frequently talked about are also more lexically differentiated across the board. This correlation between type and token frequencies suggests that the sensory lexicon of English is geared towards communicative efficiency.
  • Witteman, J., Karaseva, E., Schiller, N. O., & McQueen, J. M. (2023). What does successful L2 vowel acquisition depend on? A conceptual replication. In R. Skarnitzl, & J. Volín (Eds.), Proceedings of the 20th International Congress of the Phonetic Sciences (ICPhS 2023) (pp. 928-931). Prague: Guarant International.

    Abstract

    It has been suggested that individual variation in vowel compactness of the native language (L1) and the distance between L1 vowels and vowels in the second language (L2) predict successful L2 vowel acquisition. Moreover, general articulatory skills have been proposed to account for variation in vowel compactness. In the present work, we conceptually replicate a previous study to test these hypotheses with a large sample size, a new language pair and a new vowel pair. We find evidence that individual variation in L1 vowel compactness has opposing effects for two different vowels. We do not find evidence that individual variation in L1 compactness is explained by general articulatory skills. We conclude that the results found previously might be specific to sub-groups of L2 learners and/or specific sub-sets of vowel pairs.
  • Wittenburg, P., Lautenschlager, M., Thiemann, H., Baldauf, C., & Trilsbeek, P. (2020). FAIR Practices in Europe. Data Intelligence, 2(1-2), 257-263. doi:10.1162/dint_a_00048.

    Abstract

    Institutions driving fundamental research at the cutting edge, such as those of the Max Planck Society (MPS), took steps to optimize data management and stewardship in order to address new scientific questions. In this paper we selected three MPS institutes from the areas of humanities, environmental sciences and natural sciences as examples to illustrate the efforts to integrate large amounts of data from collaborators worldwide, creating a data space that is ready to be exploited for new insights based on data-intensive science methods. For this integration, the typical challenges of fragmentation, poor quality and social differences had to be overcome. In all three cases, the core pillars were well-managed repositories driven by scientific needs and harmonization principles agreed upon in the community. It is not surprising that these principles are closely aligned with what have now become the FAIR principles. The FAIR principles confirm the correctness of earlier decisions, and their clear formulation identified the gaps which the projects need to address.
  • Wnuk, E., Laophairoj, R., & Majid, A. (2020). Smell terms are not rara: A semantic investigation of odor vocabulary in Thai. Linguistics, 58(4), 937-966. doi:10.1515/ling-2020-0009.
  • Woensdregt, M., & Dingemanse, M. (2020). Other-initiated repair can facilitate the emergence of compositional language. In A. Ravignani, C. Barbieri, M. Flaherty, Y. Jadoul, E. Lattenkamp, H. Little, M. Martins, K. Mudd, & T. Verhoef (Eds.), The Evolution of Language: Proceedings of the 13th International Conference (Evolang13) (pp. 474-476). Nijmegen: The Evolution of Language Conferences.
  • Wong, M. M. K., Hoekstra, S. D., Vowles, J., Watson, L. M., Fuller, G., Németh, A. H., Cowley, S. A., Ansorge, O., Talbot, K., & Becker, E. B. E. (2018). Neurodegeneration in SCA14 is associated with increased PKCγ kinase activity, mislocalization and aggregation. Acta Neuropathologica Communications, 6: 99. doi:10.1186/s40478-018-0600-7.

    Abstract

    Spinocerebellar ataxia type 14 (SCA14) is a subtype of the autosomal dominant cerebellar ataxias that is characterized by slowly progressive cerebellar dysfunction and neurodegeneration. SCA14 is caused by mutations in the PRKCG gene, encoding protein kinase C gamma (PKCγ). Despite the identification of 40 distinct disease-causing mutations in PRKCG, the pathological mechanisms underlying SCA14 remain poorly understood. Here we report the molecular neuropathology of SCA14 in post-mortem cerebellum and in human patient-derived induced pluripotent stem cells (iPSCs) carrying two distinct SCA14 mutations in the C1 domain of PKCγ, H36R and H101Q. We show that endogenous expression of these mutations results in the cytoplasmic mislocalization and aggregation of PKCγ in both patient iPSCs and cerebellum. PKCγ aggregates were not efficiently targeted for degradation. Moreover, mutant PKCγ was found to be hyper-activated, resulting in increased substrate phosphorylation. Together, our findings demonstrate that a combination of loss-of-function and gain-of-function mechanisms is likely to underlie the pathogenesis of SCA14, caused by mutations in the C1 domain of PKCγ. Importantly, SCA14 patient iPSCs were found to accurately recapitulate pathological features observed in post-mortem SCA14 cerebellum, underscoring their potential as relevant disease models and their promise as future drug discovery tools.

  • Xiao, M., Kong, X., Liu, J., & Ning, J. (2009). TMBF: Bloom filter algorithms of time-dependent multi bit-strings for incremental set. In Proceedings of the 2009 International Conference on Ultra Modern Telecommunications & Workshops.

    Abstract

    Sets are widely used as a basic data structure. However, when a set holds large-scale data, the costs of storage, search and transport become substantial. The Bloom filter uses a fixed-size bit-string to represent the elements of a static set, which reduces storage space and makes search cost a fixed constant. This time-space efficiency comes at the cost of a small probability of false positives in membership queries; for many applications, however, the space savings and constant lookup time outweigh this drawback. The dynamic Bloom filter (DBF) supports concise representation and approximate membership queries of dynamic rather than static sets. It has been shown that DBF not only retains the advantages of the standard Bloom filter but also behaves better when dealing with dynamic sets. This paper proposes a time-dependent multiple bit-string Bloom filter (TMBF), which is rooted in the DBF and targets dynamic incremental sets. TMBF uses multiple bit-strings in time order to represent a dynamically growing set and uses backward searching to test whether an element is in the set. Based on system logs from a real P2P file-sharing system, the evaluation shows a 20% reduction in search cost compared to DBF.
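    The core mechanics described in this abstract (a sequence of fixed-size bit-strings appended in time order, with membership tested by scanning newest-first) can be sketched as follows. This is a rough illustration, not the authors' implementation: the class name, sizes, hash scheme, and the capacity-based rollover rule are all assumptions.

```python
import hashlib


class TimeOrderedBloomFilter:
    """Sketch of a TMBF-style filter: multiple bit-strings kept in time
    order; new elements go into the newest bit-string, and lookups scan
    backward from the newest (most recently added elements first)."""

    def __init__(self, m=1024, k=4, capacity=100):
        self.m = m                    # bits per bit-string
        self.k = k                    # hash functions per element
        self.capacity = capacity     # elements per bit-string before rollover
        self.strings = [[0] * m]     # bit-strings, oldest first
        self.count = 0               # elements inserted into the newest string

    def _positions(self, item):
        # Derive k bit positions from salted SHA-256 digests (illustrative).
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.m

    def add(self, item):
        # Start a fresh bit-string once the newest one reaches capacity,
        # so the set can grow incrementally without raising the false
        # positive rate of any single bit-string.
        if self.count >= self.capacity:
            self.strings.append([0] * self.m)
            self.count = 0
        bits = self.strings[-1]
        for pos in self._positions(item):
            bits[pos] = 1
        self.count += 1

    def __contains__(self, item):
        # Backward search: check the most recent bit-strings first.
        positions = list(self._positions(item))
        for bits in reversed(self.strings):
            if all(bits[p] for p in positions):
                return True
        return False
```

    Backward search pays off when queries tend to target recently inserted elements, as in the P2P workload mentioned above: the scan can stop at the first matching bit-string instead of always checking all of them.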
  • Xiong, K., Verdonschot, R. G., & Tamaoka, K. (2020). The time course of brain activity in reading identical cognates: An ERP study of Chinese - Japanese bilinguals. Journal of Neurolinguistics, 55: 100911. doi:10.1016/j.jneuroling.2020.100911.

    Abstract

    Previous studies suggest that bilinguals' lexical access is language non-selective, especially for orthographically identical translation equivalents across languages (i.e., identical cognates). The present study investigated how such words (e.g., a word meaning "school" in both Chinese and Japanese) are processed in the (late) Chinese-Japanese bilingual brain. Using an L2-Japanese lexical decision task, both behavioral and electrophysiological data were collected. Reaction times (RTs), as well as the N400 component, showed that cognates are more easily recognized than non-cognates. Additionally, an early component (i.e., the N250), potentially reflecting activation at the word-form level, was also found. Cognates elicited a more positive N250 than non-cognates in the frontal region, indicating that the cognate facilitation effect occurred at an early stage of word-form processing for languages with logographic scripts.
  • Yang, J., Van den Bosch, A., & Frank, S. L. (2020). Less is Better: A cognitively inspired unsupervised model for language segmentation. In M. Zock, E. Chersoni, A. Lenci, & E. Santus (Eds.), Proceedings of the Workshop on the Cognitive Aspects of the Lexicon ( 28th International Conference on Computational Linguistics) (pp. 33-45). Stroudsburg: Association for Computational Linguistics.

    Abstract

    Language users process utterances by segmenting them into many cognitive units, which vary in their sizes and linguistic levels. Although we can do such unitization/segmentation easily, its cognitive mechanism is still not clear. This paper proposes an unsupervised model, Less-is-Better (LiB), to simulate the human cognitive process with respect to language unitization/segmentation. LiB follows the principle of least effort and aims to build a lexicon which minimizes the number of unit tokens (alleviating the effort of analysis) and number of unit types (alleviating the effort of storage) at the same time on any given corpus. LiB’s workflow is inspired by empirical cognitive phenomena. The design makes the mechanism of LiB cognitively plausible and the computational requirement light-weight. The lexicon generated by LiB performs the best among different types of lexicons (e.g. ground-truth words) both from an information-theoretical view and a cognitive view, which suggests that the LiB lexicon may be a plausible proxy of the mental lexicon.

    Additional information

    full text via ACL website
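    LiB's least-effort objective, minimizing the number of unit tokens and unit types at the same time, can be illustrated with a toy cost function (the greedy longest-match segmenter and function names are our own sketch, not the authors' algorithm):

```python
def segment(text, lexicon):
    """Greedy longest-match segmentation of an unspaced string;
    falls back to single characters when no lexicon unit matches."""
    units, i = [], 0
    while i < len(text):
        for j in range(len(text), i, -1):
            if text[i:j] in lexicon or j == i + 1:
                units.append(text[i:j])
                i = j
                break
    return units

def lib_cost(corpus, lexicon):
    """LiB-style cost: unit tokens (effort of analysis) + unit types (effort of storage)."""
    tokens = [u for text in corpus for u in segment(text, lexicon)]
    return len(tokens) + len(set(tokens))
```

    On a toy corpus, a lexicon with reusable word-like units scores lower than a character-level one; this joint pressure on tokens and types is what drives LiB toward a compact, mental-lexicon-like inventory.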
  • Yang, W., Chan, A., Chang, F., & Kidd, E. (2020). Four-year-old Mandarin-speaking children’s online comprehension of relative clauses. Cognition, 196: 104103. doi:10.1016/j.cognition.2019.104103.

    Abstract

    A core question in language acquisition is whether children’s syntactic processing is experience-dependent and language-specific, or whether it is governed by abstract, universal syntactic machinery. We address this question by presenting corpus and on-line processing data from children learning Mandarin Chinese, a language that has been important in debates about the universality of parsing processes. The corpus data revealed that two different relative clause constructions in Mandarin are differentially used to modify syntactic subjects and objects. In the experiment, 4-year-old children’s eye-movements were recorded as they listened to the two RC construction types (e.g., Can you pick up the pig that pushed the sheep?). A permutation analysis showed that children’s ease of comprehension was closely aligned with the distributional frequencies, suggesting syntactic processing preferences are shaped by the input experience of these constructions.

    Additional information

    1-s2.0-S001002771930277X-mmc1.pdf
  • Yang, J., Cai, Q., & Tian, X. (2020). How do we segment text? Two-stage chunking operation in reading. eNeuro, 7(3): ENEURO.0425-19.2020. doi:10.1523/ENEURO.0425-19.2020.

    Abstract

    Chunking in language comprehension is a process that segments continuous linguistic input into smaller chunks stored in the reader’s mental lexicon. Effective chunking during reading facilitates disambiguation and enhances comprehension efficiency. However, the chunking mechanisms remain elusive, especially in reading, where information arrives simultaneously and some writing systems, such as Chinese, lack explicit cues for marking boundaries. What chunking mechanisms mediate the reading of text that contains hierarchical information? We investigated this question by manipulating the lexical status of chunks at distinct levels in four-character Chinese strings, including the two-character local chunk and the four-character global chunk. Male and female human participants were asked to make lexical decisions on these strings in a behavioral experiment, followed by a passive reading task while their electroencephalography (EEG) was recorded. The behavioral results showed that the lexical decision time for lexicalized two-character local chunks was influenced by the lexical status of the four-character global chunk, but not vice versa, indicating that the processing of global chunks takes priority over local chunks. The EEG results revealed that familiar lexical chunks were detected simultaneously at both levels and further processed in a different temporal order: the onset of lexical access for global chunks was earlier than that for local chunks. These consistent results suggest a two-stage chunking operation in reading: the simultaneous detection of familiar lexical chunks at multiple levels around 100 ms, followed by recognition of chunks with global precedence.
  • Yang, J., Zhu, H., & Tian, X. (2018). Group-level multivariate analysis in EasyEEG toolbox: Examining the temporal dynamics using topographic responses. Frontiers in Neuroscience, 12: 468. doi:10.3389/fnins.2018.00468.

    Abstract

    Electroencephalography (EEG) provides high-temporal-resolution cognitive information from non-invasive recordings. However, one common practice, analyzing only a subset of sensors in ERP analysis, can hardly provide holistic and precise dynamic results. Selecting or grouping subsets of sensors may also be subject to selection bias and multiple-comparison problems, and is further complicated by individual differences in group-level analysis. More importantly, changes in neural generators and variations in response magnitude from the same neural sources are difficult to separate, which limits the capacity to test different aspects of cognitive hypotheses. We introduce EasyEEG, a toolbox that includes several multivariate analysis methods to directly test cognitive hypotheses based on topographic responses that include data from all sensors. These multivariate methods can investigate effects in the dimensions of response magnitude and topographic patterns separately using data in the sensor space, thereby enabling the assessment of neural response dynamics. The concise workflow and modular design provide user-friendly and programmer-friendly features. Users of all levels can benefit from the open-source, free EasyEEG to obtain a straightforward solution for efficient processing of EEG data and a complete pipeline from raw data to final results for publication.
  • Yoshihara, M., Nakayama, M., Verdonschot, R. G., & Hino, Y. (2020). The influence of orthography on speech production: Evidence from masked priming in word-naming and picture-naming tasks. Journal of Experimental Psychology: Learning, Memory, and Cognition, 46(8), 1570-1589. doi:10.1037/xlm0000829.

    Abstract

    In a masked priming word-naming task, a facilitation due to the initial-segmental sound overlap for 2-character kanji prime-target pairs was affected by certain orthographic properties (Yoshihara, Nakayama, Verdonschot, & Hino, 2017). That is, the facilitation that was due to the initial mora overlap occurred only when the mora was the whole pronunciation of their initial kanji characters (i.e., match pairs; e.g., /ka-se.ki/-/ka-rjo.ku/). When the shared initial mora was only a part of the kanji characters' readings, however, there was no facilitation (i.e., mismatch pairs; e.g., /ha.tu-a.N/-/ha.ku-bu.tu/). In the present study, we used a masked priming picture-naming task to investigate whether the previous results were relevant only when the orthography of targets is visually presented. In Experiment 1, the main findings of our word-naming task were fully replicated in a picture-naming task. In Experiments 2 and 3, the absence of facilitation for the mismatch pairs was confirmed with a new set of stimuli. On the other hand, a significant facilitation was observed for the match pairs that shared the 2 initial morae (in Experiment 4), which was again consistent with the results of our word-naming study. These results suggest that the orthographic properties constrain the phonological expression of masked priming for kanji words across 2 tasks that are likely to differ in how phonology is retrieved. Specifically, we propose that the orthography of a word is activated online and constrains the phonological encoding processes in these tasks.
  • Zhang, Y., Amatuni, A., Crain, E., & Yu, C. (2020). Seeking meaning: Examining a cross-situational solution to learn action verbs using human simulation paradigm. In S. Denison, M. Mack, Y. Xu, & B. C. Armstrong (Eds.), Proceedings of the 42nd Annual Meeting of the Cognitive Science Society (CogSci 2020) (pp. 2854-2860). Montreal, QC: Cognitive Science Society.

    Abstract

    To acquire the meaning of a verb, language learners not only need to find the correct mapping between a specific verb and an action or event in the world, but also infer the underlying relational meaning that the verb encodes. Most verb naming instances in naturalistic contexts are highly ambiguous as many possible actions can be embedded in the same scenario and many possible verbs can be used to describe those actions. To understand whether learners can find the correct verb meaning from referentially ambiguous learning situations, we conducted three experiments using the Human Simulation Paradigm with adult learners. Our results suggest that although finding the right verb meaning from one learning instance is hard, there is a statistical solution to this problem. When provided with multiple verb learning instances all referring to the same verb, learners are able to aggregate information across situations and gradually converge to the correct semantic space. Even in cases where they may not guess the exact target verb, they can still discover the right meaning by guessing a similar verb that is semantically close to the ground truth.
  • Zhang, Y., Ding, R., Frassinelli, D., Tuomainen, J., Klavinskis-Whiting, S., & Vigliocco, G. (2023). The role of multimodal cues in second language comprehension. Scientific Reports, 13: 20824. doi:10.1038/s41598-023-47643-2.

    Abstract

    In face-to-face communication, multimodal cues such as prosody, gestures, and mouth movements can play a crucial role in language processing. While several studies have addressed how these cues contribute to native (L1) language processing, their impact on non-native (L2) comprehension is largely unknown. Comprehension of naturalistic language by L2 comprehenders may be supported by the presence of (at least some) multimodal cues, as these provide correlated and convergent information that may aid linguistic processing. However, it is also the case that multimodal cues may be less used by L2 comprehenders because linguistic processing is more demanding than for L1 comprehenders, leaving more limited resources for the processing of multimodal cues. In this study, we investigated how L2 comprehenders use multimodal cues in naturalistic stimuli (while participants watched videos of a speaker), as measured by electrophysiological responses (N400) to words, and whether there are differences between L1 and L2 comprehenders. We found that prosody, gestures, and informative mouth movements each reduced the N400 in L2, indexing easier comprehension. Nevertheless, L2 participants showed weaker effects for each cue compared to L1 comprehenders, with the exception of meaningful gestures and informative mouth movements. These results show that L2 comprehenders focus on specific multimodal cues – meaningful gestures that support meaningful interpretation and mouth movements that enhance the acoustic signal – while using multimodal cues to a lesser extent than L1 comprehenders overall.

    Additional information

    supplementary materials
  • Wu, S., Zhao, J., de Villiers, J., Liu, X. L., Rolfhus, E., Sun, X. N., Li, X. Y., Pan, H., Wang, H. W., Zhu, Q., Dong, Y. Y., Zhang, Y. T., & Jiang, F. (2023). Prevalence, co-occurring difficulties, and risk factors of developmental language disorder: First evidence for Mandarin-speaking children in a population-based study. The Lancet Regional Health - Western Pacific, 34: 100713. doi:10.1016/j.lanwpc.2023.100713.

    Abstract

    Background: Developmental language disorder (DLD) is a condition that significantly affects children's achievement but has been understudied. We aim to estimate the prevalence of DLD in Shanghai, compare the co-occurrence of difficulties between children with DLD and those with typical development (TD), and investigate the early risk factors for DLD.

    Methods: We estimated DLD prevalence using data from a population-based survey with a cluster random sampling design in Shanghai, China. A subsample of children (aged 5-6 years) received an onsite evaluation, and each child was categorized as TD or DLD. The proportions of children with socio-emotional behavior (SEB) difficulties, low non-verbal IQ (NVIQ), and poor school readiness were calculated among children with TD and DLD. We used multiple imputation to address the missing values of risk factors. Univariate and multivariate regression models adjusted with sampling weights were used to estimate the correlation of each risk factor with DLD.

    Findings: Of 1082 children who were approached for the onsite evaluation, 974 (90.0%) completed the language ability assessments, of whom 74 met the criteria for DLD, resulting in a prevalence of 8.5% (95% CI 6.3-11.5) when adjusted with sampling weights. Compared with TD children, children with DLD had higher rates of concurrent difficulties, including SEB (total difficulties score at-risk: 156 [17.3%] of 900 TD vs. 28 [37.8%] of 74 DLD, p < 0.0001), low NVIQ (3 [0.3%] of 900 TD vs. 8 [10.8%] of 74 DLD, p < 0.0001), and poor school readiness (71 [7.9%] of 900 TD vs. 13 [17.6%] of 74 DLD, p = 0.0040). After accounting for all other risk factors, a higher risk of DLD was associated with a lack of parent-child interaction diversity (adjusted odds ratio [aOR] = 3.08, 95% CI = 1.29-7.37; p = 0.012) and lower kindergarten levels (compared to demonstration and first level: third level (aOR = 6.15, 95% CI = 1.92-19.63; p = 0.0020)).

    Interpretation: The prevalence of DLD and its co-occurrence with other difficulties suggest the need for further attention. Family and kindergarten factors were found to contribute to DLD, suggesting that multi-sector coordinated efforts are needed to better identify and serve DLD populations at home, in schools, and in clinical settings.

    Funding: The study was supported by Shanghai Municipal Education Commission (No. 2022you1-2, D1502), the Innovative Research Team of High-level Local Universities in Shanghai (No. SHSMU-ZDCX20211900), Shanghai Municipal Health Commission (No.GWV-10.1-XK07), and the National Key Research and Development Program of China (No. 2022YFC2705201).
  • Zheng, X., Roelofs, A., & Lemhöfer, K. (2020). Language selection contributes to intrusion errors in speaking: Evidence from picture naming. Bilingualism: Language and Cognition, 23, 788-800. doi:10.1017/S1366728919000683.

    Abstract

    Bilinguals usually select the right language to speak for the particular context they are in, but sometimes the nontarget language intrudes. Despite a large body of research into language selection and language control, it remains unclear where intrusion errors originate from. These errors may be due to incorrect selection of the nontarget language at the conceptual level, or be a consequence of erroneous word selection (despite correct language selection) at the lexical level. We examined the former possibility in two language switching experiments using a manipulation that supposedly affects language selection on the conceptual level, namely whether the conversational language context was associated with the target language (congruent) or with the alternative language (incongruent) on a trial. Both experiments showed that language intrusion errors occurred more often in incongruent than in congruent contexts, providing converging evidence that language selection during concept preparation is one driving force behind language intrusion.
  • Zheng, X., Roelofs, A., Erkan, H., & Lemhöfer, K. (2020). Dynamics of inhibitory control during bilingual speech production: An electrophysiological study. Neuropsychologia, 140: 107387. doi:10.1016/j.neuropsychologia.2020.107387.

    Abstract

    Bilingual speakers have to control their languages to avoid interference, which may be achieved by enhancing the target language and/or inhibiting the nontarget language. Previous research suggests that bilinguals use inhibition (e.g., Jackson et al., 2001), which should be reflected in the N2 component of the event-related potential (ERP) in the EEG. In the current study, we investigated the dynamics of inhibitory control by measuring the N2 during language switching and repetition in bilingual picture naming. Participants had to name pictures in Dutch or English depending on the cue. A run of same-language trials could be short (two or three trials) or long (five or six trials). We assessed whether RTs and N2 changed over the course of same-language runs, and at a switch between languages. Results showed that speakers named pictures more quickly late as compared to early in a run of same-language trials. Moreover, they made a language switch more quickly after a long run than after a short run. This run-length effect was only present in the first language (L1), not in the second language (L2). In ERPs, we observed a widely distributed switch effect in the N2, which was larger after a short run than after a long run. This effect was only present in the L2, not in the L1, although the difference was not significant between languages. In contrast, the N2 was not modulated during a same-language run. Our results suggest that the nontarget language is inhibited at a switch, but not during the repeated use of the target language.

    Additional information

    Data availability

    Files private

    Request files
  • Zheng, X., Roelofs, A., Farquhar, J., & Lemhöfer, K. (2018). Monitoring of language selection errors in switching: Not all about conflict. PLoS One, 13(11): e0200397. doi:10.1371/journal.pone.0200397.

    Abstract

    Although bilingual speakers are very good at selectively using one language rather than another, sometimes language selection errors occur. To investigate how bilinguals monitor their speech errors and control their languages in use, we recorded event-related potentials (ERPs) in unbalanced Dutch-English bilingual speakers in a cued language-switching task. We tested the conflict-based monitoring model of Nozari and colleagues by investigating the error-related negativity (ERN) and comparing the effects of the two switching directions (i.e., to the first language, L1 vs. to the second language, L2). Results show that the speakers made more language selection errors when switching from their L2 to the L1 than vice versa. In the EEG, we observed a robust ERN effect following language selection errors compared to correct responses, reflecting monitoring of speech errors. Most interestingly, the ERN effect was enlarged when the speakers were switching to their L2 (less conflict) compared to switching to the L1 (more conflict). Our findings do not support the conflict-based monitoring model. We discuss an alternative account in terms of error prediction and reinforcement learning.
  • Zheng, X., Roelofs, A., & Lemhöfer, K. (2018). Language selection errors in switching: language priming or cognitive control? Language, Cognition and Neuroscience, 33(2), 139-147. doi:10.1080/23273798.2017.1363401.

    Abstract

    Although bilingual speakers are very good at selectively using one language rather than another, sometimes language selection errors occur. We examined the relative contribution of top-down cognitive control and bottom-up language priming to these errors. Unbalanced Dutch-English bilinguals named pictures and were cued to switch between languages under time pressure. We also manipulated the number of same-language trials before a switch (long vs. short runs). Results show that speakers made more language selection errors when switching from their second language (L2) to the first language (L1) than vice versa. Furthermore, they made more errors when switching to the L1 after a short compared to a long run of L2 trials. In the reverse switching direction (L1 to L2), run length had no effect. These findings are most compatible with an account of language selection errors that assigns a strong role to top-down processes of cognitive control.

    Additional information

    plcp_a_1363401_sm2537.docx
  • Zioga, I., Weissbart, H., Lewis, A. G., Haegens, S., & Martin, A. E. (2023). Naturalistic spoken language comprehension is supported by alpha and beta oscillations. The Journal of Neuroscience, 43(20), 3718-3732. doi:10.1523/JNEUROSCI.1500-22.2023.

    Abstract

    Brain oscillations are prevalent in all species and are involved in numerous perceptual operations. α oscillations are thought to facilitate processing through the inhibition of task-irrelevant networks, while β oscillations are linked to the putative reactivation of content representations. Can the proposed functional role of α and β oscillations be generalized from low-level operations to higher-level cognitive processes? Here we address this question focusing on naturalistic spoken language comprehension. Twenty-two (18 female) Dutch native speakers listened to stories in Dutch and French while MEG was recorded. We used dependency parsing to identify three dependency states at each word: the number of (1) newly opened dependencies, (2) dependencies that remained open, and (3) resolved dependencies. We then constructed forward models to predict α and β power from the dependency features. Results showed that dependency features predict α and β power in language-related regions beyond low-level linguistic features. Left temporal, fundamental language regions are involved in language comprehension in α, while frontal and parietal, higher-order language regions, and motor regions are involved in β. Critically, α- and β-band dynamics seem to subserve language comprehension tapping into syntactic structure building and semantic composition by providing low-level mechanistic operations for inhibition and reactivation processes. Because of the temporal similarity of the α-β responses, their potential functional dissociation remains to be elucidated. Overall, this study sheds light on the role of α and β oscillations during naturalistic spoken language comprehension, providing evidence for the generalizability of these dynamics from perceptual to complex linguistic processes.
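    The three per-word dependency states used as predictors can be illustrated on a toy head-index encoding of a parse (a hypothetical sketch; the study derived these counts from automatic dependency parses of the stories):

```python
def dependency_states(heads):
    """heads[i] = index of word i's head (-1 for the root).
    For each word position t, count dependencies that are
    (newly opened at t, still open across t, resolved at t),
    treating each arc as spanning min(i, head)..max(i, head)."""
    arcs = [(min(i, h), max(i, h)) for i, h in enumerate(heads) if h >= 0]
    states = []
    for t in range(len(heads)):
        opened = sum(1 for a, b in arcs if a == t)       # arcs starting here
        still_open = sum(1 for a, b in arcs if a < t < b)  # arcs spanning this word
        resolved = sum(1 for a, b in arcs if b == t)     # arcs closing here
        states.append((opened, still_open, resolved))
    return states
```

    For a toy sentence like "the dog barks" with heads [1, 2, -1], each word opens or resolves one arc; per-word feature vectors of this kind are what the forward models regress alpha and beta power onto.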
  • Zoefel, B., Ten Oever, S., & Sack, A. T. (2018). The involvement of endogenous neural oscillations in the processing of rhythmic input: More than a regular repetition of evoked neural responses. Frontiers in Neuroscience, 12: 95. doi:10.3389/fnins.2018.00095.

    Abstract

    It is undisputed that presenting a rhythmic stimulus leads to a measurable brain response that follows the rhythmic structure of this stimulus. What is still debated, however, is the question whether this brain response exclusively reflects a regular repetition of evoked responses, or whether it also includes entrained oscillatory activity. Here we systematically present evidence in favor of an involvement of entrained neural oscillations in the processing of rhythmic input while critically pointing out which questions still need to be addressed before this evidence could be considered conclusive. In this context, we also explicitly discuss the potential functional role of such entrained oscillations, suggesting that these stimulus-aligned oscillations reflect, and serve as, predictive processes, an idea often only implicitly assumed in the literature.
  • Zora, H., Rudner, M., & Montell Magnusson, A. (2020). Concurrent affective and linguistic prosody with the same emotional valence elicits a late positive ERP response. European Journal of Neuroscience, 51(11), 2236-2249. doi:10.1111/ejn.14658.

    Abstract

    Change in linguistic prosody generates a mismatch negativity response (MMN), indicating neural representation of linguistic prosody, while change in affective prosody generates a positive response (P3a), reflecting its motivational salience. However, the neural response to concurrent affective and linguistic prosody is unknown. The present paper investigates the integration of these two prosodic features in the brain by examining the neural response to separate and concurrent processing by electroencephalography (EEG). A spoken pair of Swedish words—[ˈfɑ́ːsɛn] phase and [ˈfɑ̀ːsɛn] damn—that differed in emotional semantics due to linguistic prosody was presented to 16 subjects in an angry and neutral affective prosody using a passive auditory oddball paradigm. Acoustically matched pseudowords—[ˈvɑ́ːsɛm] and [ˈvɑ̀ːsɛm]—were used as controls. Following the constructionist concept of emotions, accentuating the conceptualization of emotions based on language, it was hypothesized that concurrent affective and linguistic prosody with the same valence—angry [ˈfɑ̀ːsɛn] damn—would elicit a unique late EEG signature, reflecting the temporal integration of affective voice with emotional semantics of prosodic origin. In accordance, linguistic prosody elicited an MMN at 300–350 ms, and affective prosody evoked a P3a at 350–400 ms, irrespective of semantics. Beyond these responses, concurrent affective and linguistic prosody evoked a late positive component (LPC) at 820–870 ms in frontal areas, indicating the conceptualization of affective prosody based on linguistic prosody. This study provides evidence that the brain does not only distinguish between these two functions of prosody but also integrates them based on language and experience.
  • Zora, H., Wester, J. M., & Csépe, V. (2023). Predictions about prosody facilitate lexical access: Evidence from P50/N100 and MMN components. International Journal of Psychophysiology, 194: 112262. doi:10.1016/j.ijpsycho.2023.112262.

    Abstract

    Research into the neural foundation of perception asserts a model where top-down predictions modulate the bottom-up processing of sensory input. Despite becoming increasingly influential in cognitive neuroscience, the precise account of this predictive coding framework remains debated. In this study, we aim to contribute to this debate by investigating how predictions about prosody facilitate speech perception, and to shed light especially on lexical access influenced by simultaneous predictions in different domains, inter alia, prosodic and semantic. Using a passive auditory oddball paradigm, we examined neural responses to prosodic changes, leading to a semantic change as in Dutch nouns canon [ˈkaːnɔn] ‘cannon’ vs kanon [kaːˈnɔn] ‘canon’, and used acoustically identical pseudowords as controls. Results from twenty-eight native speakers of Dutch (age range 18–32 years) indicated an enhanced P50/N100 complex to prosodic change in pseudowords as well as an MMN response to both words and pseudowords. The enhanced P50/N100 response to pseudowords is claimed to indicate that all relevant auditory information is still processed by the brain, whereas the reduced response to words might reflect the suppression of information that has already been encoded. The MMN response to pseudowords and words, on the other hand, is best justified by the unification of previously established prosodic representations with sensory and semantic input respectively. This pattern of results is in line with the predictive coding framework acting on multiple levels and is of crucial importance to indicate that predictions about linguistic prosodic information are utilized by the brain as early as 50 ms.
  • Zormpa, E., Meyer, A. S., & Brehm, L. (2023). In conversation, answers are remembered better than the questions themselves. Journal of Experimental Psychology: Learning, Memory, and Cognition, 49(12), 1971-1988. doi:10.1037/xlm0001292.

    Abstract

    Language is used in communicative contexts to identify and successfully transmit new information that should be later remembered. In three studies, we used question–answer pairs, a naturalistic device for focusing information, to examine how properties of conversations inform later item memory. In Experiment 1, participants viewed three pictures while listening to a recorded question–answer exchange between two people about the locations of two of the displayed pictures. In a memory recognition test conducted online a day later, participants recognized the names of pictures that served as answers more accurately than the names of pictures that appeared as questions. This suggests that this type of focus indeed boosts memory. In Experiment 2, participants listened to the same items embedded in declarative sentences. There was a reduced memory benefit for the second item, confirming the role of linguistic focus on later memory beyond a simple serial-position effect. In Experiment 3, two participants asked and answered the same questions about objects in a dialogue. Here, answers continued to receive a memory benefit, and this focus effect was accentuated by language production such that information-seekers remembered the answers to their questions better than information-givers remembered the questions they had been asked. Combined, these studies show how people’s memory for conversation is modulated by the referential status of the items mentioned and by the roles of the conversation participants.
  • Zuidema, W., French, R. M., Alhama, R. G., Ellis, K., O'Donnell, T. J., Sainburg, T., & Gentner, T. Q. (2020). Five ways in which computational modeling can help advance cognitive science: Lessons from artificial grammar learning. Topics in Cognitive Science, 12(3), 925-941. doi:10.1111/tops.12474.

    Abstract

    There is a rich tradition of building computational models in cognitive science, but modeling, theoretical, and experimental research are not as tightly integrated as they could be. In this paper, we show that computational techniques—even simple ones that are straightforward to use—can greatly facilitate designing, implementing, and analyzing experiments, and generally help lift research to a new level. We focus on the domain of artificial grammar learning, and we give five concrete examples in this domain for (a) formalizing and clarifying theories, (b) generating stimuli, (c) visualization, (d) model selection, and (e) exploring the hypothesis space.
  • Zwitserlood, I. (2009). Het Corpus NGT. Levende Talen Magazine, 6, 44-45.

    Abstract

    The Corpus NGT
  • Zwitserlood, I. (2009). Het Corpus NGT en de dagelijkse lespraktijk (1). Levende Talen Magazine, 8, 40-41.
