Publications

  • Warner, N., & Weber, A. (2001). Perception of epenthetic stops. Journal of Phonetics, 29(1), 53-87. doi:10.1006/jpho.2001.0129.

    Abstract

    In processing connected speech, listeners must parse a highly variable signal. We investigate processing of a particular type of production variability, namely epenthetic stops between nasals and obstruents. Using a phoneme monitoring task and a dictation task, we test listeners' perception of epenthetic stops (which are not part of the string of segments intended by the speaker). We confirm that the epenthetic stop perceived is the one predicted by articulatory accounts of how such stops are produced, and that the likelihood of an epenthetic stop being perceived as a real stop is related to the strength of acoustic cues in the signal. We show that the probability of listeners mis-parsing epenthetic stops as real is influenced by language-specific syllable structure constraints, and depends on processing demands. We further show, through reaction time data, that even when epenthetic stops are perceived, they impose a greater processing load than stops which were intended by the speaker. These results show that processing of phonetic variability is affected by several factors, including language-specific phonology, even though the mis-timing of articulations that creates epenthetic stops is universally possible.
  • Warner, N., Jongman, A., Cutler, A., & Mücke, D. (2001). The phonological status of Dutch epenthetic schwa. Phonology, 18, 387-420. doi:10.1017/S0952675701004213.

    Abstract

    In this paper, we use articulatory measures to determine whether Dutch schwa epenthesis is an abstract phonological process or a concrete phonetic process depending on articulatory timing. We examine tongue position during /l/ before underlying schwa and epenthetic schwa and in coda position. We find greater tip raising before both types of schwa, indicating light /l/ before schwa and dark /l/ in coda position. We argue that the ability of epenthetic schwa to condition the /l/ alternation shows that Dutch schwa epenthesis is an abstract phonological process involving insertion of some unit, and cannot be accounted for within Articulatory Phonology.
  • Warner, N., Jongman, A., Mücke, D., & Cutler, A. (2001). The phonological status of schwa insertion in Dutch: An EMA study. In B. Maassen, W. Hulstijn, R. Kent, H. Peters, & P. van Lieshout (Eds.), Speech motor control in normal and disordered speech: 4th International Speech Motor Conference (pp. 86-89). Nijmegen: Vantilt.

    Abstract

    Articulatory data are used to address the question of whether Dutch schwa insertion is a phonological or a phonetic process. By investigating tongue tip raising and dorsal lowering, we show that /l/ when it appears before inserted schwa is a light /l/, just as /l/ before an underlying schwa is, and unlike the dark /l/ before a consonant in non-insertion productions of the same words. The fact that inserted schwa can condition the light/dark /l/ alternation shows that schwa insertion involves the phonological insertion of a segment rather than phonetic adjustments to articulations.
  • Warner, N., & Arai, T. (2001). The role of the mora in the timing of spontaneous Japanese speech. The Journal of the Acoustical Society of America, 109, 1144-1156. doi:10.1121/1.1344156.

    Abstract

    This study investigates whether the mora is used in controlling timing in Japanese speech, or is instead a structural unit in the language not involved in timing. Unlike most previous studies of mora-timing in Japanese, this article investigates timing in spontaneous speech. Predictability of word duration from number of moras is found to be much weaker than in careful speech. Furthermore, the number of moras predicts word duration only slightly better than number of segments. Syllable structure also has a significant effect on word duration. Finally, comparison of the predictability of whole words and arbitrarily truncated words shows better predictability for truncated words, which would not be possible if the truncated portion were compensating for remaining moras. The results support an accumulative model of variance with a final lengthening effect, and do not indicate the presence of any compensation related to mora-timing. It is suggested that the rhythm of Japanese derives from several factors about the structure of the language, not from durational compensation.
  • Warren, C. M., Tona, K. D., Ouwekerk, L., Van Paridon, J., Poletiek, F. H., Bosch, J. A., & Nieuwenhuis, S. (2019). The neuromodulatory and hormonal effects of transcutaneous vagus nerve stimulation as evidenced by salivary alpha amylase, salivary cortisol, pupil diameter, and the P3 event-related potential. Brain Stimulation, 12(3), 635-642. doi:10.1016/j.brs.2018.12.224.

    Abstract

    Background

    Transcutaneous vagus nerve stimulation (tVNS) is a new, non-invasive technique being investigated as an intervention for a variety of clinical disorders, including epilepsy and depression. It is thought to exert its therapeutic effect by increasing central norepinephrine (NE) activity, but the evidence supporting this notion is limited.
    Objective

    In order to test for an impact of tVNS on psychophysiological and hormonal indices of noradrenergic function, we applied tVNS in concert with assessment of salivary alpha amylase (SAA) and cortisol, pupil size, and electroencephalography (EEG) recordings.
    Methods

    Across three experiments, we applied real and sham tVNS to 61 healthy participants while they performed a set of simple stimulus-discrimination tasks. Before and after the task, as well as during one break, participants provided saliva samples and had their pupil size recorded. EEG was recorded throughout the task. The target for tVNS was the cymba conchae, which is heavily innervated by the auricular branch of the vagus nerve. Sham stimulation was applied to the ear lobe.
    Results

    P3 amplitude was not affected by tVNS (Experiment 1A: N=24; Experiment 1B: N=20; Bayes factor supporting null model=4.53), nor was pupil size (Experiment 2: N=16; interaction of treatment and time: p=0.79). However, tVNS increased SAA (Experiments 1A and 2: N=25) and attenuated the decline of salivary cortisol compared to sham (Experiment 2: N=17), as indicated by significant interactions involving treatment and time (p=.023 and p=.040, respectively).
    Conclusion

    These findings suggest that tVNS modulates hormonal indices but not psychophysiological indices of noradrenergic function.
  • Wassenaar, M., & Hagoort, P. (2001). Het matchen van zinnen bij plaatjes door Broca afasiepatiënten: Een hersenpotentiaal studie [Matching sentences to pictures in Broca's aphasia patients: A brain potential study]. Afasiologie, 23, 122-126.
  • Wassenaar, M., Hagoort, P., & Brown, C. M. (1997). Syntactic ERP effects in Broca's aphasics with agrammatic comprehension. Brain and Language, 60, 61-64. doi:10.1006/brln.1997.1911.
  • Watson, L. M., Wong, M. M. K., Vowles, J., Cowley, S. A., & Becker, E. B. E. (2018). A simplified method for generating Purkinje cells from human-induced pluripotent stem cells. The Cerebellum, 17(4), 419-427. doi:10.1007/s12311-017-0913-2.

    Abstract

    The establishment of a reliable model for the study of Purkinje cells in vitro is of particular importance, given their central role in cerebellar function and pathology. Recent advances in induced pluripotent stem cell (iPSC) technology offer the opportunity to generate multiple neuronal subtypes for study in vitro. However, to date, only a handful of studies have generated Purkinje cells from human pluripotent stem cells, with most of these protocols proving challenging to reproduce. Here, we describe a simplified method for the reproducible generation of Purkinje cells from human iPSCs. After 21 days of treatment with factors selected to mimic the self-inductive properties of the isthmic organiser—insulin, fibroblast growth factor 2 (FGF2), and the transforming growth factor β (TGFβ)-receptor blocker SB431542—hiPSCs could be induced to form En1-positive cerebellar progenitors at efficiencies of up to 90%. By day 35 of differentiation, subpopulations of cells representative of the two cerebellar germinal zones, the rhombic lip (Atoh1-positive) and ventricular zone (Ptf1a-positive), could be identified, with the latter giving rise to cells positive for Purkinje cell progenitor-specific markers, including Lhx5, Kirrel2, Olig2 and Skor2. Further maturation was observed following dissociation and co-culture of these cerebellar progenitors with mouse cerebellar cells, with 10% of human cells staining positive for the Purkinje cell marker calbindin by day 70 of differentiation. This protocol, which incorporates modifications designed to enhance cell survival and maturation and improve the ease of handling, should serve to make existing models more accessible, in order to enable future advances in the field.

  • Waymel, A., Friedrich, P., Bastian, P.-A., Forkel, S. J., & Thiebaut de Schotten, M. (2020). Anchoring the human olfactory system within a functional gradient. NeuroImage, 216: 116863. doi:10.1016/j.neuroimage.2020.116863.

    Abstract

    Margulies et al. (2016) demonstrated the existence of at least five independent functional connectivity gradients in the human brain. However, it is unclear how these functional gradients might link to anatomy. The dual origin theory proposes that differences in cortical cytoarchitecture originate from two trends of progressive differentiation between the different layers of the cortex, referred to as the hippocampocentric and olfactocentric systems. When conceptualising the functional connectivity gradients within the evolutionary framework of the Dual Origin theory, the first gradient likely represents the hippocampocentric system anatomically. Here we expand on this concept and demonstrate that the fifth gradient likely links to the olfactocentric system. We describe the anatomy of the latter as well as the evidence to support this hypothesis. Together, the first and fifth gradients might help to model the Dual Origin theory of the human brain and inform brain models and pathologies.
  • Weber, A. (2001). Language-specific listening: The case of phonetic sequences. PhD Thesis, University of Nijmegen, Nijmegen, The Netherlands. doi:10.17617/2.68255.
  • Weber, A. (2001). Help or hindrance: How violation of different assimilation rules affects spoken-language processing. Language and Speech, 44(1), 95-118. doi:10.1177/00238309010440010401.

    Abstract

    Four phoneme-detection studies tested the conclusion from recent research that spoken-language processing is inhibited by violation of obligatory assimilation processes in the listeners’ native language. In Experiment 1, native listeners of German detected a target fricative in monosyllabic Dutch nonwords, half of which violated progressive German fricative place assimilation. In contrast to the earlier findings, listeners detected the fricative more quickly when assimilation was violated than when no violation occurred. This difference was not due to purely acoustic factors, since in Experiment 2 native Dutch listeners, presented with the same materials, showed no such effect. In Experiment 3, German listeners again detected the fricative more quickly when violation occurred in both monosyllabic and bisyllabic native nonwords, further ruling out explanations based on non-native input or on syllable structure. Finally, Experiment 4 tested whether the direction in which the rule operates (progressive or regressive) controls the direction of the effect on phoneme detection responses. When regressive German place assimilation for nasals was violated, German listeners detected stops more slowly, exactly as had been observed in previous studies of regressive assimilation. It is argued that a combination of low expectations in progressive assimilation and novel popout causes facilitation of processing, whereas not fulfilling high expectations in regressive assimilation causes inhibition.
  • Weber, K., Christiansen, M., Indefrey, P., & Hagoort, P. (2019). Primed from the start: Syntactic priming during the first days of language learning. Language Learning, 69(1), 198-221. doi:10.1111/lang.12327.

    Abstract

    New linguistic information must be integrated into our existing language system. Using a novel experimental task that incorporates a syntactic priming paradigm into artificial language learning, we investigated how new grammatical regularities and words are learned. This innovation allowed us to control the language input the learner received, while the syntactic priming paradigm provided insight into the nature of the underlying syntactic processing machinery. The results of the present study pointed to facilitatory syntactic processing effects within the first days of learning: Syntactic and lexical priming effects revealed participants’ sensitivity to both novel words and word orders. This suggested that novel syntactic structures and their meaning (form–function mapping) can be acquired rapidly through incidental learning. More generally, our study indicated similar mechanisms for learning and processing in both artificial and natural languages, with implications for the relationship between first and second language learning.
  • Weber, K., Micheli, C., Ruigendijk, E., & Rieger, J. (2019). Sentence processing is modulated by the current linguistic environment and a priori information: An fMRI study. Brain and Behavior, 9(7): e01308. doi:10.1002/brb3.1308.

    Abstract

    Introduction
    Words are not processed in isolation but in rich contexts that are used to modulate and facilitate language comprehension. Here, we investigate distinct neural networks underlying two types of contexts, the current linguistic environment and verb‐based syntactic preferences.

    Methods
    We had two main manipulations. The first was the current linguistic environment, where the relative frequencies of two syntactic structures (prepositional object [PO] and double‐object [DO]) would either follow everyday linguistic experience or not. The second concerned the preference toward one or the other structure depending on the verb; learned in everyday language use and stored in memory. German participants were reading PO and DO sentences in German while brain activity was measured with functional magnetic resonance imaging.

    Results
    First, the anterior cingulate cortex (ACC) showed a pattern of activation that integrated the current linguistic environment with everyday linguistic experience. When the input did not match everyday experience, the unexpected frequent structure showed higher activation in the ACC than the other conditions and more connectivity from the ACC to posterior parts of the language network. Second, verb‐based surprisal of seeing a structure given a verb (PO verb preference but DO structure presentation) resulted, within the language network (left inferior frontal and left middle/superior temporal gyrus) and the precuneus, in increased activation compared to a predictable verb‐structure pairing.

    Conclusion
    In conclusion, (1) beyond the canonical language network, brain areas engaged in prediction and error signaling, such as the ACC, might use the statistics of syntactic structures to modulate language processing, (2) the language network is directly engaged in processing verb preferences. These two networks show distinct influences on sentence processing.

  • Weekes, B. S., Abutalebi, J., Mak, H.-K.-F., Borsa, V., Soares, S. M. P., Chiu, P. W., & Zhang, L. (2018). Effect of monolingualism and bilingualism in the anterior cingulate cortex: a proton magnetic resonance spectroscopy study in two centers. Letras de Hoje, 53(1), 5-12. doi:10.15448/1984-7726.2018.1.30954.

    Abstract

    Reports of an advantage of bilingualism on brain structure in young adult participants are inconsistent. Abutalebi et al. (2012) reported more efficient monitoring of conflict during the Flanker task in young bilinguals compared to young monolingual speakers. The present study compared young adult (mean age = 24) Cantonese-English bilinguals in Hong Kong and young adult monolingual speakers. We expected (a) differences in metabolites in neural tissue to result from bilingual experience, as measured by 1H-MRS at 3T, (b) correlations between metabolic levels and Flanker conflict and interference effects, and (c) different associations in bilingual and monolingual speakers. We found evidence of metabolic differences in the ACC due to bilingualism, specifically in the metabolites Cho, Cr, Glx and NAA. However, we found no significant correlations between metabolic levels and conflict and interference effects, and no significant evidence of differential relationships between bilingual and monolingual speakers. Furthermore, we found no evidence of significant differences in the mean size of conflict and interference effects between groups, i.e., no bilingual advantage. Lower levels of Cho, Cr, Glx and NAA in bilingual adults compared to monolingual adults suggest that the brains of bilinguals develop greater adaptive control during conflict monitoring because of their extensive bilingual experience.
  • Weissbart, H., Kandylaki, K. D., & Reichenbach, T. (2020). Cortical tracking of surprisal during continuous speech comprehension. Journal of Cognitive Neuroscience, 32, 155-166. doi:10.1162/jocn_a_01467.

    Abstract

    Speech comprehension requires rapid online processing of a continuous acoustic signal to extract structure and meaning. Previous studies on sentence comprehension have found neural correlates of the predictability of a word given its context, as well as of the precision of such a prediction. However, they have focused on single sentences and on particular words in those sentences. Moreover, they compared neural responses to words with low and high predictability, as well as with low and high precision. However, in speech comprehension, a listener hears many successive words whose predictability and precision vary over a large range. Here, we show that cortical activity in different frequency bands tracks word surprisal in continuous natural speech and that this tracking is modulated by precision. We obtain these results through quantifying surprisal and precision from naturalistic speech using a deep neural network and through relating these speech features to EEG responses of human volunteers acquired during auditory story comprehension. We find significant cortical tracking of surprisal at low frequencies, including the delta band as well as in the higher frequency beta and gamma bands, and observe that the tracking is modulated by the precision. Our results pave the way to further investigate the neurobiology of natural speech comprehension.
  • Whitaker, K., & Guest, O. (2020). #bropenscience is broken science: Kirstie Whitaker and Olivia Guest ask how open ‘open science’ really is. The Psychologist, 33, 34-37.
  • Wilkins, D. (2001). Eliciting contrastive use of demonstratives for objects within close personal space (all objects well within arm’s reach). In S. C. Levinson, & N. J. Enfield (Eds.), Manual for the field season 2001 (pp. 164-168). Nijmegen: Max Planck Institute for Psycholinguistics.
  • Wilkins, D., Kita, S., & Enfield, N. J. (2001). Ethnography of pointing questionnaire version 2. In S. C. Levinson, & N. J. Enfield (Eds.), Manual for the field season 2001 (pp. 136-141). Nijmegen: Max Planck Institute for Psycholinguistics.
  • Wilkins, D. (2001). The 1999 demonstrative questionnaire: “This” and “that” in comparative perspective. In S. C. Levinson, & N. J. Enfield (Eds.), Manual for the field season 2001 (pp. 149-163). Nijmegen: Max Planck Institute for Psycholinguistics.
  • Willems, R. M., & Cristia, A. (2018). Hemodynamic methods: fMRI and fNIRS. In A. M. B. De Groot, & P. Hagoort (Eds.), Research methods in psycholinguistics and the neurobiology of language: A practical guide (pp. 266-287). Hoboken: Wiley.
  • Willems, R. M., Nastase, S. A., & Milivojevic, B. (2020). Narratives for Neuroscience. Trends in Neurosciences, 43(5), 271-273. doi:10.1016/j.tins.2020.03.003.

    Abstract

    People organize and convey their thoughts according to narratives. However, neuroscientists are often reluctant to incorporate narrative stimuli into their experiments. We argue that narratives deserve wider adoption in human neuroscience because they tap into the brain’s native machinery for representing the world and provide rich variability for testing hypotheses.
  • Willems, R. M., & Van Gerven, M. (2018). New fMRI methods for the study of language. In S.-A. Rueschemeyer, & M. G. Gaskell (Eds.), The Oxford Handbook of Psycholinguistics (2nd ed., pp. 975-991). Oxford: Oxford University Press.
  • Wilson, B., Spierings, M., Ravignani, A., Mueller, J. L., Mintz, T. H., Wijnen, F., Van der Kant, A., Smith, K., & Rey, A. (2020). Non‐adjacent dependency learning in humans and other animals. Topics in Cognitive Science, 12(3), 843-858. doi:10.1111/tops.12381.

    Abstract

    Learning and processing natural language requires the ability to track syntactic relationships between words and phrases in a sentence, which are often separated by intervening material. These nonadjacent dependencies can be studied using artificial grammar learning paradigms and structured sequence processing tasks. These approaches have been used to demonstrate that human adults, infants and some nonhuman animals are able to detect and learn dependencies between nonadjacent elements within a sequence. However, learning nonadjacent dependencies appears to be more cognitively demanding than detecting dependencies between adjacent elements, and only occurs in certain circumstances. In this review, we discuss different types of nonadjacent dependencies in language and in artificial grammar learning experiments, and how these differences might impact learning. We summarize different types of perceptual cues that facilitate learning, by highlighting the relationship between dependent elements bringing them closer together either physically, attentionally, or perceptually. Finally, we review artificial grammar learning experiments in human adults, infants, and nonhuman animals, and discuss how similarities and differences observed across these groups can provide insights into how language is learned across development and how these language‐related abilities might have evolved.
  • Winsvold, B. S., Palta, P., Eising, E., Page, C. M., The International Headache Genetics Consortium, Van den Maagdenberg, A. M. J. M., Palotie, A., & Zwart, J.-A. (2018). Epigenetic DNA methylation changes associated with headache chronification: A retrospective case-control study. Cephalalgia, 38(2), 312-322. doi:10.1177/0333102417690111.

    Abstract

    Background

    The biological mechanisms of headache chronification are poorly understood. We aimed to identify changes in DNA methylation associated with the transformation from episodic to chronic headache.
    Methods

    Participants were recruited from the population-based Norwegian HUNT Study. Thirty-six female headache patients who transformed from episodic to chronic headache between baseline and follow-up 11 years later were matched against 35 controls with episodic headache. DNA methylation was quantified at 485,000 CpG sites, and changes in methylation level at these sites were compared between cases and controls by linear regression analysis. Data were analyzed in two stages (Stages 1 and 2) and in a combined meta-analysis.
    Results

    None of the top 20 CpG sites identified in Stage 1 replicated in Stage 2 after multiple testing correction. In the combined meta-analysis the strongest associated CpG sites were related to SH2D5 and NPTX2, two brain-expressed genes involved in the regulation of synaptic plasticity. Functional enrichment analysis pointed to processes including calcium ion binding and estrogen receptor pathways.
    Conclusion

    In this first genome-wide study of DNA methylation in headache chronification, several potentially implicated loci and processes were identified. The study exemplifies the use of prospectively collected population cohorts to search for epigenetic mechanisms of disease.
  • Winter, B., Perlman, M., & Majid, A. (2018). Vision dominates in perceptual language: English sensory vocabulary is optimized for usage. Cognition, 179, 213-220. doi:10.1016/j.cognition.2018.05.008.

    Abstract

    Researchers have suggested that the vocabularies of languages are oriented towards the communicative needs of language users. Here, we provide evidence demonstrating that the higher frequency of visual words in a large variety of English corpora is reflected in greater lexical differentiation—a greater number of unique words—for the visual domain in the English lexicon. In comparison, sensory modalities that are less frequently talked about, particularly taste and smell, show less lexical differentiation. In addition, we show that even though sensory language can be expected to change across historical time and between contexts of use (e.g., spoken language versus fiction), the pattern of visual dominance is a stable property of the English language. Thus, we show that across the board, precisely those semantic domains that are more frequently talked about are also more lexically differentiated, for perceptual experiences. This correlation between type and token frequencies suggests that the sensory lexicon of English is geared towards communicative efficiency.
  • Wirthlin, M., Chang, E. F., Knörnschild, M., Krubitzer, L. A., Mello, C. V., Miller, C. T., Pfenning, A. R., Vernes, S. C., Tchernichovski, O., & Yartsev, M. M. (2019). A modular approach to vocal learning: Disentangling the diversity of a complex behavioral trait. Neuron, 104(1), 87-99. doi:10.1016/j.neuron.2019.09.036.

    Abstract

    Vocal learning is a behavioral trait in which the social and acoustic environment shapes the vocal repertoire of individuals. Over the past century, the study of vocal learning has progressed at the intersection of ecology, physiology, neuroscience, molecular biology, genomics, and evolution. Yet, despite the complexity of this trait, vocal learning is frequently described as a binary trait, with species being classified as either vocal learners or vocal non-learners. As a result, studies have largely focused on a handful of species for which strong evidence for vocal learning exists. Recent studies, however, suggest a continuum in vocal learning capacity across taxa. Here, we further suggest that vocal learning is a multi-component behavioral phenotype comprised of distinct yet interconnected modules. Discretizing the vocal learning phenotype into its constituent modules would facilitate integration of findings across a wider diversity of species, taking advantage of the ways in which each excels in a particular module, or in a specific combination of features. Such comparative studies can improve understanding of the mechanisms and evolutionary origins of vocal learning. We propose an initial set of vocal learning modules supported by behavioral and neurobiological data and highlight the need for diversifying the field in order to disentangle the complexity of the vocal learning phenotype.

  • Wittenburg, P., Lautenschlager, M., Thiemann, H., Baldauf, C., & Trilsbeek, P. (2020). FAIR Practices in Europe. Data Intelligence, 2(1-2), 257-263. doi:10.1162/dint_a_00048.

    Abstract

    Institutions driving fundamental research at the cutting edge, such as those of the Max Planck Society (MPS), took steps to optimize data management and stewardship to be able to address new scientific questions. In this paper we selected three institutes of the MPS, from the areas of humanities, environmental sciences and natural sciences, as examples to indicate the efforts to integrate large amounts of data from collaborators worldwide and to create a data space that is ready to be exploited for new insights based on data-intensive science methods. For this integration the typical challenges of fragmentation, bad quality and also social differences had to be overcome. In all three cases, well-managed repositories that are driven by the scientific needs and harmonization principles that have been agreed upon in the community were the core pillars. It is not surprising that these principles are very much aligned with what have now become the FAIR principles. The FAIR principles confirm the correctness of earlier decisions, and their clear formulation identified the gaps which the projects need to address.
  • Wnuk, E., Laophairoj, R., & Majid, A. (2020). Smell terms are not rara: A semantic investigation of odor vocabulary in Thai. Linguistics, 58(4), 937-966. doi:10.1515/ling-2020-0009.
  • Woensdregt, M., & Dingemanse, M. (2020). Other-initiated repair can facilitate the emergence of compositional language. In A. Ravignani, C. Barbieri, M. Flaherty, Y. Jadoul, E. Lattenkamp, H. Little, M. Martins, K. Mudd, & T. Verhoef (Eds.), The Evolution of Language: Proceedings of the 13th International Conference (Evolang13) (pp. 474-476). Nijmegen: The Evolution of Language Conferences.
  • Wolf, M. C., Smith, A. C., Meyer, A. S., & Rowland, C. F. (2019). Modality effects in vocabulary acquisition. In A. K. Goel, C. M. Seifert, & C. Freksa (Eds.), Proceedings of the 41st Annual Meeting of the Cognitive Science Society (CogSci 2019) (pp. 1212-1218). Montreal, QC: Cognitive Science Society.

    Abstract

    It is unknown whether modality affects the efficiency with which humans learn novel word forms and their meanings, with previous studies reporting both written and auditory advantages. The current study implements controls whose absence in previous work likely offers explanation for such contradictory findings. In two novel word learning experiments, participants were trained and tested on pseudoword–novel object pairs, with controls on: modality of test, modality of meaning, duration of exposure and transparency of word form. In both experiments word forms were presented in either their written or spoken form, each paired with a pictorial meaning (novel object). Following a 20-minute filler task, participants were tested on their ability to identify the picture-word form pairs on which they were trained. A between-subjects design generated four participant groups per experiment: 1) written training, written test; 2) written training, spoken test; 3) spoken training, written test; 4) spoken training, spoken test. In Experiment 1 the written stimulus was presented for a time period equal to the duration of the spoken form. Results showed that when the duration of exposure was equal, participants displayed a written training benefit. Given that words can be read faster than the time taken for the spoken form to unfold, in Experiment 2 the written form was presented for 300 ms, sufficient time to read the word yet 65% shorter than the duration of the spoken form. No modality effect was observed under these conditions, when exposure to the word form was equivalent. These results demonstrate, at least for proficient readers, that when exposure to the word form is controlled across modalities the efficiency with which word form-meaning associations are learnt does not differ. Our results therefore suggest that, although we typically begin as aural-only word learners, we ultimately converge on developing learning mechanisms that learn equally efficiently from both written and spoken materials.
  • Wolf, M. C., Muijselaar, M. M. L., Boonstra, A. M., & De Bree, E. H. (2019). The relationship between reading and listening comprehension: Shared and modality-specific components. Reading and Writing, 32(7), 1747-1767. doi:10.1007/s11145-018-9924-8.

    Abstract

    This study aimed to increase our understanding on the relationship between reading and listening comprehension. Both in comprehension theory and in educational practice, reading and listening comprehension are often seen as interchangeable, overlooking modality-specific aspects of them separately. Three questions were addressed. First, it was examined to what extent reading and listening comprehension comprise modality-specific, distinct skills or an overlapping, domain-general skill in terms of the amount of explained variance in one comprehension type by the opposite comprehension type. Second, general and modality-unique subskills of reading and listening comprehension were sought by assessing the contributions of the foundational skills word reading fluency, vocabulary, memory, attention, and inhibition to both comprehension types. Lastly, the practice of using either listening comprehension or vocabulary as a proxy of general comprehension was investigated. Reading and listening comprehension tasks with the same format were assessed in 85 second and third grade children. Analyses revealed that reading comprehension explained 34% of the variance in listening comprehension, and listening comprehension 40% of reading comprehension. Vocabulary and word reading fluency were found to be shared contributors to both reading and listening comprehension. None of the other cognitive skills contributed significantly to reading or listening comprehension. These results indicate that only part of the comprehension process is indeed domain-general and not influenced by the modality in which the information is provided. Especially vocabulary seems to play a large role in this domain-general part. The findings warrant a more prominent focus of modality-specific aspects of both reading and listening comprehension in research and education.
  • Wong, M. M. K., Hoekstra, S. D., Vowles, J., Watson, L. M., Fuller, G., Németh, A. H., Cowley, S. A., Ansorge, O., Talbot, K., & Becker, E. B. E. (2018). Neurodegeneration in SCA14 is associated with increased PKCγ kinase activity, mislocalization and aggregation. Acta Neuropathologica Communications, 6: 99. doi:10.1186/s40478-018-0600-7.

    Abstract

    Spinocerebellar ataxia type 14 (SCA14) is a subtype of the autosomal dominant cerebellar ataxias that is characterized by slowly progressive cerebellar dysfunction and neurodegeneration. SCA14 is caused by mutations in the PRKCG gene, encoding protein kinase C gamma (PKCγ). Despite the identification of 40 distinct disease-causing mutations in PRKCG, the pathological mechanisms underlying SCA14 remain poorly understood. Here we report the molecular neuropathology of SCA14 in post-mortem cerebellum and in human patient-derived induced pluripotent stem cells (iPSCs) carrying two distinct SCA14 mutations in the C1 domain of PKCγ, H36R and H101Q. We show that endogenous expression of these mutations results in the cytoplasmic mislocalization and aggregation of PKCγ in both patient iPSCs and cerebellum. PKCγ aggregates were not efficiently targeted for degradation. Moreover, mutant PKCγ was found to be hyper-activated, resulting in increased substrate phosphorylation. Together, our findings demonstrate that a combination of both loss-of-function and gain-of-function mechanisms is likely to underlie the pathogenesis of SCA14, caused by mutations in the C1 domain of PKCγ. Importantly, SCA14 patient iPSCs were found to accurately recapitulate pathological features observed in post-mortem SCA14 cerebellum, underscoring their potential as relevant disease models and their promise as future drug discovery tools.

  • Xiong, K., Verdonschot, R. G., & Tamaoka, K. (2020). The time course of brain activity in reading identical cognates: An ERP study of Chinese - Japanese bilinguals. Journal of Neurolinguistics, 55: 100911. doi:10.1016/j.jneuroling.2020.100911.

    Abstract

    Previous studies suggest that bilinguals' lexical access is language non-selective, especially for orthographically identical translation equivalents across languages (i.e., identical cognates). The present study investigated how such words (e.g., a word meaning "school" in both Chinese and Japanese) are processed in the (late) Chinese - Japanese bilingual brain. Using an L2-Japanese lexical decision task, both behavioral and electrophysiological data were collected. Reaction times (RTs), as well as the N400 component, showed that cognates are more easily recognized than non-cognates. Additionally, an early component (i.e., the N250), potentially reflecting activation at the word-form level, was also found. Cognates elicited a more positive N250 than non-cognates in the frontal region, indicating that the cognate facilitation effect occurred at an early stage of word formation for languages with logographic scripts.
  • Yang, J., Van den Bosch, A., & Frank, S. L. (2020). Less is Better: A cognitively inspired unsupervised model for language segmentation. In M. Zock, E. Chersoni, A. Lenci, & E. Santus (Eds.), Proceedings of the Workshop on the Cognitive Aspects of the Lexicon ( 28th International Conference on Computational Linguistics) (pp. 33-45). Stroudsburg: Association for Computational Linguistics.

    Abstract

    Language users process utterances by segmenting them into many cognitive units, which vary in their sizes and linguistic levels. Although we can do such unitization/segmentation easily, its cognitive mechanism is still not clear. This paper proposes an unsupervised model, Less-is-Better (LiB), to simulate the human cognitive process with respect to language unitization/segmentation. LiB follows the principle of least effort and aims to build a lexicon which minimizes the number of unit tokens (alleviating the effort of analysis) and number of unit types (alleviating the effort of storage) at the same time on any given corpus. LiB’s workflow is inspired by empirical cognitive phenomena. The design makes the mechanism of LiB cognitively plausible and the computational requirement light-weight. The lexicon generated by LiB performs the best among different types of lexicons (e.g. ground-truth words) both from an information-theoretical view and a cognitive view, which suggests that the LiB lexicon may be a plausible proxy of the mental lexicon.

  • Yang, W., Chan, A., Chang, F., & Kidd, E. (2020). Four-year-old Mandarin-speaking children’s online comprehension of relative clauses. Cognition, 196: 104103. doi:10.1016/j.cognition.2019.104103.

    Abstract

    A core question in language acquisition is whether children’s syntactic processing is experience-dependent and language-specific, or whether it is governed by abstract, universal syntactic machinery. We address this question by presenting corpus and on-line processing data from children learning Mandarin Chinese, a language that has been important in debates about the universality of parsing processes. The corpus data revealed that two different relative clause constructions in Mandarin are differentially used to modify syntactic subjects and objects. In the experiment, 4-year-old children’s eye-movements were recorded as they listened to the two RC construction types (e.g., Can you pick up the pig that pushed the sheep?). A permutation analysis showed that children’s ease of comprehension was closely aligned with the distributional frequencies, suggesting syntactic processing preferences are shaped by the input experience of these constructions.

  • Yang, J., Cai, Q., & Tian, X. (2020). How do we segment text? Two-stage chunking operation in reading. eNeuro, 7(3): ENEURO.0425-19.2020. doi:10.1523/ENEURO.0425-19.2020.

    Abstract

    Chunking in language comprehension is a process that segments continuous linguistic input into smaller chunks that are in the reader’s mental lexicon. Effective chunking during reading facilitates disambiguation and enhances efficiency for comprehension. However, the chunking mechanisms remain elusive, especially in reading, given that information arrives simultaneously yet the writing system may not provide explicit cues for labeling boundaries, as in Chinese. What are the mechanisms of chunking that mediate the reading of text that contains hierarchical information? We investigated this question by manipulating the lexical status of the chunks at distinct levels in four-character Chinese strings, including the two-character local chunk and the four-character global chunk. Male and female human participants were asked to make lexical decisions on these strings in a behavioral experiment, followed by a passive reading task while their electroencephalography (EEG) was recorded. The behavioral results showed that the lexical decision time for lexicalized two-character local chunks was influenced by the lexical status of the four-character global chunk, but not vice versa, which indicated that the processing of global chunks took priority over the local chunks. The EEG results revealed that familiar lexical chunks were detected simultaneously at both levels and further processed in a different temporal order: the onset of lexical access for the global chunks was earlier than that of local chunks. These consistent results suggest a two-stage operation for chunking in reading: the simultaneous detection of familiar lexical chunks at multiple levels around 100 ms, followed by recognition of chunks with global precedence.
  • Yang, J., Zhu, H., & Tian, X. (2018). Group-level multivariate analysis in EasyEEG toolbox: Examining the temporal dynamics using topographic responses. Frontiers in Neuroscience, 12: 468. doi:10.3389/fnins.2018.00468.

    Abstract

    Electroencephalography (EEG) provides high temporal resolution cognitive information from non-invasive recordings. However, one common practice, using only a subset of sensors in ERP analysis, can hardly provide holistic and precise dynamic results. Selecting or grouping subsets of sensors may also be subject to selection bias and multiple comparisons, and be further complicated by individual differences in group-level analysis. More importantly, changes in neural generators and variations in response magnitude from the same neural sources are difficult to separate, which limits the capacity to test different aspects of cognitive hypotheses. We introduce EasyEEG, a toolbox that includes several multivariate analysis methods to directly test cognitive hypotheses based on topographic responses that include data from all sensors. These multivariate methods can investigate effects in the dimensions of response magnitude and topographic patterns separately using data in the sensor space, and therefore enable assessing neural response dynamics. The concise workflow and the modular design provide user-friendly and programmer-friendly features. Users of all levels can benefit from the open-source, free EasyEEG to obtain a straightforward solution for efficient processing of EEG data and a complete pipeline from raw data to final results for publication.
  • Yoshihara, M., Nakayama, M., Verdonschot, R. G., & Hino, Y. (2020). The influence of orthography on speech production: Evidence from masked priming in word-naming and picture-naming tasks. Journal of Experimental Psychology: Learning, Memory, and Cognition, 46(8), 1570-1589. doi:10.1037/xlm0000829.

    Abstract

    In a masked priming word-naming task, a facilitation due to the initial-segmental sound overlap for 2-character kanji prime-target pairs was affected by certain orthographic properties (Yoshihara, Nakayama, Verdonschot, & Hino, 2017). That is, the facilitation that was due to the initial mora overlap occurred only when the mora was the whole pronunciation of their initial kanji characters (i.e., match pairs; e.g., /ka-se.ki/-/ka-rjo.ku/). When the shared initial mora was only a part of the kanji characters' readings, however, there was no facilitation (i.e., mismatch pairs; e.g., /ha.tu-a.N/-/ha.ku-bu.tu/). In the present study, we used a masked priming picture-naming task to investigate whether the previous results were relevant only when the orthography of targets is visually presented. In Experiment 1, the main findings of our word-naming task were fully replicated in a picture-naming task. In Experiments 2 and 3, the absence of facilitation for the mismatch pairs was confirmed with a new set of stimuli. On the other hand, a significant facilitation was observed for the match pairs that shared the 2 initial morae (in Experiment 4), which was again consistent with the results of our word-naming study. These results suggest that the orthographic properties constrain the phonological expression of masked priming for kanji words across 2 tasks that are likely to differ in how phonology is retrieved. Specifically, we propose that the orthography of a word is activated online and constrains the phonological encoding processes in these tasks.
  • Zavala, R. (2001). Entre consejos, diablos y vendedores de caca, rasgos gramaticales del oluteco en tres de sus cuentos [Among advice, devils and shit sellers: Grammatical features of Olutec in three of its stories]. Tlalocan. Revista de Fuentes para el Conocimiento de las Culturas Indígenas de México, XIII, 335-414.

    Abstract

    The three Olutec stories from Oluta, Veracruz, were narrated by Antonio Asistente Maldonado. Roberto Zavala presents a morpheme-by-morpheme analysis of the texts with a sketch of the major grammatical and typological features of this language. Olutec is spoken by three dozen speakers. The grammatical structure of this language has not been described before. The sketch contains information on verb and noun morphology, verb classes, clause types, inverse/direct patterns, grammaticalization processes, applicatives, incorporation, word order type, and discontinuous expressions. The stories presented here are the first Olutec texts ever published. The motifs of the stories are well known throughout Middle America. The story of "the Rabbit who wants to be big" explains why one of the main protagonists of Middle American folktales acquired long ears. The story of "the Devil who is inebriated by the people of a village" explains how the inhabitants of a village discover the true identity of a man who likes to dance huapango and decide to get rid of him. Finally, the story of "the shit-sellers" presents two compadres, one who is lazy and the other who works hard. The hard-worker asks the lazy compadre how he survives without working. The latter lies to him that he sells shit in the neighboring village. The hard-working compadre decides to become a shit-seller and in the process realizes that the lazy compadre deceived him. However, he is lucky and meets the Devil, who offers him money in compensation for having been deceived. When the lazy compadre realizes that the hard-working compadre has become rich, he tries to do the same business but gets beaten in the process.
  • Zavala, R. (1997). Functional analysis of Akatek voice constructions. International Journal of American Linguistics, 63(4), 439-474.

    Abstract

    The author studies the correlations between syntactic structure and pragmatic function in the voice alternations of Akatek, a Mayan language belonging to the Q'anjob'alan subgroup. Pragmatic voice alternations are the mechanisms by which languages encode the different degrees of topicality of the two main participants of a semantically transitive event, the agent and the patient. Using a quantitative analysis, the author evaluates the topicality of these participants and identifies the syntactic structures that express the four main voice functions in Akatek: active-direct, inverse, passive, and antipassive.
  • Zhang, Y., Chen, C.-h., & Yu, C. (2019). Mechanisms of cross-situational learning: Behavioral and computational evidence. In Advances in Child Development and Behavior (Vol. 56, pp. 37-63).

    Abstract

    Word learning happens in everyday contexts with many words and many potential referents for those words in view at the same time. It is challenging for young learners to find the correct referent upon hearing an unknown word in the moment. This problem of referential uncertainty has been deemed the crux of early word learning (Quine, 1960). Recent empirical and computational studies have found support for a statistical solution to the problem termed cross-situational learning. Cross-situational learning allows learners to acquire word meanings across multiple exposures, even though each individual exposure is referentially uncertain. Recent empirical research shows that infants, children and adults rely on cross-situational learning to learn new words (Smith & Yu, 2008; Suanda, Mugwanya, & Namy, 2014; Yu & Smith, 2007). However, researchers have found evidence supporting two very different theoretical accounts of learning mechanisms: Hypothesis Testing (Gleitman, Cassidy, Nappa, Papafragou, & Trueswell, 2005; Markman, 1992) and Associative Learning (Frank, Goodman, & Tenenbaum, 2009; Yu & Smith, 2007). Hypothesis Testing is generally characterized as a form of learning in which a coherent hypothesis regarding a specific word-object mapping is formed, often in conceptually constrained ways. The hypothesis will then be either accepted or rejected with additional evidence. However, proponents of the Associative Learning framework often characterize learning as aggregating information over time through implicit associative mechanisms. A learner acquires the meaning of a word when the association between the word and the referent becomes relatively strong. In this chapter, we consider these two psychological theories in the context of cross-situational word-referent learning. By reviewing recent empirical and cognitive modeling studies, our goal is to deepen our understanding of the underlying word learning mechanisms by examining and comparing the two theoretical learning accounts.
  • Zhang, Y., Amatuni, A., Crain, E., & Yu, C. (2020). Seeking meaning: Examining a cross-situational solution to learn action verbs using human simulation paradigm. In S. Denison, M. Mack, Y. Xu, & B. C. Armstrong (Eds.), Proceedings of the 42nd Annual Meeting of the Cognitive Science Society (CogSci 2020) (pp. 2854-2860). Montreal, QC: Cognitive Science Society.

    Abstract

    To acquire the meaning of a verb, language learners not only need to find the correct mapping between a specific verb and an action or event in the world, but also infer the underlying relational meaning that the verb encodes. Most verb naming instances in naturalistic contexts are highly ambiguous as many possible actions can be embedded in the same scenario and many possible verbs can be used to describe those actions. To understand whether learners can find the correct verb meaning from referentially ambiguous learning situations, we conducted three experiments using the Human Simulation Paradigm with adult learners. Our results suggest that although finding the right verb meaning from one learning instance is hard, there is a statistical solution to this problem. When provided with multiple verb learning instances all referring to the same verb, learners are able to aggregate information across situations and gradually converge to the correct semantic space. Even in cases where they may not guess the exact target verb, they can still discover the right meaning by guessing a similar verb that is semantically close to the ground truth.
  • Zheng, X., Roelofs, A., & Lemhöfer, K. (2020). Language selection contributes to intrusion errors in speaking: Evidence from picture naming. Bilingualism: Language and Cognition, 23, 788-800. doi:10.1017/S1366728919000683.

    Abstract

    Bilinguals usually select the right language to speak for the particular context they are in, but sometimes the nontarget language intrudes. Despite a large body of research into language selection and language control, it remains unclear where intrusion errors originate from. These errors may be due to incorrect selection of the nontarget language at the conceptual level, or be a consequence of erroneous word selection (despite correct language selection) at the lexical level. We examined the former possibility in two language switching experiments using a manipulation that supposedly affects language selection on the conceptual level, namely whether the conversational language context was associated with the target language (congruent) or with the alternative language (incongruent) on a trial. Both experiments showed that language intrusion errors occurred more often in incongruent than in congruent contexts, providing converging evidence that language selection during concept preparation is one driving force behind language intrusion.
  • Zheng, X. (2020). Control and monitoring in bilingual speech production: Language selection, switching and intrusion. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Zheng, X., Roelofs, A., Erkan, H., & Lemhöfer, K. (2020). Dynamics of inhibitory control during bilingual speech production: An electrophysiological study. Neuropsychologia, 140: 107387. doi:10.1016/j.neuropsychologia.2020.107387.

    Abstract

    Bilingual speakers have to control their languages to avoid interference, which may be achieved by enhancing the target language and/or inhibiting the nontarget language. Previous research suggests that bilinguals use inhibition (e.g., Jackson et al., 2001), which should be reflected in the N2 component of the event-related potential (ERP) in the EEG. In the current study, we investigated the dynamics of inhibitory control by measuring the N2 during language switching and repetition in bilingual picture naming. Participants had to name pictures in Dutch or English depending on the cue. A run of same-language trials could be short (two or three trials) or long (five or six trials). We assessed whether RTs and N2 changed over the course of same-language runs, and at a switch between languages. Results showed that speakers named pictures more quickly late as compared to early in a run of same-language trials. Moreover, they made a language switch more quickly after a long run than after a short run. This run-length effect was only present in the first language (L1), not in the second language (L2). In ERPs, we observed a widely distributed switch effect in the N2, which was larger after a short run than after a long run. This effect was only present in the L2, not in the L1, although the difference was not significant between languages. In contrast, the N2 was not modulated during a same-language run. Our results suggest that the nontarget language is inhibited at a switch, but not during the repeated use of the target language.

  • Zheng, X., Roelofs, A., Farquhar, J., & Lemhöfer, K. (2018). Monitoring of language selection errors in switching: Not all about conflict. PLoS One, 13(11): e0200397. doi:10.1371/journal.pone.0200397.

    Abstract

    Although bilingual speakers are very good at selectively using one language rather than another, sometimes language selection errors occur. To investigate how bilinguals monitor their speech errors and control their languages in use, we recorded event-related potentials (ERPs) in unbalanced Dutch-English bilingual speakers in a cued language-switching task. We tested the conflict-based monitoring model of Nozari and colleagues by investigating the error-related negativity (ERN) and comparing the effects of the two switching directions (i.e., to the first language, L1 vs. to the second language, L2). Results show that the speakers made more language selection errors when switching from their L2 to the L1 than vice versa. In the EEG, we observed a robust ERN effect following language selection errors compared to correct responses, reflecting monitoring of speech errors. Most interestingly, the ERN effect was enlarged when the speakers were switching to their L2 (less conflict) compared to switching to the L1 (more conflict). Our findings do not support the conflict-based monitoring model. We discuss an alternative account in terms of error prediction and reinforcement learning.
  • Zheng, X., Roelofs, A., & Lemhöfer, K. (2018). Language selection errors in switching: language priming or cognitive control? Language, Cognition and Neuroscience, 33(2), 139-147. doi:10.1080/23273798.2017.1363401.

    Abstract

    Although bilingual speakers are very good at selectively using one language rather than another, sometimes language selection errors occur. We examined the relative contribution of top-down cognitive control and bottom-up language priming to these errors. Unbalanced Dutch-English bilinguals named pictures and were cued to switch between languages under time pressure. We also manipulated the number of same-language trials before a switch (long vs. short runs). Results show that speakers made more language selection errors when switching from their second language (L2) to the first language (L1) than vice versa. Furthermore, they made more errors when switching to the L1 after a short compared to a long run of L2 trials. In the reverse switching direction (L1 to L2), run length had no effect. These findings are most compatible with an account of language selection errors that assigns a strong role to top-down processes of cognitive control.

    Additional information

    plcp_a_1363401_sm2537.docx
  • Zheng, X., & Lemhöfer, K. (2019). The “semantic P600” in second language processing: When syntax conflicts with semantics. Neuropsychologia, 127, 131-147. doi:10.1016/j.neuropsychologia.2019.02.010.

    Abstract

    In sentences like “the mouse that chased the cat was hungry”, the syntactically correct interpretation (the mouse chases the cat) is contradicted by semantic and pragmatic knowledge. Previous research has shown that L1 speakers sometimes base sentence interpretation on this type of knowledge (so-called “shallow” or “good-enough” processing). We made use of both behavioural and ERP measurements to investigate whether L2 learners differ from native speakers in the extent to which they engage in “shallow” syntactic processing. German learners of Dutch as well as Dutch native speakers read sentences containing relative clauses (as in the example above) for which the plausible thematic roles were or were not reversed, and made plausibility judgments. The results show that behaviourally, L2 learners had more difficulty than native speakers in discriminating plausible from implausible sentences. In the ERPs, we replicated the previously reported finding of a “semantic P600” for semantic reversal anomalies in native speakers, probably reflecting the effort to resolve the syntax-semantics conflict. In L2 learners, though, this P600 was largely attenuated and surfaced only in those trials that were judged correctly for plausibility. These results generally point to a more prevalent, though not exclusive, occurrence of shallow syntactic processing in L2 learners.
  • Zhu, Z., Bastiaansen, M. C. M., Hakun, J. G., Petersson, K. M., Wang, S., & Hagoort, P. (2019). Semantic unification modulates N400 and BOLD signal change in the brain: A simultaneous EEG-fMRI study. Journal of Neurolinguistics, 52: 100855. doi:10.1016/j.jneuroling.2019.100855.

    Abstract

    Semantic unification during sentence comprehension has been associated with amplitude changes of the N400 in event-related potential (ERP) studies, and with activation in the left inferior frontal gyrus (IFG) in functional magnetic resonance imaging (fMRI) studies. However, the specificity of this activation to semantic unification remains unknown. To examine the brain processes involved in semantic unification more closely, we employed simultaneous EEG-fMRI to time-lock the semantic-unification-related N400 change and to integrate trial-by-trial variation in both the N400 and the BOLD signal, beyond the condition-level BOLD differences measured in traditional fMRI analyses. Participants read sentences in which semantic unification load was parametrically manipulated by varying cloze probability. Analysed separately, the ERP and fMRI results replicated previous findings, in that semantic unification load parametrically modulated N400 amplitude and cortical activation. The integrated EEG-fMRI analyses revealed a different pattern, in which functional activity in the left IFG and bilateral supramarginal gyrus (SMG) was associated with N400 amplitude, with left IFG activation and bilateral SMG activation being selective to the condition-level and trial-level semantic unification load, respectively. By employing these integrated EEG-fMRI analyses, this study is among the first to shed light on how trial-level variation can be integrated into the study of language comprehension.
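    As a rough illustration of the trial-by-trial integration idea described in this abstract, the sketch below is an editorial assumption rather than the paper's pipeline: it builds a voxel-wise GLM in which single-trial N400 amplitudes enter as a mean-centred parametric modulator convolved with a canonical HRF. All data are simulated and all names are illustrative.

        import numpy as np
        from scipy.stats import gamma

        def canonical_hrf(tr, duration=32.0):
            """Double-gamma haemodynamic response function sampled at the TR."""
            t = np.arange(0.0, duration, tr)
            h = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0
            return h / h.sum()

        def hrf_regressor(onsets, weights, n_scans, tr):
            """Weighted stick functions at trial onsets, convolved with the HRF."""
            sticks = np.zeros(n_scans)
            for onset, w in zip(onsets, weights):
                sticks[int(round(onset / tr))] += w
            return np.convolve(sticks, canonical_hrf(tr))[:n_scans]

        # Simulated run: 100 scans (TR = 2 s), 20 trials, fake N400 amplitudes and BOLD signal.
        rng = np.random.default_rng(1)
        tr, n_scans = 2.0, 100
        onsets = np.arange(10.0, 170.0, 8.0)
        n400 = rng.normal(-3.0, 1.5, size=len(onsets))

        task = hrf_regressor(onsets, np.ones(len(onsets)), n_scans, tr)         # mean task response
        mod = hrf_regressor(onsets, n400 - n400.mean(), n_scans, tr)            # N400-modulated response
        X = np.column_stack([np.ones(n_scans), task, mod])

        bold = X @ np.array([100.0, 1.0, 0.5]) + rng.normal(0.0, 1.0, n_scans)  # one fake voxel
        beta = np.linalg.lstsq(X, bold, rcond=None)[0]
        print("estimated N400-modulation beta:", round(beta[2], 2))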
  • Zimianiti, E. (2020). Verb production and comprehension in dementia: A verb argument structure approach. Master Thesis, Aristotle University of Thessaloniki, Thessaloniki, Greece.

    Abstract

    The purpose of this study is to shed light on the linguistic deficit in populations with dementia, more specifically Mild Cognitive Impairment and Alzheimer’s Disease, by examining the assignment of thematic roles (θ-roles) in sentences containing psychological verbs.
    The interest in types of dementia and their precursor stems from the relevance of the disease in present-day society (Caloi, 2017). The World Alzheimer Report 2016 (Prince et al., 2016) counted 47 million people worldwide living with a type of dementia, more than the entire population of Spain, and this number is expected to triple by 2050, reaching 131 million. The impact of the disease is felt not only at the social level but also at the economic one, because affected individuals need assistance in their everyday life. A further concern is that no cure is available once the disease has started. Despite the efforts of medicine, dementia remains difficult to diagnose, because a variety of cognitive abilities must be assessed in combination with a medical workup. Language is a crucial component of the diagnostic procedure, as linguistic deficits are among the first symptoms accompanying the onset of the disease. Therefore, further investigation of linguistic impairment is necessary to improve current diagnostic techniques. Furthermore, the lack of effective drugs for treating the disease has necessitated the development of training programs for maintaining and improving cognitive abilities in people with either Mild Cognitive Impairment or a type of dementia …
  • Zinken, J., Rossi, G., & Reddy, V. (2020). Doing more than expected: Thanking recognizes another's agency in providing assistance. In C. Taleghani-Nikazm, E. Betz, & P. Golato (Eds.), Mobilizing others: Grammar and lexis within larger activities (pp. 253-278). Amsterdam: John Benjamins.

    Abstract

    In informal interaction, speakers rarely thank a person who has complied with a request. Examining data from British English, German, Italian, Polish, and Telugu, we ask when speakers do thank after compliance. The results show that thanking treats the other’s assistance as going beyond what could be taken for granted in the circumstances. Coupled with the rareness of thanking after requests, this suggests that cooperation is to a great extent governed by expectations of helpfulness, which can be long-standing, or built over the course of a particular interaction. The higher frequency of thanking in some languages (such as English or Italian) suggests that cultures differ in the importance they place on recognizing the other’s agency in doing as requested.
  • Zoefel, B., Ten Oever, S., & Sack, A. T. (2018). The involvement of endogenous neural oscillations in the processing of rhythmic input: More than a regular repetition of evoked neural responses. Frontiers in Neuroscience, 12: 95. doi:10.3389/fnins.2018.00095.

    Abstract

    It is undisputed that presenting a rhythmic stimulus leads to a measurable brain response that follows the rhythmic structure of this stimulus. What is still debated, however, is the question whether this brain response exclusively reflects a regular repetition of evoked responses, or whether it also includes entrained oscillatory activity. Here we systematically present evidence in favor of an involvement of entrained neural oscillations in the processing of rhythmic input while critically pointing out which questions still need to be addressed before this evidence could be considered conclusive. In this context, we also explicitly discuss the potential functional role of such entrained oscillations, suggesting that these stimulus-aligned oscillations reflect, and serve as, predictive processes, an idea often only implicitly assumed in the literature.
  • Zora, H., Rudner, M., & Montell Magnusson, A. (2020). Concurrent affective and linguistic prosody with the same emotional valence elicits a late positive ERP response. European Journal of Neuroscience, 51(11), 2236-2249. doi:10.1111/ejn.14658.

    Abstract

    Change in linguistic prosody generates a mismatch negativity response (MMN), indicating neural representation of linguistic prosody, while change in affective prosody generates a positive response (P3a), reflecting its motivational salience. However, the neural response to concurrent affective and linguistic prosody is unknown. The present paper investigates the integration of these two prosodic features in the brain by examining the neural response to separate and concurrent processing by electroencephalography (EEG). A spoken pair of Swedish words—[ˈfɑ́ːsɛn] phase and [ˈfɑ̀ːsɛn] damn—that differed in emotional semantics due to linguistic prosody was presented to 16 subjects in an angry and neutral affective prosody using a passive auditory oddball paradigm. Acoustically matched pseudowords—[ˈvɑ́ːsɛm] and [ˈvɑ̀ːsɛm]—were used as controls. Following the constructionist concept of emotions, accentuating the conceptualization of emotions based on language, it was hypothesized that concurrent affective and linguistic prosody with the same valence—angry [ˈfɑ̀ːsɛn] damn—would elicit a unique late EEG signature, reflecting the temporal integration of affective voice with emotional semantics of prosodic origin. In accordance, linguistic prosody elicited an MMN at 300–350 ms, and affective prosody evoked a P3a at 350–400 ms, irrespective of semantics. Beyond these responses, concurrent affective and linguistic prosody evoked a late positive component (LPC) at 820–870 ms in frontal areas, indicating the conceptualization of affective prosody based on linguistic prosody. This study provides evidence that the brain does not only distinguish between these two functions of prosody but also integrates them based on language and experience.
  • Zora, H., Riad, T., & Ylinen, S. (2019). Prosodically controlled derivations in the mental lexicon. Journal of Neurolinguistics, 52: 100856. doi:10.1016/j.jneuroling.2019.100856.

    Abstract

    Swedish morphemes are classified as prosodically specified or prosodically unspecified, depending on lexical or phonological stress, respectively. Here, we investigate the allomorphy of the suffix -(i)sk, which indicates the distinction between lexical and phonological stress; if attached to a lexically stressed morpheme, it takes a non-syllabic form (-sk), whereas if attached to a phonologically stressed morpheme, an epenthetic vowel is inserted (-isk). Using mismatch negativity (MMN), we explored the neural processing of this allomorphy across lexically stressed and phonologically stressed morphemes. In an oddball paradigm, participants were occasionally presented with congruent and incongruent derivations, created by the suffix -(i)sk, within the repetitive presentation of their monomorphemic stems. The results indicated that the congruent derivation of the lexically stressed stem elicited a larger MMN than the incongruent sequences of the same stem and the derivational suffix, whereas after the phonologically stressed stem a non-significant tendency towards an opposite pattern was observed. We argue that the significant MMN response to the congruent derivation in the lexical stress condition is in line with lexical MMN, indicating a holistic processing of the sequence of lexically stressed stem and derivational suffix. The enhanced MMN response to the incongruent derivation in the phonological stress condition, on the other hand, is suggested to reflect combinatorial processing of the sequence of phonologically stressed stem and derivational suffix. These findings bring a new aspect to the dual-system approach to neural processing of morphologically complex words, namely the specification of word stress.
  • Zormpa, E. (2020). Memory for speaking and listening. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Zormpa, E., Meyer, A. S., & Brehm, L. (2019). Slow naming of pictures facilitates memory for their names. Psychonomic Bulletin & Review, 26(5), 1675-1682. doi:10.3758/s13423-019-01620-x.

    Abstract

    Speakers remember their own utterances better than those of their interlocutors, suggesting that language production is beneficial to memory. This may be partly explained by a generation effect: The act of generating a word is known to lead to a memory advantage (Slamecka & Graf, 1978). In earlier work, we showed a generation effect for recognition of images (Zormpa, Brehm, Hoedemaker, & Meyer, 2019). Here, we tested whether the recognition of their names would also benefit from name generation. Testing whether picture naming improves memory for words was our primary aim, as it serves to clarify whether the representations affected by generation are visual or conceptual/lexical. A secondary aim was to assess the influence of processing time on memory. Fifty-one participants named pictures in three conditions: after hearing the picture name (identity condition), backward speech, or an unrelated word. A day later, recognition memory was tested in a yes/no task. Memory in the backward speech and unrelated conditions, which required generation, was superior to memory in the identity condition, which did not require generation. The time taken by participants for naming was a good predictor of memory, such that words that took longer to be retrieved were remembered better. Importantly, that was the case only when generation was required: In the no-generation (identity) condition, processing time was not related to recognition memory performance. This work has shown that generation affects conceptual/lexical representations, making an important contribution to the understanding of the relationship between memory and language.
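    To make the latency-memory relationship described in this abstract concrete, the sketch below is an editorial illustration with simulated data, not the authors' analysis: it correlates per-item naming latency with later recognition separately for generation and no-generation (identity) items. The condition names and effect sizes are assumptions.

        import numpy as np

        rng = np.random.default_rng(2)
        n = 200
        condition = rng.choice(["generation", "identity"], size=n)
        rt = rng.lognormal(mean=6.8, sigma=0.3, size=n)               # naming latency in ms

        # Build in the reported pattern: longer naming -> better memory, but only with generation.
        boost = np.where(condition == "generation", 0.004 * (rt - rt.mean()), 0.0)
        remembered = rng.random(n) < 1.0 / (1.0 + np.exp(-(0.3 + boost)))

        for cond in ("generation", "identity"):
            m = condition == cond
            r = np.corrcoef(np.log(rt[m]), remembered[m].astype(float))[0, 1]
            print(f"{cond:10s}: r(log naming RT, remembered) = {r:+.2f}")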
  • Zormpa, E., Brehm, L., Hoedemaker, R. S., & Meyer, A. S. (2019). The production effect and the generation effect improve memory in picture naming. Memory, 27(3), 340-352. doi:10.1080/09658211.2018.1510966.

    Abstract

    The production effect (better memory for words read aloud than words read silently) and the picture superiority effect (better memory for pictures than words) both improve item memory in a picture naming task (Fawcett, J. M., Quinlan, C. K., & Taylor, T. L. (2012). Interplay of the production and picture superiority effects: A signal detection analysis. Memory (Hove, England), 20(7), 655–666. doi:10.1080/09658211.2012.693510). Because picture naming requires coming up with an appropriate label, the generation effect (better memory for generated than read words) may contribute to the latter effect. In two forced-choice memory experiments, we tested the role of generation in a picture naming task on later recognition memory. In Experiment 1, participants named pictures silently or aloud with the correct name or an unreadable label superimposed. We observed a generation effect, a production effect, and an interaction between the two. In Experiment 2, unreliable labels were included to ensure full picture processing in all conditions. In this experiment, we observed a production and a generation effect but no interaction, implying the effects are dissociable. This research demonstrates the separable roles of generation and production in picture naming and their impact on memory. As such, it informs the link between memory and language production and has implications for memory asymmetries between language production and comprehension.

    Additional information

    pmem_a_1510966_sm9257.pdf
  • Zuidema, W., French, R. M., Alhama, R. G., Ellis, K., O'Donnell, T. J., Sainburg, T., & Gentner, T. Q. (2020). Five ways in which computational modeling can help advance cognitive science: Lessons from artificial grammar learning. Topics in Cognitive Science, 12(3), 925-941. doi:10.1111/tops.12474.

    Abstract

    There is a rich tradition of building computational models in cognitive science, but modeling, theoretical, and experimental research are not as tightly integrated as they could be. In this paper, we show that computational techniques—even simple ones that are straightforward to use—can greatly facilitate designing, implementing, and analyzing experiments, and generally help lift research to a new level. We focus on the domain of artificial grammar learning, and we give five concrete examples in this domain for (a) formalizing and clarifying theories, (b) generating stimuli, (c) visualization, (d) model selection, and (e) exploring the hypothesis space.
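    One of the uses listed in this abstract, generating artificial-grammar stimuli, is easy to make concrete. The sketch below is an editorial illustration, not code from the paper: it samples strings from a small Reber-style finite-state grammar, with states and symbols that are arbitrary assumptions.

        import random

        # Each state maps to a list of (symbol emitted, next state); None marks the accepting state.
        GRAMMAR = {
            0: [("T", 1), ("V", 2)],
            1: [("P", 1), ("T", 3)],
            2: [("X", 2), ("V", 3)],
            3: [("S", None)],
        }

        def generate_string(grammar, start=0):
            """Walk the grammar from the start state, emitting one symbol per transition."""
            state, out = start, []
            while state is not None:
                symbol, state = random.choice(grammar[state])
                out.append(symbol)
            return "".join(out)

        random.seed(0)
        print([generate_string(GRAMMAR) for _ in range(5)])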
  • Zuidema, W., & Fitz, H. (2019). Key issues and future directions: Models of human language and speech processing. In P. Hagoort (Ed.), Human language: From genes and brain to behavior (pp. 353-358). Cambridge, MA: MIT Press.
